Posts posted by vlaiv

  1. Ok, here are my findings:

    1. There is no light leak so you are ok with that.

    What I suspect is going on with the bayer pattern in the darks / bias subs is some sort of white balance applied in the drivers. I'm not familiar with the drivers of this particular camera, but if you have automatic white balance turned on in the drivers - you might want to turn that off.

    I suspect AWB because this camera has a 14-bit ADC - that means all values in a recorded raw sub should be divisible by 4 (16 bits total, 14 bits used - the remaining two bits set to 0, which is the same as the number being multiplied by 4). That is not the case with your bias and dark files - there are odd values as well as even. This can only happen if the values were changed after they were read out of the camera - an AWB process in the drivers, for example, could explain that. AWB could also explain the bayer pattern. Since there is some offset added to each pixel (pixels in a bias are not 0, but rather a positive value roughly equal for each pixel) - multiplying that by a different factor for each color will create different values. The values in the bayer matrix also point this way - blue is strongest, then red, and green is the weakest. If you look at the ASI294MC QE curves it is the other way around - green is strongest, red second and blue the weakest - which means the channels must have been multiplied by certain factors to "even them out", giving the pixel strengths we see in the bias/dark.
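    If you want to check the divisibility yourself, here is a minimal Python sketch (my own, not from any particular tool - the file name is a placeholder):

```python
# Check whether raw 14-bit data really is left-shifted into 16 bits: every pixel
# value in a bias/dark sub should then be divisible by 4.
import numpy as np
from astropy.io import fits

data = fits.getdata("bias_001.fits").astype(np.int64)
not_div_by_4 = np.count_nonzero(data % 4)
print(f"{not_div_by_4} of {data.size} pixels are not divisible by 4")
# A large count suggests the driver altered the values after readout (e.g. AWB).
```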

    In any case - I would recommend turning off auto white balance in the drivers. With it turned on, I can't judge whether an offset of 10 is a good value, because I can't see the actual read-out pixel values, only the WB-corrected ones.

    Another thing - AWB should not mess up your calibration and everything should work fine.

    2. Your subs do indeed work fine. Here is a comparison of one sub, properly calibrated versus raw without calibration:

    [image: the same sub calibrated (left) and uncalibrated (right), both binned 8x8]

    This is the same sub - the left one has been calibrated by (light - avg_dark) / (avg_flat - avg_flat_dark) while the right one is without calibration. Both subs were binned 8x8 to get enough SNR to show the features of the image. As you can see - you get a pretty decent sub with calibration - no vignetting, and it looks clean compared to the raw sub.
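    For reference, this is roughly what that calibration and binning looks like in Python (a minimal sketch, assuming master frames already exist - file names are placeholders):

```python
import numpy as np
from astropy.io import fits

light = fits.getdata("light.fits").astype(np.float64)
master_dark = fits.getdata("master_dark.fits").astype(np.float64)
master_flat = fits.getdata("master_flat.fits").astype(np.float64)
master_flat_dark = fits.getdata("master_flat_dark.fits").astype(np.float64)

flat = master_flat - master_flat_dark
flat /= np.mean(flat)                      # normalize flat so the ADU scale is preserved
calibrated = (light - master_dark) / flat  # (light - avg_dark) / (avg_flat - avg_flat_dark)

# Software bin 8x8 (average) to boost per-pixel SNR for inspection.
h, w = calibrated.shape
binned = calibrated[:h // 8 * 8, :w // 8 * 8].reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))
```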

    As you can see - another confirmation that the above is probably AWB: it does not affect calibration, and calibration indeed works fine.

    3. I would just recommend that in the future you use Gain 120, turn off AWB and check that the offset is a good value - and keep taking nice images :D

    HTH

    • Like 3
  2. Nice data - I believe there could be more in there. Maybe post a 32-bit tif?

    In any case - here is Gimp 2.10 quick processing:

    [image: quick Gimp 2.10 processing of the stacked M42 data]

    I'm not sure that flat fielding worked as it should - there is still vignetting in that image. But on the bright side - the Running Man is visible as well, and I think it can be pulled out with a bit more effort and a 32-bit tif version.

    • Like 2
  3. It's rather late over here so I'll be brief for now, but I'll take a closer look tomorrow morning.

    If you are seeing a bayer pattern in both bias and darks - it could be that you have a light leak.

    It might seem that capping the camera and being in a dark room is enough to rule out a light leak, but you need to understand that your camera has only an AR-coated window and no UV/IR cut filter on it. In fact, here is what the transmission of the anti-reflective window on the camera looks like:

    [image: transmission curve of the AR-coated camera window]

    It passes all wavelengths in the IR part of the spectrum (700nm and above). Another important thing to remember is that plastic is often transparent to IR. So just putting a plastic cover on the camera and being in a dark room might not be enough to prevent an IR leak - especially if there are heat sources near the camera.

    The best way to be sure you don't have any light or IR leak is the following: put the plastic cover on and then place the camera "face down" on a wooden table, or better yet - add another layer on top of the plastic cover by wrapping the camera in aluminium foil. I use the first approach with a wooden table, but some people do it like this:

    [image: camera wrapped in aluminium foil]

    I'll have a look at your subs tomorrow to see if I can figure out anything from them.

    Btw, for the ASI294 I would use gain 120 - as that puts the camera into its low read noise domain - and I would raise the offset until I get very nice histogram separation from the left side on my bias subs. I'm not sure what that should be with the ASI294, but with my ASI1600 I use an offset of 64.

    One way of doing it is to set the gain you are going to use, and then set the offset to a particular value - for example the offset of 10 you already used. Then shoot a bunch of bias subs and see whether there are pixels that are too low in value - something like 4 ADU or similar - and whether the histogram is well away from the left side. If that's not the case - raise the offset and repeat. Once you find a value you are happy with - prepare your darks with those settings (temperature, gain, offset and wanted duration) and use the same settings when you go out and gather the actual data - the lights.
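    A quick way to run that check on a bias sub, as a Python sketch (file name is a placeholder):

```python
import numpy as np
from astropy.io import fits

bias = fits.getdata("bias_test.fits").astype(np.float64)
print("min ADU:", bias.min())
print("pixels below 4 ADU:", np.count_nonzero(bias < 4))
print("mean / std:", bias.mean(), bias.std())
# If the minimum hugs zero or many pixels sit below a few ADU - raise the offset and repeat.
```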

    Don't use bias in calibration though - it is not needed - darks contain both the dark signal and the bias signal, so they alone will calibrate the lights properly (also, some CMOS sensors have issues with bias, and until you check your camera for bias issues it's best to avoid using it in calibration).

    • Like 3
  4. 47 minutes ago, pez_espada said:

    That would mean that a 6" F8 refractor and 6"F8 Newtonian are expected to perform around the same, I guess. To pickup just an example of the top of my head..

    If they have the same central obstruction, which is not the case, if they have the same transmission, which is not the case, if they have the same F/ratio, which here is the case, if they have the same surface quality and correction - strehl ratio - which might or might not be the case, and are affected the same by the atmosphere - then aperture rules :D.

    I support the notion that aperture rules, and I'm quite familiar with the idea that newtonians and mirrored systems in general have slightly lower light grasp than their aperture would suggest - as we showed above, that depends on the quality of the coatings and the size of the central obstruction.

    For what it's worth, a 6" F/8 Newtonian is going to blow away a 6" F/8 refractor of the same price class in almost every aspect. It will also be outclassed in almost every aspect by a high quality 6" F/8 ED/APO scope, but on the other hand - if you pay that much for the mirror and general construction of a newtonian - the gap to the ED/APO closes really fast (it is never going to be equal, but very close).

     

    • Like 1
  5. 6 minutes ago, Tommohawk said:

    OK. Currently for NB I use Gain 300 offset 50 (but will switch to 64) and do 300s subs. Sky here is approx class 4 Bortle. So maybe gain 139 and keep with 300s subs? 

    Also, whilst I'm busy picking your brain, what's your thoughts on the USB setting? 

    Could you check one of your NB subs for background sky levels - that should tell us what the deal is. The easiest way is to run stats on a selection without any nebulosity and check the median value (that takes care of any stars in the selection).

    I believe I have the USB setting at 64 as well. I set it to the same value as the offset - simply because I did not notice any difference in my setup when changing it and I did not want to go very high with it.

  6. 5 minutes ago, pez_espada said:

    So all that said I can confirm that for a reflector to be able to compete with a refractor, the former has to be at least 27-30% larger in aperture.

    The myth becomes truth. 

    Might be so for small apertures, but as aperture gets larger - differences get smaller.

    For example - we have seen that 130mm is equivalent to 112mm of clear aperture, so that is reduction of about 13.8% by diameter.

    If you do the same with, for example, an 8" F/5 newtonian, you will get a different result. It has a 26% CO, so the math is sqrt((101^2 - (101*0.26)^2) * pi * 0.91^2 / pi) * 2 = ~177.5mm of clear aperture. That is a reduction of about 12% by diameter.

    • Like 1
  7. Indeed - here is quick calculation:

    102mm vs 130mm

    102mm at about 98.5% transmission will be equal to 51^2 * pi * 0.985 = 8048.71mm2

    130mm with 32% Central obstruction and 91% reflectivity on two mirrors will be equal to (65^2 - (65 * 0.32)^2) * pi * 0.91^2 = 9866mm2

    Still larger and therefore more light gathering, but not 3cm larger in diameter - more like only 1cm larger, since 9866mm2 is equivalent to about 112-113mm of clear aperture (sqrt(9866 / pi) * 2).
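    The same arithmetic, wrapped in a small helper so you can plug in other scopes (my sketch, using the figures quoted above):

```python
import math

def clear_aperture(diameter_mm, co_fraction=0.0, reflectivity=1.0, surfaces=0):
    """Equivalent unobstructed aperture after central obstruction and coating losses."""
    r = diameter_mm / 2
    area = (r**2 - (r * co_fraction)**2) * math.pi * reflectivity**surfaces
    return 2 * math.sqrt(area / math.pi)

print(clear_aperture(130, co_fraction=0.32, reflectivity=0.91, surfaces=2))  # ~112 mm
print(clear_aperture(102, reflectivity=0.985, surfaces=1))                   # ~101 mm
print(clear_aperture(202, co_fraction=0.26, reflectivity=0.91, surfaces=2))  # ~177 mm (8" F/5)
```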

    Add to this that an F/5 scope has a much tighter critical focus zone - harder to obtain proper focus, especially if the scope has an imprecise single speed 1.25" focuser - the fact that the larger aperture will suffer more from seeing, and also that the strehl numbers of the two scopes can differ considerably. If the 5.1" newton is mass produced it will probably have a system strehl of about 0.8, while the F/11 refractor will probably be over 0.9.

    Put everything together and you get result that you have.

    • Like 1
  8. 12 minutes ago, Tommohawk said:

    The author is very switched on so it's hard to believe he's "got it wrong" but maybe as you say it's not primarily for DSO work and has some adaptations which aren't ideal.

    It's not really down to SharpCap as a piece of software. It works well for what it was originally intended. It has more to do with the fact that ZWO has two different drivers - native drivers and ASCOM drivers - and we can't be sure whether those drivers talk to the camera in the same way (set the same settings). The facts are:

    1. ASCOM drivers take much longer to download a frame than the native drivers. Native drivers can achieve 15+ FPS at full frame, but for me an ASCOM download lasts almost a full second (so 1 FPS or less).

    2. We see that subs coming from the native drivers and the ASCOM drivers differ in some way - the mean ADU level does not behave as one would expect if things were "simple". We simply can't avoid the conclusion that these two drivers somehow behave differently.

    On numerous occasions I've seen people having issues with SharpCap and native drivers when trying to do DSO imaging / long exposure. I can't tell whether that also happens with SharpCap and ASCOM drivers, or whether the multitude of options in SharpCap (like brightness / contrast / white balance / whatnot) is simply confusing people and that is why they run into issues.

    For that reason alone I recommend people stay away from DSO / long exposure imaging with SharpCap. Otherwise I think it is a really good piece of software and I recommend it for planetary and EEVA use.
     

    19 minutes ago, Tommohawk said:

    I did wonder if perhaps there was a light leak on the shorter sub. I have a metal lens cap + metal foil but because I'm just tinkering I don't have a dark hood/dark room as I normally would for darks.

    On the next session, when you do a full set of subs (flats, darks, flat darks and lights) - maybe post one of each so we can check whether everything is in order and whether there is a possible light leak. There is a set of "sanity checks" one can do on the data (histogram clipping, mean ADU levels, simple calibration of a single frame and looking for irregularities) to determine whether everything seems to be in order.

    20 minutes ago, Tommohawk said:

    BTW am I right in thinking you also keep with unity gain (139) for narrowband too? I've always had passable results with 300 gain, but maybe I should change.

    The only benefit that gain 300 has over gain 139 is lower read noise - ~1.2e vs ~1.7e. That is not a major difference - unity gain has only ~x1.42 higher read noise. The only impact of read noise is on the trade-off between many short subs and fewer longer subs. Since no one is doing single exposure images and we all stack - it simply becomes a matter of using a certain exposure length.

    The rule is simple - use an exposure length at which some other noise source becomes dominant over read noise. Let's say that sky/LP noise becomes dominant at 1 minute with 1.2e of read noise. What sort of exposure do you need to get the same result with 1.7e of read noise? We've seen that the difference is x1.4, and LP noise rises like the square root of the LP signal, which rises linearly with time. This means you need an x2 longer exposure (because sqrt(2) = ~1.41 = 1.7/1.2). So you only need to use 2 minute exposures at unity gain and you will get the same result. Using a minute and a half will get you about 90% of the way there, not 50% - again it is the story of diminishing returns, like sub duration in general - after a certain point the gains from going longer are just too small.
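    Put as a tiny calculation (a sketch using the numbers above):

```python
# To stay sky-noise limited, the sky signal must grow with the square of the read
# noise ratio, so the required exposure scales as (rn2 / rn1)**2.
rn_high_gain = 1.2    # e, read noise around gain 300 (figure quoted above)
rn_unity = 1.7        # e, read noise around gain 139
base_exposure = 60.0  # s, exposure where sky noise dominates at 1.2e read noise

scale = (rn_unity / rn_high_gain) ** 2
print(f"equivalent exposure at unity gain: {base_exposure * scale:.0f} s")  # ~120 s
```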

    I don't mind imaging at gain 139 for NB - just use longer exposure.

  9. @Tommohawk

    Here is what I think about it all:

    On 25/01/2020 at 19:24, Tommohawk said:

    So whatever I do with the brightness setting, when doing darks the min value indicated below is only 16. I've tried setting the brightness in Sharpcap, or the offset in NINA to the maximum which is 100, and still this value is only 16. So whats that all about? 

    My position on using SharpCap for DSO AP is quite clear - don't do it. Use NINA, or maybe SIPS or SGP Lite - these are all free to use and work well (in fact I have not tried NINA, and I stopped using SIPS once I ran into trouble with large files - it mistook my ASI1600 mono subs for color subs of a different size - maybe it was early drivers or maybe that particular version of SIPS, which has since been updated. In any case I use SGPro now, but I think any of the listed should be better than SharpCap).

    I'm not saying that SharpCap is not good - I'm just saying it should be used for other purposes - like planetary imaging and EEVA / live stacking.

    Here is example why it's not best option:

    [image: SharpCap statistics for two dark subs - one 60s, one 0.64s]

    We have two dark subs taken with the same settings except for exposure length. The first is one minute and the second is 0.64s. But look at the Mean value. There is simply no way that the one minute exposure has a lower mean value than the 0.64s one - and certainly not by 50%. It is obvious that the camera is working in a different regime for these two subs. SharpCap is probably putting the camera into a fast readout mode and that changes the way the camera works (maybe producing offset issues, offset instability or whatever).

    On 25/01/2020 at 19:24, Tommohawk said:

    Edit: I tried this initially with gain set at 300 which I normally use for narrowband, and the min value wont go over 16 no matter what the offset. BUT just tried it with Gain at 139, and then the min value is 432, with offset 100. However this is dependent on exposure length - with gain 139 and offset set to 100, I need minimum 8 seconds exposure to get the Min value to exceed 16.

    I still think you should use Gain 139 / Offset 64 for your subs. Indeed this sub has 12 pixels with a value of 16, but the histogram is well separated from the left side:

    [image: histogram of the posted 1-minute dark]

    Here is an equivalent sub from my ASI1600 (also a 1 minute exposure, gain 139 and offset 64):

    [image: histogram of a comparable ASI1600 dark, gain 139, offset 64]

    A very similar looking graph. The only difference is that mine has a min value of 224 while yours has a min value of 16.

    It could be that your camera has a few "cold" pixels. Those pixels have a really low value and you need to pump the gain up quite a bit to raise their value enough. Since you only have 12 of them (according to ImageJ and the histogram):

    [image: ImageJ histogram showing the 12 low-value pixels]

    I don't think you should worry too much about them. Dithering will sort this out even if they are dead pixels (and I don't think they are dead pixels, just cold, since the other sub shows that they can indeed hold a value larger than the "zero" / base value).

     

  10. 1 minute ago, AndyThilo said:

    Can I use WBPP? Do I just load them in with the flats? Or put them in as bias?

    I'm not familiar with particular PI scripts, so we should really wait for someone to answer that one.

    Flat darks are "darks for flats" - a master flat dark is created by stacking them and is then subtracted from the flats prior to their stacking (the same workflow as with regular subs, where you subtract regular darks).
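    As a rough Python sketch of that workflow (file patterns are placeholders):

```python
import numpy as np
from astropy.io import fits
from glob import glob

flat_darks = np.stack([fits.getdata(f).astype(np.float64) for f in sorted(glob("flatdark_*.fits"))])
flats = np.stack([fits.getdata(f).astype(np.float64) for f in sorted(glob("flat_*.fits"))])

master_flat_dark = np.mean(flat_darks, axis=0)           # stack the flat darks
master_flat = np.mean(flats - master_flat_dark, axis=0)  # subtract before stacking the flats
```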

  11. 1 minute ago, AndyThilo said:

    Conclusion, thank you @vlaiv, I will be deleting my whole dark library and making new ones :). And I really can't see how flats would improve this?

    Good to see that there was only light leak in the darks and that proper darks took care of everything.

    You now know that your flats are working properly. The only real issue with that flat panel is that it does not produce much of the red part of the spectrum, so your red histogram peak is quite low. It will not cause too many issues - it just takes a bit more to compensate for. Also, your color balance will be really thrown off if you don't normalize your flats prior to calibrating with them.

    • Like 1
  12. Just now, AndyThilo said:

    Well I’m truly lost 😂

    Don't be. Here is quick guide:

    1. You've probably taken another set of darks? If not - do it, but take every precaution to avoid a light leak. Take the camera off the scope, use the camera cover and aluminium foil - place the camera "face down" on a desk and do the darks.

    As far as I can see the deltaT is about 35C, so you need to do this somewhere without heating (a shed perhaps, or a basement, or ...) as you need the ambient temperature to be below 15C.

    Compare that with the darks you have already taken - see if you get same gradient / mean ADU level.

    2. Try calibrating the data you now have with the existing flats (these seem ok) and the new darks - just to see whether it works (I rather doubt it - if there was a light leak, it affected both darks and lights). If it does work properly - problem solved; if not - it is a sign of a light leak, which means you need to examine your setup and change some things.

    A light leak will usually come in through extension tubes, adapters, filter wheels or an OAG. It can be diagnosed by using a strong flashlight and shooting short darks - with the torch shining at different pieces of equipment. The highest mean ADU will tell you where the likely leak is.
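    A small sketch of how you might compare those test darks (file names and labels are placeholders):

```python
import numpy as np
from astropy.io import fits

tests = {
    "baseline (no torch)": "dark_baseline.fits",
    "torch at focuser": "dark_torch_focuser.fits",
    "torch at filter wheel": "dark_torch_fw.fits",
    "torch at OAG": "dark_torch_oag.fits",
}
for label, path in tests.items():
    print(label, np.mean(fits.getdata(path).astype(np.float64)))
# The spot with the highest mean ADU relative to the baseline is the likely leak.
```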

  13. 9 minutes ago, Physopto said:

    I might try a figure closer to that next time out. I am at The Galloway Star Camp next month. So a few more hours spent trying out different levels would not come amiss.

     

    The simplest way to test whether flats at different histogram peak levels are working as they should is to do a flat / flat calibration.

    Take sets at several peak values, like 33% (as the reference value), 50%, 75%, 80%, 90% - a couple of subs for each, plus a couple of flat darks for each exposure. Create master flats and then divide the master flats by each other (33%/50%, 33%/80%, etc) - you should get a uniform gray sub (with noise, obviously, but without any vignetting or dust shadows evident).
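    Here is a minimal sketch of that flat / flat check in Python (file names are placeholders):

```python
import numpy as np
from astropy.io import fits

flat_33 = fits.getdata("master_flat_33.fits").astype(np.float64)
flat_80 = fits.getdata("master_flat_80.fits").astype(np.float64)

ratio = flat_33 / flat_80
ratio /= np.median(ratio)                        # normalize so a perfect match sits at 1.0
print("std of normalized ratio:", np.std(ratio))
# A flat gray result (small std, no visible vignetting or dust shadows) means both
# peak levels calibrate the same way.
```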

    • Like 1
  14. 4 minutes ago, Physopto said:

    Hi Vlaiv

    Yes I can see your argument. I always use around 50% FWD for my CCD (QSI 683), ( what QSI recommended I believe). So I aim for around the 25,000e ish. It is a long time since I did any maths/statistics on shot and thermal noise. So I just aim for what seems to work. I would guess it may be different for the various makes of CCD depending on how they work. Things have changed greatly in the last 25 years or so since I did my first degree.

    All I was pointing out is where carastro may have seen or gotten the 33% figure from. I use Maxim DL but have never tried their auto routine so far. I get too little time to waste messing about with experimenting. I beleive in" if it ain't broke, then don't fix it!"

    Derek

    I figured you were just quoting a potential source of the 33% statement. On the other hand, I just wanted to point out that not all such things should be taken as accurate / set in stone without running logic checks first.

    This does not mean there isn't something else I have not taken into account that would indeed make the 33% recommendation the better option for some CCDs out there. From what I've seen - the figure of 80% works, and it works well (I use it) - and the logic behind taking flats supports it.

  15. It is certainly worth combining the data in a proper way.

    Unfortunately it is a bit complicated topic and DSS does not have option to combine data in such way.

    Just combining them as if they were subs of equal SNR can result in improvement, but it can also result in worse data. The actual result depends on how different the SNR is between the subs, and to make matters worse - there is no single SNR value per sub - each pixel in fact has its own SNR value.

    There are couple of things that you can do to address this issue:

    1. Stack better subs to one stack and then stack all subs to other stack - inspect stacks to determine which one is better looking - and use that one.

    2. If you have license of PixInsight - use that as it has weighted average stacking (not ideal, but better than equal weights stacking)

    3. Make two stacks - one of the good subs and one of the poor subs - and then try different weighted combinations of those two stacks (simple image arithmetic - 0.8 * good + 0.2 * bad - or other weights, and select the combination of weights that gives you the best result; see the sketch after this list).

    4. Use algorithm specifically designed to handle such cases. Have a look at this thread first:

    And if you are up to it - I can make you a small tutorial on how to use ImageJ and a plugin that I wrote to stack your data with that algorithm.
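    For option 3, a minimal sketch of the weighted blend (file names and the background patch are placeholders):

```python
import numpy as np
from astropy.io import fits

good = fits.getdata("stack_good.fits").astype(np.float64)
poor = fits.getdata("stack_poor.fits").astype(np.float64)

for w in (1.0, 0.8, 0.6, 0.5):
    combined = w * good + (1 - w) * poor
    background = combined[0:200, 0:200]   # pick a star-free background region
    print(f"weight {w}: background std = {np.std(background):.2f}")
# Keep the combination that gives the smoothest background / best looking result.
```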

     

    • Like 1
  16. In fact, I was wrong in my previous post - I did the math incorrectly.

    I used percentages instead of actual numbers (I figured it would be easier to understand) - but I should not have done so. SNR involves a quadratic relationship (noise is the square root of signal) and linearity is not preserved - hence percentages will not match the actual numbers. I'll just quickly redo the calculation with real numbers to show that this issue is even smaller than the percentages suggested.

    Let's again take a 15k FW. 80% of that is 12000e. Noise / one sigma is the square root of that - ~109.5e - and we have 3000e of headroom until saturation (15000 - 12000 = 3000e). This means we in fact have 3000 / 109.5 = ~27.4 sigma. The probability that any single pixel will saturate is not 1%, but closer to the following statement: "the Universe is simply not old enough for this event to ever have happened so far, given its probability and all the cameras in the world" :D
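    The same numbers, checked with the gaussian tail (a sketch of the calculation above):

```python
import math

full_well = 15000.0
signal = 0.80 * full_well            # 12000 e
sigma = math.sqrt(signal)            # ~109.5 e of shot noise
headroom_sigmas = (full_well - signal) / sigma
p_saturate = 0.5 * math.erfc(headroom_sigmas / math.sqrt(2))  # one-sided gaussian tail
print(headroom_sigmas, p_saturate)   # ~27.4 sigma, probability effectively zero
```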

    In fact, even if you go for a 95% histogram peak, you are unlikely to saturate with a real sensor - it is still more than a 6 sigma event.

  17. 13 hours ago, Physopto said:

    The target ADU selected should be about 33%  of the saturation level of your camera. This will give the most accurate and noise free raw flats which will result in the best master flat once stacked. Going too high can result in pixels outside the linear range of the CCD and too low can result in poor signal-to-noise in the flat.

    Ok, let's discuss this for a moment.

    It says that 33% of the saturation level will result in the most noise free raw flats. Is this statement true?

    The most noise free raw flat will be the one with the best SNR. With flats, the dominant type of noise is shot noise. Read noise in comparison is very small (even with old CCD cameras), and due to the usual exposure lengths involved, dark current noise is also very small. Therefore we can approximate the noise by the shot noise associated with the light signal.

    Let's now compare the SNR values of a typical CCD camera that has a 15k FW capacity: one flat at 33% and one flat at 50%. The signal in the 33% flat will on average be about 5000e. The associated shot noise is sqrt(5000) = ~70.7e and the overall SNR is therefore 5000 / sqrt(5000) = ~70.7. On the other hand, the 50% signal level will be 7500e, the associated noise ~86.6e, and the SNR therefore ~86.6.
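    Or as a quick check (shot noise only, numbers rounded as above):

```python
import math

full_well = 15000
for fraction in (0.33, 0.50, 0.80):
    signal = fraction * full_well
    snr = signal / math.sqrt(signal)  # shot-noise-limited SNR = sqrt(signal)
    print(f"{fraction:.0%} peak: signal ~{signal:.0f} e, SNR ~{snr:.1f}")
```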

    We clearly see an example where the above statement is not true. In fact, if shot noise is the dominant noise source - the higher signal always "wins" in SNR, and therefore a histogram peak higher than 33% will result in better SNR and a more noise free flat than one with a 33% peak.

    Now let's address the second part of that statement:

    13 hours ago, Physopto said:

    Going too high can result in pixels outside the linear range of the CCD

    This statement can be interpreted in two ways and we need to address both. The first case would be that the CCD sensor is not linear in the higher part of its range. I can't really argue this case except to say that in the time I've been doing astronomy, I have not heard of an imaging sensor that is not linear enough over its useful range.

    If you do search on linearity for your particular camera model, I'm pretty sure you will find a graph like this:

    [images: sensor linearity graphs for several cameras - the Atik 383L+, another CCD, and a couple of CMOS models]

    The second interpretation is that there could be some sort of saturation and clipping, so pixels register a non-linear response. This can happen due to shot noise, for example. Shot noise can cause the recorded value to be higher or lower than the actual signal level (that is why it is noise). In fact we know the magnitude of that noise, and we can calculate the probability that any particular pixel ends up above the saturation point of the sensor.

    Let's take the value I often recommend - 80% - and see how likely it is that any one pixel ends up above the saturation point. We know that one standard deviation is the square root of that number, or 8.95%. This means that roughly two thirds of pixels will be within 80% +/- 8.95%, and 95.5% of all pixel values will be within 80% +/- 17.9%. This is still within the 100% saturation point, as 80 + 17.9 = 97.9%. In fact, if we calculate it precisely, any single pixel has a 1.27% chance to saturate. In a stack of 20-30 flat subs that will produce negligible error. And this holds only if the flat is perfectly flat - which it never is - there is vignetting, so only the central part of the flat needs to be considered, as the other parts usually have lower values and are even less likely to saturate (disclaimer: I used the gaussian approximation of the poisson distribution, since large numbers are involved).

    Edit: I made the mistake of using percentages where a quadratic relationship is involved (things are not linear) - in fact the percentage is even lower - see my following post for details.

    So that part of the statement is certainly true - going too high will saturate - but as we see, even going as high as 80% will make very few pixels saturate (in reality far fewer than 1%) in a single sub, and the resulting error from stacking 20-30 subs will be minimal.

  18. 1 hour ago, AndyThilo said:

    That's the thing my darks do remove amp glow. I showed that in the image on my first post. Adding in the flats caused loads of issues. Maybe I'm not understanding it. The best processing method for me to get the cleanest images is manually in PI with no flats as below

    Darks polluted with a light leak will still remove amp glow - that is to be expected. In most cases a light leak acts like light pollution: the same way you record your target even though light pollution has been added, the dark records the amp glow, and so does the light - once you subtract the two, the amp glow cancels out.

    Light leak will not cancel out. That is the issue.

    I don't know what your master dark looks like, but the single dark that you posted has a gradient. If I do a simple subtraction of that dark from the light without doing anything else (no fiddling around with background subtraction, no flat calibration - nothing, just dark subtraction), I get this:

    [image: the posted light after subtracting the single dark - the gradient is now reversed]

    Now you have a gradient towards the opposite side compared to the dark. It is clear that dark subtraction caused the gradient from the dark to be transferred into the light. The amp glow is gone, and that is ok - both have the same amp glow - but the dark has this gradient that is not present in the light.

    A few things could be happening here that make the difference between my example and the one you got from PI.

    a) I'm doing calibration with a single dark. Maybe this particular dark is different from the other darks for some reason, or maybe each dark is different because of a different amount of light leak. You used an average, so any differences averaged out, while I used only the one that has a distinct gradient.

    b) You used background wipe in PI and PI removed this gradient in same way it would handle LP gradient.

    In any case dark is what is causing your flat calibration to fail. I can show you with a bit of math what is going on.

    Imagine you have only two pixels rather than a whole image (or maybe the left and right sides of the image, whichever is easier to picture). One received 70% of the light due to vignetting / dust shadow, while the other received 90% of the light, again due to vignetting / shadow / whatever.

    Now imagine that both of these pixels are background pixels that recorded just sky background. I say this because it means they ought to end up uniform in intensity - there should be no variation (let's leave LP gradients aside for the moment - this is just to understand how dark calibration impacts flat calibration).

    Let's say that the sky background level is 100e. This means the first pixel would record 70e and the second pixel 90e. We need them to have the same value in the end if our calibration is good (because we have an even sky background).

    These values are just light signal, but light frame also contains bias and dark signal (dark subs contain both as well). Let's say that dark signal is 20e.

    So what we record in our light frame would be 90e and 110e (70e+20e, 90e+20e). Now let's do the calibration to get an even background. Our flat will be 0.7 and 0.9 (because 70% and 90% of the light reach the sensor).

    Perfect case:

    ((90e, 110e) - (20e, 20e)) / (0.7, 0.9) = (70e, 90e) / (0.7, 0.9) = 70e / 0.7, 90e / 0.9 = 100, 100 - we have equal pixels, or uniform sky brightness. Calibration is good because we had proper dark value.

    Under correction case - the dark has a larger value than it should (because of a light leak, some additional electrons were accumulated):

    ((90e, 110e) - (30e, 30e)) / (0.7, 0.9) = (60e, 80e) / (0.7, 0.9) = 60e/ 0.7, 80e / 0.9 = 85.7143, 88.88888

    We no longer have a uniform sky background - calibration failed - and the first pixel still has a lower value than the second, although we used the proper flat (0.7, 0.9). Because the vignetting / dust shadow is still present, we call that under correction - the flat did not manage to fully correct the image - not because the flat is wrong, but because the dark was wrong - larger than it should be.

    Over correction case - the dark has a smaller value than it should (in reality it rarely happens exactly like that - it happens when the lights have a light leak and the darks have a lower value in comparison to the lights, but let's do the math anyway to show over correction happening):

    ((90e, 110e) - (10e, 10e)) / (0.7, 0.9) = (80e, 100e) / (0.7, 0.9) = 80e/ 0.7, 100e / 0.9 = 114.2857, 111.1111

    Again - no uniform sky background, but this time the first pixel is brighter than the second - an "inversion" happened, and what was darker in the uncalibrated image is now brighter, as if the flats corrected too much - we call that over correction.
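    The whole two-pixel example as a tiny script, if you want to play with the numbers:

```python
def calibrate(light, dark, flat):
    """Per-pixel (light - dark) / flat for a pair of pixels."""
    return [round((l - dark) / f, 1) for l, f in zip(light, flat)]

light = [90.0, 110.0]   # 70e / 90e of sky signal plus 20e of dark signal
flat = [0.7, 0.9]

print(calibrate(light, 20.0, flat))  # correct dark  -> [100.0, 100.0], uniform background
print(calibrate(light, 30.0, flat))  # dark too high -> [85.7, 88.9], under correction
print(calibrate(light, 10.0, flat))  # dark too low  -> [114.3, 111.1], over correction
```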

    -----------------------------

    Above was to show that perfect flats can still fail to do flat calibration if there are issues with either lights or darks.

    I believe that both your light and dark subs are polluted with light leak because of:

    1) The dark has a gradient. It is very unlikely that a dark sub would have such a gradient, and such a gradient is also missing from the light sub (yet everything that is in the dark should also exist in the light - amp glow for example - if it's in the dark sub, it will certainly be in the light sub, as it is a feature of the dark signal). This shows that the gradient is not a feature of the dark signal and is in fact "external" (an external signal can really only come from some sort of light / radiation).

    2) Your subs, when calibrated, show over correction - and that can happen if the darks are "less strong" than they should be (see above). Since it is highly unlikely that the dark current in the darks is less strong than in the lights (it could be if the cooling was not set to the same temperature - but from what I can tell that is not the case here), it must be that the lights are somehow stronger than they should be. This points to a light leak again: the lights contain some sort of external signal that did not come in through the telescope objective. Otherwise it would be corrected by the flats, because the flats describe how such signal behaves (how much it is attenuated).

    Hope this all makes sense.

    1 hour ago, AndyThilo said:

    One other thing i don't understand. Left is PI manual processing without flats. Right is exactly the same but with flats. The right is also the same as I get using BPP. Both are stretched using STF Autostretch. Nothing else.

    You have a very red result in your right image because your flat panel is giving off very blue light. It has already been mentioned that the red component of the flat is very low in value. This can happen due to one of two things: either the camera has very low QE in the red part of the spectrum (not the case), or the flat source produces light that has much less red in it than the other two channels (green and blue). Cool white light has this "feature" (with warm light it is the opposite - less blue, more red and green).

    Since you have a low red signal compared to the other two - flat fielding with such a flat will produce a very strong color imbalance. It's nothing that white balancing can't fix, or flat normalization (a process where each of the color peaks in your flats is normalized to 1 - that removes any color imbalance the flat panel produces).
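    A minimal sketch of that normalization for a bayered (OSC) master flat - assuming an RGGB pattern and a placeholder file name:

```python
import numpy as np
from astropy.io import fits

flat = fits.getdata("master_flat.fits").astype(np.float64)
normalized = flat.copy()
for dy, dx in ((0, 0), (0, 1), (1, 0), (1, 1)):              # the four bayer sub-channels
    channel = flat[dy::2, dx::2]
    normalized[dy::2, dx::2] = channel / np.median(channel)  # median ~ histogram peak
# All channels now peak around 1, so flat fielding no longer shifts the color balance.
```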
