
Pitch Black Skies

Members
  • Posts

    716
  • Joined

  • Last visited

Posts posted by Pitch Black Skies

  1. 8 hours ago, vlaiv said:

Seems that many people rely on this tool and, given that it's hosted / maintained by @FLO,

I think it would be wise to revisit the validity of the information presented by it.

I've been involved several times in discussions where people refer to the above tool and either accept very flawed advice offered by the tool, or question an otherwise sound setup because it is not in the "green zone" according to the tool.

    There are also several statements made on that page that are simply false and should be corrected.

     

    I used the tool to help choose my current camera.

What are the false statements and flawed advice that need to be corrected?

    I'm just a newcomer so would like to be more aware for going forward.

    Thanks

    • Like 1
  2. 23 hours ago, vlaiv said:

    No more brightness in the corners! After removing dark signal we have proper flattening of illumination.

    You're right, this is exactly what was happening!

    Foolish me for thinking I didn't need to use darks, lesson learned.

    How you explained it with the maths made it logical and very easy to grasp, thanks again.

I'll upload an image calibrated with just flats and dark flats, and then the same image with darks included, to show the difference it made.

  3. 1 hour ago, vlaiv said:

    Ok, so here is a quick break down of terms and what they represent and how to calibrate in different circumstances.

    We have dark signal and dark noise and bias signal and bias noise (which is always referred to as read noise).

Bias signal and read noise are not related in any obvious way. The signal is the bit you want to remove by calibration, while the noise bit is reduced by stacking (the signal to noise ratio is improved). In order for stacking to work, noise needs to be truly random. Bias signal is just an "offset" added to pixel values - it is not the same for every pixel, but in general it is pretty uniform as far as value goes - in modern CMOS cameras you can set this overall level using the offset parameter.

    Dark signal and dark signal noise are completely related. Dark signal is buildup of electrons due to thermal fluctuations in electronics. Dark signal noise is just randomness in this build up - similar to shot noise associated with light signal.

    There is strong relationship where dark signal noise has magnitude that is exactly square root of dark signal (expressed in electrons - in ADUs this does not hold).

When you shoot a bias exposure - it only contains that bias signal (and read noise, but we don't care about the noise bit here).

When you shoot a dark exposure - it contains both bias signal and dark current signal.

When you shoot your regular light exposure - it contains bias, dark and light signal.

The point of calibration is to remove all these signals and leave only the light signal (light gathered by the telescope - you don't care about the thermal properties of the camera or the offset added - and you don't want them as they mess things up).

    You can remove dark from your lights and that removes both dark current signal and bias signal as darks contain both.

    You can use both bias and darks when calibrating your lights - but there is really no point in doing so - as darks remove both (using bias in addition to darks won't mess things up as algorithm produces correct results). There is one special case where you can and need to use bias and that is in the case of dark scaling - which in general you should not do unless you know what you are doing (both knowing what you are doing and being sure your camera is capable of that). That is when you use different exposure time for darks and lights and you want to compensate by scaling darks.

    You can use bias only for calibrating lights - but that is not proper way to do things. It will work under two distinct cases:

    - using DSLR that internally subtracts dark current for you - and all that is left is bias. This is actually a good thing - newer DSLR cameras have some clever ways to measure dark current while exposing so dark is taken at the same temperature as light.

- your camera has exceptionally low dark current at the temperature used and your exposure is short enough that dark current is virtually 0 for the duration of the exposure. This is something that you really need to check for your setup, as two identical cameras can behave differently depending on scope and light pollution levels.

    For example ASI533 has 0.00013e/s/px - which is very low dark current. But if you expose for say 5 minutes, total value of dark current will be 0.039e. Now that might seem very low dark current - and in principle it is. It is only very small percent of background signal in most cases, but what if you use high resolution and you shoot in very dark skies and your background signal in exposure is something like 0.1e?

Then this dark current is no longer negligibly small compared to the background signal and you will still see over correction in the corners (if you have strong vignetting).

     

     

     

    Thank you so much for that detailed response. I will definitely be taking some notes from this for going forward.
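vlaiv's dark-current arithmetic is easy to reproduce. A quick sketch (the 0.00013 e/s/px rate is the figure quoted for the ASI533; the 0.1 e background is a hypothetical dark-sky value, not a measurement):

```python
# Dark current accumulated over a 5-minute sub, using the quoted
# ASI533 rate of 0.00013 e/s/px. The 0.1 e background is a made-up
# value for very dark skies at high resolution.
dark_rate_e_per_s = 0.00013
exposure_s = 5 * 60

total_dark_e = dark_rate_e_per_s * exposure_s
print(f"dark signal per pixel: {total_dark_e:.3f} e")   # ~0.039 e

background_e = 0.1  # hypothetical sky background per exposure
print(f"dark as fraction of background: {total_dark_e / background_e:.0%}")
```

At nearly 40% of the background in that scenario, the dark signal is clearly not negligible, which is the point being made.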

  4. 5 hours ago, Laurin Dave said:

Darks may not be needed with the ASI533 (because its dark current is so low) but Bias certainly is ...

I'm not sure; I've heard people say bias frames are calibration frames that should be skipped. Some have said they can actually make things worse.

  5. 5 hours ago, vlaiv said:

    Why do you say that?

    That is precisely why you have bright corners.

    Dark frames don't remove noise. No calibration frames remove noise. They all remove / correct some sort of signal.

    Dark frames remove dark current signal.

    Imagine following scenario.

You have illumination of 80% in the corners due to vignetting and 100% in the center of the frame.

    Your light sub gathers 100 electrons over whole field. Your dark current is 10e.

    In center you will therefore have 100 electrons (no vignetting) and 10e from dark current - so that is 110e

    In corner you will have 80 electrons from light (80% illumination) and 10e from dark current so that is 90e total

    You divide that with flat frame which is 1 for center and 0.8 for corners

    110 / 1 = 110

    90 / 0.8 = 112.5

What just happened? How come the corner is brighter than the center? All we did was apply a correct flat frame.

    Look now what happens when you remove dark:

    (110 - 10) / 1 = 100

(90 - 10) / 0.8 = 100

    No more brightness in the corners! After removing dark signal we have proper flattening of illumination.

     

    Awesome! I have got to try this soon!

I was under the illusion that darks weren't needed as the camera doesn't suffer from amp glow, and its read noise is supposedly very low.

    Maybe it's bias that take care of read noise/signal? And that's why bias aren't needed?

    So is it incorrect to say dark noise then? It should be dark current signal?

    I have the recommended beginners book 'Every Photon Counts', but would like something to help me progress a bit further. Is there anything you could recommend?
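The vignetting example vlaiv works through in the quote above translates directly into a small numeric sketch (the electron counts are the ones from his example, not from a real frame):

```python
# vlaiv's worked example: flat division with and without dark
# subtraction. 100 e of light in the center, 80 e in the corner
# (80% illumination), and 10 e of dark current everywhere.
dark_e = 10.0
recorded = {"center": 100.0 + dark_e, "corner": 80.0 + dark_e}
flat = {"center": 1.0, "corner": 0.8}

for spot in ("center", "corner"):
    flat_only = recorded[spot] / flat[spot]                # over-corrects
    dark_then_flat = (recorded[spot] - dark_e) / flat[spot]
    print(f"{spot}: flat only = {flat_only}, dark then flat = {dark_then_flat}")
# center: 110.0 vs 100.0; corner: 112.5 vs 100.0
```

Dividing before removing the dark signal lifts the corners above the center; subtracting the dark first flattens both to 100 e.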

  6. 4 hours ago, carastro said:

I always use around 23,000 ADU but definitely no higher than 30,000. 40,000 sounds too high and I wonder if the flats are over compensating. If the flats are not showing any dust when you take them, the chances are it's too high. I am not familiar with your camera (still using CCD), but 2-4 secs also sounds too long, unless you have dimmed the light right down. Mine are usually less than a sec.

    Carole 

I can see one dust mote. They were at around 23,000 and exactly 2 secs.

  7. 2 hours ago, Laurin Dave said:

    From what you've said it sounds like you're not using Darks and Flat Darks to calibrate your lights and flats, which would account for the bright corners dark middle, and which you need to use to calibrate correctly

    I don't use Darks, they aren't needed with the 533.

    I'm using Dark Flats.

How would not using them account for the bright corners? They only remove noise, correct?

  8. 46 minutes ago, michael8554 said:

    Yes, in what way aren't they working ?

    RGB values seem okay, about 50%.

    Camera is well off centre.

    Michael

     

    When integrated into the lights, the centre looks darker than the four corners.

    What would cause the off centre issue?

I think the focuser thumbscrew lock is pushing the focuser drawtube off axis. Would that cause it? What implications would this have on the lights, and should I continue to calibrate with it like this, as the flats should match the lights' orientation?

  9. 1 hour ago, Elp said:

    I think reading up on bits, bytes, binary etc will make for some interesting reading for you :). Really you don't need to understand it in great detail, I don't myself. In essence the higher the bit depth the greater range of colour/shades can be captured/displayed/defined. Generally yes, capture at 16 bit, you can always change down to 8 bit when post processing.

     

Sounds exhilarating 😅. Seriously though, if you know any good astrophotography beginner books you can recommend, I will check them out. I have Every Photon Counts. It's really good but, like any book, it doesn't cover everything.

  10. 4 hours ago, Elp said:

The 2 is essentially how many binary combinations there are to represent pixel info. A power of 1 gives 2 possible outcomes, either a 1 or a 0. At higher powers the number of binary combinations grows exponentially, therefore there is more bit depth, or range of colours which can be represented. If you look at a comparison chart of white to black represented at different bit depths on a well calibrated monitor, you'll visually see minimal banding/difference between shades of grey at 8 bit or more. In photography the general consensus is to shoot at a reasonable bit depth so the pixel data is retained and dynamic range is not lost.

RGB is generally processed at 8 bit or 16 bit. HDR images are at 32 bit, but 32-bit processing options in software are usually limited.

FITS is an image format which stores detailed pixel data - so detailed you can probably set up your next imaging session based on the data it stores - and there isn't really a limit to how much data it can hold. Astro cameras tend to have this option by default. A lot of software however cannot open or process FITS, so you'll need something like FITS Liberator to open and level stretch each image one at a time (it's why I recommend TIFF, because at least you can use the Windows default image viewer to scroll through your images).

PNG is another image format which can save quality files with minimal compression compared to JPG (which is the last format you want to use for astro images; it's okay for uploading to the internet or if you want to keep your file sizes reasonable).

     

    I see, and is 1 on and 2 off? Or would 1 be one shade of colour and 2 another shade of that colour?

    With 16 bit having the broader dynamic range, would it be superior to use that always instead of 8 bit?
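On the 2^n question asked here: the 2 is the two states of a single bit (on/off), and n bits give 2^n distinguishable levels per channel. R, G and B each carry their own n bits, which is why the 3 or 4 Bayer channels don't appear in the formula. A tiny sketch:

```python
# Number of distinguishable levels per channel at common bit depths.
# Each of R, G and B carries its own n bits, so the channel count
# doesn't enter the 2**n formula.
for bits in (1, 8, 12, 16):
    print(f"{bits:>2}-bit: {2 ** bits:>6} levels per channel")
# 1-bit: 2, 8-bit: 256, 12-bit: 4096, 16-bit: 65536
```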

  11. 5 hours ago, Rallemikken said:

When I do this, I line them up as a whole. There will be differences in the height and shape of each column; it's very rare to line them up perfectly. If you have a preview, both the image and the histogram update themselves as you drag the sliders, so it's just a matter of taste.

In Gimp I use Color -> Levels on the one overweight channel to do this; usually I just move the mid point. I have also had some luck with Color -> Components -> Channel Mixer.

Cheers, I've been lining them up as a whole like you said. Definitely looks better.

  12. There is a green cast to my images. I believe it's because there is an extra green in the Bayer matrix (RGGB).

Is the idea to always line up the separate RGB channels at the start of the histogram, in the area marked, to achieve a true colour balance?

[attached screenshot: histogram with the target area marked]

     

     

    This is as close as I can get. The lines will not come any closer together for me.

[attached screenshot: histogram with the channels aligned as closely as possible]

     

    I'm using Adobe Lightroom on my phone. Camera is ASI533MC Pro with UV/IR cut filter.
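What lining the channels up is doing can be shown with a toy sketch: shift (or scale) each channel so its background level matches the others, which removes the green bias an RGGB sensor tends to produce. This is an illustration of the idea, not what Lightroom actually does internally, and the median values below are made up rather than measured from the posted image:

```python
# Toy illustration of neutralising a green cast by matching channel
# backgrounds. The per-channel medians are hypothetical numbers, not
# measured from the posted image.
medians = {"R": 40.0, "G": 60.0, "B": 45.0}   # G highest: green cast
target = min(medians.values())                 # align to the lowest channel
offsets = {ch: m - target for ch, m in medians.items()}
balanced = {ch: m - offsets[ch] for ch, m in medians.items()}
print(offsets)    # G needs the largest shift
print(balanced)   # all channels now share the same background level
```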

  13. 6 hours ago, Elp said:

RAW16 is fine; it's 16-bit colour depth, which is fine. If you eventually downsample to JPG you will be at 8 bit. The higher the bit number, the more colour/pixel intensity values can be stored within each pixel, which means more dynamic/colour range, by a factor of 2 to the power of (n) bits in general terms. TIFF is also fine; as Onikkinen has said, SER will combine them into one video. I personally always prefer images as they will remain uncompressed and you can view each frame easily if you want; if the capture crashes midway you still have any images you captured already, whereas a video may corrupt.

    Probably a stupid question but why is it 2^n? What does 2 represent? Should it not be 3, as in RGB, or 4 as in RGGB?

    There is also an option for a FIT and PNG output. What do they mean?

Oh right, so it can be done with individual frames rather than video. I was thinking it could, as it is essentially one less task for AutoStakkert.

  14. Thanks vlaiv.

    Is the difference negligible?

    I initially had a 120MM Mini for my guidecam but exchanged it for the 224MC.

    I was under the impression it could double as a reasonably good planetary cam as well as being my guide cam.

    But maybe it was a pointless upgrade as the 533 can work just as well?
