Everything posted by vlaiv

  1. This one is easy, although not quite coherent. If you follow the graph from foot to nautical mile, you will conclude that: nautical mile = 10 cable = 10 x (100 fathom) = 10 x 100 x (2 x yard) = 10 x 100 x 2 x (3 foot) = 10 x 100 x 2 x 3 x foot = 6000 x foot, yet the direct conversion gives nautical mile = 6080 x foot. We have 80 feet (11520 poppy seeds) missing from our calculation - or about one shackle
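The chain above can be checked in a few lines (a quick sketch; the 12-poppy-seeds-per-inch conversion is my assumption, inferred from the 11520 figure in the post):

```python
# Quick sketch of the foot -> nautical mile chain from the post,
# compared against the direct definition of 6080 feet per nautical mile.
foot = 1
yard = 3 * foot
fathom = 2 * yard
cable = 100 * fathom
nmile_via_chain = 10 * cable        # following the graph: 6000 feet
nmile_direct = 6080 * foot          # the direct foot -> nautical mile edge

missing_feet = nmile_direct - nmile_via_chain      # 80 feet
# 12 inches per foot, 12 poppy seeds per inch (assumed from the 11520 figure)
missing_poppy_seeds = missing_feet * 12 * 12       # 11520
```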
  2. While we are on the subject, can anyone explain this graph:
  3. According to wiki (I had to look it up, since this is the first time I saw that English and Imperial units are different things), American units are a further evolution of English units. Therefore - not two steps behind, but two steps behind and one to the side https://en.wikipedia.org/wiki/Comparison_of_the_imperial_and_US_customary_measurement_systems
  4. I regularly bin my color guide camera with OAG since "base" sampling rate is somewhere around 0.48"/px and I don't need that much resolution for my guide system.
  5. Yes, other way around. It really depends on the mount used. Higher quality, better performing mounts don't need corrections as frequently. Many people use 1-2s exposures for the guide cycle. That is often fast, as seeing influence can be quite big on those scales. Better mounts tolerate a 4-8s guide cycle, while top level mounts can go tens of seconds between corrections - the mount simply stays on target that long. Guide exposure depends on several factors. One is seeing - you need a long enough exposure to average out seeing effects. Another is the quality of polar alignment. There is a tool that will calculate DEC drift rate depending on polar alignment error. In most cases, this rate is something like 1-2 arc seconds per minute or less. From that you can calculate guide exposure length - just choose the maximum DEC offset you will tolerate during a single guide exposure. Similarly, another factor, acting in RA, is periodic error. This can be as much as 30 arc seconds peak to peak in one worm cycle, or as low as a few arc seconds. Depending on how smooth it is (for example, a pure sine wave is probably the smoothest form you can have - but it is rarely so), you can again calculate the max drift rate in RA. Based on that and the wanted max correction, you can calculate guide exposure. Ideally you want longer guide exposures, as that means seeing effects are reduced, and your polar alignment is good enough and your periodic error low and smooth enough to allow them. Sometime in the future it may even happen that imaging exposures become shorter than guide exposures (less read noise and better mounts requiring corrections less often) - but that is easily handled by "summing" (stacking) multiple short imaging exposures to form a single longer guide exposure - thus still being able to both guide and image with a single camera.
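The DEC drift calculation described above can be sketched like this (the drift rate and tolerance are made-up example numbers, not measurements):

```python
# Max guide exposure from DEC drift rate (example numbers, not measured).
drift_rate_arcsec_per_min = 1.5   # from a polar alignment error tool
max_dec_offset_arcsec = 0.25      # drift we tolerate within one guide exposure

max_guide_exposure_s = max_dec_offset_arcsec / drift_rate_arcsec_per_min * 60
# comes out to roughly 10 seconds for these numbers
```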
  6. More affordable scopes mean more people can use larger aperture scopes and hence higher magnification? Knowledge on optimization of high power viewing is now readily available online?
  7. It is actually related to the read noise of the sensor. With modern low read noise sensors, we are approaching the moment when no additional guiding system will be required. As has been pointed out, at the moment there is a discrepancy between imaging exposure and guiding exposure - a couple of orders of magnitude of difference. Imaging exposures tend to be hundreds of seconds while guiding exposures are seconds. Since the difference between long exposure and short exposure image quality depends only on read noise (or more precisely - its relation to other noise sources), we still need to keep our imaging exposures at least a few minutes long. With the advent of very low read noise sensors this time will shrink, and at some point exposure lengths will match - then you'll be able to guide on the imaging exposure. In fact, something like that is already partially possible in what is called EEVA - short exposure live stacking where exposure lengths are tens of seconds. I don't think software is yet capable of doing it, but in practice EEVA software would benefit from such guiding, as it would be able to dither and improve results.
  8. Some sort of statistical analysis should be considered here. Besides the already mentioned variables, the observer plays a role as well. Some people tolerate a slightly soft but larger image, as it allows them to see detail more easily. Others prefer detail to be small, at the edge of resolving, but the overall image to be sharp. I see this in imaging also. There is not much difference across a x2 range of sampling resolution in terms of what can be seen in the image and perceived sharpness. That is x2 in "magnification". As for myself, I think I went through different phases. Before, it was about magnification / image size - it allowed me to see better, but as time went on I found that I prefer a lower magnification and perceptually sharper image. Btw, the actual magnification that allows you to see all there is to be seen is quite low. For 8" aperture it is less than x100. Everything above that just magnifies the image to make it easier to see, without additional detail being revealed.
  9. Hi and welcome to SGL. This is a rather interesting question - many are reluctant, or can't afford, to set aside such a large budget to start with. I think the best approach would probably be: - buy the best mount you can afford - start with a decent starter scope and OSC camera - maybe throw in guiding as well. A decent starter scope and OSC camera won't cost that much (about a third of the budget?) and can be sold easily when you feel ready for an upgrade. A good mount will save you trouble at the start, and also save you having to upgrade later. The scope should be in the up-to-600-700mm FL class, and the camera something like this: https://www.firstlightoptics.com/zwo-cameras/zwo-asi-294mc-pro-usb-30-cooled-colour-camera.html You'll need a laptop with that to capture images. My mount recommendation up until recently would have been the iOptron CEM60 (basic version) - but these are no longer being made and are replaced with the CEM70, which will be more expensive (not sure whether that will be justified or not).
  10. I would avoid x2 barlow with F/10 scope and 2.4um camera - it leads to oversampling, especially in NIR with longer wavelengths.
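To see why, here is a rough sketch using the common Nyquist-style rule of thumb F = 2 x pixel size / wavelength (an approximation, not an exact cutoff):

```python
def critical_f_ratio(pixel_um, wavelength_um):
    # Focal ratio at which the pixel scale matches the diffraction limit
    # (Nyquist-style rule of thumb: F = 2 * pixel / wavelength)
    return 2 * pixel_um / wavelength_um

pixel = 2.4  # um
f20 = 20.0   # an F/10 scope with a x2 barlow works at F/20
green = critical_f_ratio(pixel, 0.5)    # ~9.6  -> F/20 oversamples ~2x
nir = critical_f_ratio(pixel, 0.85)     # ~5.6  -> F/20 oversamples ~3.5x
```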
  11. I have noticed this as well. I think it is more about doing something that makes one happy rather than always focusing on what makes others happy. From time to time, I just start craving that - "this is what I want" fix. It is not necessarily "chase the better" gear kind of thing - it can be "papa's got a brand new toy" thing. It's not impulse buy thing either - a lot of thought and yearning go into it.
  12. Do you have a camera and lens combination that has known QE / attenuation and gain in terms of e/ADU (although that part you can determine)? A simple incandescent bulb with a diffusing panel could be used then - or a flat panel for that matter.
  13. I think the easiest way, and probably the most precise, is to use telescope and camera as a calibration source. The procedure is fairly simple - just shoot a piece of the sky - preferably away from the Milky Way, with a bright(ish) star in the FOV. The star should be bright enough to be in a catalog - with known magnitude - but not so bright that it saturates the sensor. Then it is a simple matter of doing photometry on that star, and measuring the background value / dividing by the sampling rate squared so that you get photon count per arc second squared (or just ADU per that unit). The ratio of the two - background ADU per arc second squared over star ADU count - gives you the basis for the magnitude calculation: take the star magnitude and subtract 2.5 x log10 of that ratio to get sky magnitude (per arc second squared). Compare that with your device reading in the same region to get one calibration point. Do multiple parts of the sky (different sky brightness values) to get a good calibration curve.
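The procedure can be sketched as follows (all numbers are hypothetical placeholders, and zero point / extinction corrections are ignored for simplicity):

```python
import math

# One calibration point from a single frame (hypothetical values).
star_mag = 9.0           # catalog magnitude of the reference star
star_adu = 250000.0      # background-subtracted ADU in the star aperture
bg_adu_per_px = 120.0    # median background ADU per pixel
sampling = 2.0           # arcsec per pixel

# background per arc second squared
bg_adu_per_arcsec2 = bg_adu_per_px / sampling**2

# m_sky = m_star - 2.5 * log10(background / star)
sky_mag = star_mag - 2.5 * math.log10(bg_adu_per_arcsec2 / star_adu)
# comes out around magnitude 18.8 per arcsec^2 for these numbers
```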
  14. I also thought that x2 the aperture in mm is some sort of magical number, but it is not. There is an actual number based on a bit more science, and it will surprise you - for an 8" scope it is about x94. Yep, you read that correctly - it is only x94 magnification. Let me explain. Most people with 20/20 vision can resolve a 1 arc minute feature (high contrast). The Rayleigh criterion (again, high contrast) for green light and 200mm aperture is 0.64" - arc seconds. How much do you need to magnify to make something that is 0.64" appear 1' large? That is rather simple: 60" / 0.64" = x93.75 - there you go. This basically means: if you use magnification below x94, you won't see all you could see, because you did not magnify enough - you are below the resolving power of the human eye. Going above x94 just makes things easier to see. At some point the image simply becomes too magnified and blurry - we call that the image "falling apart", and it depends on observer, scope and object being observed. I used x500 magnification on my 8" dob and it did not fall apart for me - it was dim and floaters were all over the place, but I could still observe. I now prefer to keep things up to x300, and often at x200. I've got 6.7mm and 11mm 82 degree ES and 5.5mm 62 degree ES as higher power eyepieces. I also have the x2.7 APM coma correcting barlow - but I noticed that I don't really like to use barlows if I can get the wanted magnification with an eyepiece alone.
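The arithmetic above generalizes to any aperture. A sketch (510nm is chosen so the Rayleigh limit for 200mm lands near the 0.64" quoted in the post):

```python
import math

def min_useful_magnification(aperture_mm, wavelength_nm=510.0,
                             eye_limit_arcsec=60.0):
    # Rayleigh criterion: theta = 1.22 * lambda / D, converted to arc seconds
    rayleigh_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    rayleigh_arcsec = math.degrees(rayleigh_rad) * 3600
    # magnification that blows the Rayleigh limit up to the eye's 1' limit
    return eye_limit_arcsec / rayleigh_arcsec

# for an 8" (200mm) scope this lands around x93-x94
```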
  15. You need to plate solve your image and get coordinates of your star. After that you can search online catalogs like Simbad http://simbad.u-strasbg.fr/simbad/ Alternatively, if you can do a bit of coding, you can use these catalogs: https://archive.stsci.edu/prepds/atlas-refcat2/
  16. New drivers usually require new set of calibration files.
  17. Lovely set of captures. For some reason - bottom of each image seems to have significant blur to it. Do you know the reason?
  18. Yes, of course I just downloaded the first one - it has about 86.5% light level in the corners due to vignetting - so about 13.5% light loss, not that much at all.
  19. That might not be as bad as it looks - it really depends on how much you stretch it. Can you post a single fits for one filter for inspection, so we can see the actual values in that flat? For example: These two are the same master flat - but stretched differently - in fact, light loss is only about 20% at the extreme corners.
  20. I see similar optical aberrations in all stars in the image, per channel. Red is most round. Green shows sagittal astigmatism. Blue shows an inward, coma-like tail. I'm not sure if that is aberration or a stacking artifact - maybe stars moved due to atmospheric influence between the first and last frame and that tail is a sigma clip artifact - or it could be a genuine thing. M106 Luminance top right corner: Top left corner: Bottom left corner: Bottom right corner: Same as green in the above animation that I made - the only significant thing that I see is sagittal astigmatism
  21. Not sure if it does - since all stars show in each channel - I would expect that they have enough signal in each band.
  22. In the meantime ... Ken why don't you try Deep Sky Stacker to see if it will do the same? As far as I know - it has rather "bendy" alignment model (sometimes it can really warp an image if it gets the stars wrong - so I guess it will try its best to compensate for any distortion made by lens or atmosphere).
  23. You have to take into account that the same algorithm stacked all three channels into their respective stacks - and therefore aligned subs for each color while stacking. Now, each sub was not taken at the same time, and atmospheric changes are possible between the first and last sub. Look at this crop and upsample (nearest neighbor - we can see pixel squares) and the animated gif made by "blinking" the three channels: We can see the distortion that you are talking about - all stars change shape slightly, being round or elongated, but large stars change position while smaller stars stay in the same place??? How can that be, if it is down to optics?
  24. It is a fairly simple explanation really, and one worth understanding. Noise adds like linearly independent vectors - which really means square root of sum of squares (like finding the length of a vector from its projections on X and Y - here X and Y are linearly independent: you can't express X with Y and vice versa). It is also the same as finding the hypotenuse of a right triangle. Here an image will help: The longer one side is with respect to the other, the smaller the difference between the hypotenuse and the longer side. It is up to you to define what "larger enough" means - and based on that you can define how much background signal you want. In this particular case we are putting read noise against LP noise as the two dominant noise sources - or rather, we are letting LP noise become the dominant noise source and observing how much larger it is than read noise. ASI1600 has read noise of about 1.7e at unity gain. My rule is - make LP noise x5 larger than read noise. It is a very "strict" rule - you can choose a x3 rule, which is more relaxed - but let's look at how each behaves: With the x5 rule we have 1.7e read noise and 8.5e LP noise. LP noise is the square root of LP signal, so LP signal is 8.5^2 = 72.25e. Since we are at unity gain, but ASI1600 uses 12 bits, we need to multiply that by 16 to get DN - that gives us 1156DN. But let's see, in percentage, how much read noise + LP noise is larger than LP noise alone: sqrt(1.7^2 + (5*1.7)^2) = 1.7 * sqrt(1 + 5^2) = 1.7 * sqrt(26) = ~1.7 * 5.099. Difference being 5.099 / 5 = ~1.0198, or about 2%. Only two percent of difference between sky noise and sky noise + read noise - hence read noise makes minimal impact. Let's do the math for the x3 rule: again we have sqrt(1 + 9) = sqrt(10) = ~3.1623, and 3.1623 / 3 = ~1.0541 - now we have a 5.41% difference between pure LP noise and LP + read noise.
Btw, the x3 rule will give us: (1.7*3)^2 * 16 = ~416DN. That is the point behind any particular number - how much difference you want read noise to make. I usually use the 2% difference rule - but you can say "I'm happy with 10% difference" - and use rather short exposures.
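The numbers above can be verified in a few lines (read noise and the x16 DN scaling are taken from the post itself):

```python
import math

READ_NOISE = 1.7   # e-, ASI1600 at unity gain
DN_SCALE = 16      # 12-bit ADC values scaled into 16-bit DN

def background_target(k):
    """Background target for the 'LP noise = k x read noise' rule."""
    lp_noise = k * READ_NOISE
    lp_signal = lp_noise ** 2               # shot noise: sigma = sqrt(signal)
    target_dn = lp_signal * DN_SCALE        # DN at unity gain (1 e/ADU)
    # fractional increase of total noise over pure LP noise
    excess = math.hypot(READ_NOISE, lp_noise) / lp_noise - 1
    return target_dn, excess

dn5, ex5 = background_target(5)    # 1156 DN, ~2% excess
dn3, ex3 = background_target(3)    # ~416 DN, ~5.4% excess
```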
  25. Do you have any idea how they arrived at that number? It is better to understand why a particular number is chosen than to just use it. Not really - that is not over exposure at all. You are right - you have shown areas of the image that are over exposed by using a bit of pixel math - mostly stars. You are also right that it will lead to color distortion in the stars that are saturated. There is a very simple procedure to remedy that - take just a few short exposures at the end of the session to use as "filler" subs for those over exposed stars. Since we are dealing with limited full well capacity in any case - at any gain - there will always be some stars that saturate the sensor for a given exposure - so it is better to just adopt an imaging style that overcomes that in all cases: a few short exposures. Is 5000DN background a bad thing? No, it is not - if subs are ok at that sub duration, overall SNR will improve for the same imaging time over short subs. It will be a very small improvement over the recommended value (I would say 1156DN is a "better" number), but there would still be some improvement. Only when read noise is 0 is there no difference.