Everything posted by vlaiv

  1. I would not dismiss that scope so easily. It is capable of showing quite a lot. Yes, the mount is really poor - but you do have a choice there: either get a decent mount like the AZ4, or if you have some DIY skills, make yourself a dob mount for it.
  2. That is very unlikely. TS in Germany lists this scope at around 4 kg (total weight is 11 kg and something like an EQ2 mount is 7 kg, so that makes sense). Also, the SW 130/900 Newtonian is listed at 4.2 kg - so it makes sense that the 114mm version is going to be lighter than that.
  3. Probably this in that price range: https://www.firstlightoptics.com/alt-azimuth-astronomy-mounts/skywatcher-az4-alt-az-mount.html It is rock stable for shorter scopes. Not sure how it will fare with a 900mm long tube - but it can't be worse than an EQ1/EQ2 class mount. The only drawback is that this mount is not ideal for viewing things near the zenith with longer scopes (the tube can hit the tripod legs), but if you are mostly interested in the planets and the Moon, this won't be an issue.
  4. It's not a good thing if you want to exploit the full well capacity (for any reason) that is available to you. If you have a 12-bit camera and the lowest gain you can set in HCG mode has an e/ADU of, say, 1.1 - the maximum number of recorded electrons will be 4096 * 1.1 = ~4500. It does not matter if your camera has 14K FWC - that won't be accessible to you. While I think that FWC is not important in stacking, there are certainly cases where one might want a large FWC - for example when stacking is not available due to the transient nature of the phenomena.
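A quick sketch of that arithmetic, using the example figures above (12-bit ADC, 1.1 e/ADU - illustrative values, not a spec for any particular camera):

# accessible full well when the ADC range is the limiting factor
adc_bits = 12
e_per_adu = 1.1                          # example lowest-gain HCG value from above
adc_levels = 2 ** adc_bits               # 4096
accessible_fwc = adc_levels * e_per_adu
print(accessible_fwc)                    # ~4506 e, no matter how large the sensor FWC is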
  5. We already have a measure of the strongest signal - that is full well capacity. The weakest signal that can be recorded depends on a host of things, and read noise is probably the one with the least impact. For astrophotography, such a measure is completely useless. You can control the whole process with: a) exposure length, b) total integration time. Say you have 6K full well capacity. Want to record a source that emits 6K per minute but avoid saturation? Expose for less than a minute (say half a minute). Want to record a large total signal of say 300K? Add up 100 such subs and you will accumulate 300K of signal (if each sub is 3K max). In any case, stacking solves the issue of the weakest signal too. Want to record something with a really weak signal? Stack enough subs to push its SNR above the detection threshold.

They are showing some sort of "read mode 0" and "read mode 1". QHY often just numbers readout modes without all the LCG/HCG or whatever they are called. I've seen some of their camera models with more than 2 readout modes - as many as 4 or maybe 5.

I'm not sure it is. For a long time a 16-bit ADC was the norm and people valued large FWC. When I started using the ASI1600, I happily accepted a 12-bit ADC and 4000e FWC. By controlling exposure length and number of exposures, any source can be recorded. 60K FWC is not much of an advantage really over say 4K. One might argue that with a large FWC stars won't saturate. My counter argument is that for any FWC there is enough difference in star brightness that some stars will saturate, and the best way to deal with that is not to increase FWC but rather to use exposures of different lengths based on what one images. Even using multiple exposure lengths of the same field and then combining the signal works well - and in fact it solves the issue of saturated stars, regardless of FWC and how bright a star ends up in the FOV. It just works for every scenario.
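A minimal sketch of the exposure-length / sub-count reasoning above, using the same example numbers (nothing camera specific):

fwc = 6000                                    # e, full well capacity
source_rate = 6000                            # e per minute - would just saturate a 1 minute sub
exposure_min = 0.5                            # half a minute keeps each sub well below full well
signal_per_sub = source_rate * exposure_min   # 3000 e
subs = 100
total_signal = signal_per_sub * subs          # 300,000 e accumulated by stacking
print(signal_per_sub, total_signal)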
  6. DR as defined is just the ratio of the highest signal recorded to the read noise. As a ratio of two numbers, you can make it big by making the numerator bigger or by reducing the denominator. In the case of HCG (if that is the low read noise version), we reduce the denominator. However, DR is simply useless as a measure of anything really, so I'm not sure why it is used. I'd be much obliged if anyone explained the usefulness of DR to me - maybe I'm missing something.
  7. To be honest, I don't really get the whole LCG, HCG thing (don't know which is which), but I'll try to answer the original question based on some QHY cameras which have both modes.

First let's address some important things. Dynamic range. This term, as it is used, is in my view completely useless. What does it represent? Maximum signal value divided by read noise? Why is that important? If we want to address daytime or night time surveillance use cases - why is read noise the important bit?

What would be a more sensible thing to call dynamic range? I'd say the ratio of the highest and lowest signals that can be distinguished. For a low signal to be distinguished we need something like SNR5 or some other measure - but whatever measure we introduce, shot noise will be dominant, and hence the actual strength of that signal will play the important part - not the read noise. In the above case, if we have FWC of 12000 and FWC of 4000 - it is clear which one will have the larger dynamic range even if the read noises are different. If we adopt SNR5 as the standard and have read noise 3 and 1 - we will have ~27.71e and ~25.96e as the limiting signals, so dynamic range will be 12000/27.71 = 433 in one case and 4000/25.96 = 154 in the other.

In astrophotography, dynamic range by either of the above definitions makes no sense at all, since we stack to improve SNR, so the final SNR and any sort of dynamic range will depend on total integration time and the number of stacked subs.

Now, why have two modes? Well, if we have only a single mode - like the green line above - then what is the point of having 12K FWC when we can only utilize something like 4500 with a 12-bit ADC? Or to phrase it differently - why add only a 12-bit ADC when a 14-bit ADC would solve the FWC issue? So we have a compromise in terms of price / complexity of the sensor.

The second thing is having low read noise at a low gain setting. At the low gain setting, system gain is ~3 e/ADU in the purple readout mode. With this mode we are reading the complete full well of 12K and putting it into the 0-4095 range of 12 bits. Observe what happens with a very low signal: 0, 1, 2 electrons will be read out as 0 ADU; 3, 4, 5 electrons will be read out as 1 ADU, and so on. The signal will have a certain level of "posterization" of values. Just take the ADU and convert it back to electrons - 0 ADU will give 0e, 1 ADU will give 3e, 2 ADU will give 6e and so on. We have introduced a "stepping" error which is very predictable and not random at all. Stacking won't help get rid of such an error. This error must be mixed with another error to make it more natural. This is where read noise steps in and makes things fuzzy enough. This is called noise shaping: https://en.wikipedia.org/wiki/Noise_shaping 1e of read noise is not enough to shape 3e of posterization - read noise needs to be higher than that. This is why we have higher read noise at higher e/ADU values (among other things, it is handy for this purpose).

In the end, to address your question: why would anyone use the purple readout mode over the green (even in astronomy)? Well, if one wants to simultaneously detect two signals with a large intensity difference that wouldn't otherwise fit in the green readout mode, and the thing being recorded is transient, so we can't use stacking of any sort - only a single exposure. In general, in astrophotography it is not very useful, so just use green mode - or do what @wimvb does and use the full well capacity (just mind the exposure length, as you are then in the high read noise domain).
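A small sketch of the "posterization" described above, assuming an idealized conversion at 3 e/ADU with no read noise:

e_per_adu = 3.0                              # assumed low-gain system gain from the example
for electrons in range(10):
    adu = int(electrons // e_per_adu)        # 0,1,2 e -> 0 ADU; 3,4,5 e -> 1 ADU; ...
    back_to_e = adu * e_per_adu              # converting back gives 0, 3, 6 ... e
    print(electrons, adu, back_to_e, electrons - back_to_e)
# the last column is a deterministic staircase error, not random noise,
# which is why stacking alone cannot average it away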
  8. He seems to be using a Canon DSLR in that video? Both Canon EF and Canon EF-S lens mounts have a 44mm flange focal distance and can easily adapt M42 lenses. I used a Helios 58mm F/2 lens on my Canon like that. This adapter solves it, for example: https://www.bhphotovideo.com/c/product/843885-REG/vello_la_cef_m42_lens_mount_adapter.html (note the absence of any optical elements, because the light path does not need to be extended).
  9. How did you figure that out? From what I can see for the Altair camera, there is a significant drop in FWC.

On a separate note, when low gain is applied, the following happens:
- one utilizes the full well capacity to the maximum (the actual full well capacity - not the value clamped by the ADC). That can be beneficial in some applications.
- if the ADC bit count is lower than needed, there will be quantization error involved. A certain level of read noise is needed to dither the data and make that quantization error less noticeable / look more like regular noise. For that reason it is OK, and even necessary, to have higher read noise levels at lower gain settings ("above" unity gain).
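A rough sketch of how read noise dithers that quantization error away, assuming a coarse 3 e/ADU conversion and a made-up 4e signal:

import random

e_per_adu = 3.0

def quantize(electrons):
    # idealized ADC: round to the nearest 3e step, then convert back to electrons
    return round(electrons / e_per_adu) * e_per_adu

true_signal = 4.0
print(quantize(true_signal))                     # always 3.0 - a fixed bias stacking cannot remove

# with ~2e of read noise acting as dither, the stacked average converges on the true value
subs = [quantize(true_signal + random.gauss(0, 2.0)) for _ in range(100000)]
print(sum(subs) / len(subs))                     # ~4.0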
  10. 1) Not possible without a special optical adapter. M42 lenses have a 45.46mm flange focal distance while the Nikon F-mount has 46.5mm, so you are about 1mm short. You won't be able to focus properly at infinity. When a lens has a longer flange focal distance than the camera, the solution is to add spacers, but when it has a shorter flange focal distance, one can only use special adapters with optical elements that move the focal plane of the lens further out. Something like this: https://www.bhphotovideo.com/c/product/995104-REG/fotodiox_m42_nk_pg_pro_nikon_f_mount_lens.html That one has a x1.4 corrector lens embedded - and thus changes the focal length of the lens as well. Btw - you would need a fully manual lens, as these sorts of adapters have no electrical connections, so no auto focus or automatic aperture control.

2) No way, or at least no sensible way. You would need an M42/T2 adapter and then a T2 to 1.25" adapter. These all need to have a shorter optical path than said 45.5mm, and then you stand a chance of focusing with the lens focusing mechanism. If the total path is longer - you won't be able to focus at infinity. It also depends on where the focal point of the 1.25" eyepiece is. There is no standard position, so some eyepieces will work and some might not. Your best bet is to try any particular eyepiece by holding it against the lens and seeing if you can get focus by moving the eyepiece slightly back and forth (requires steady hands).

3) Probably not, for the same reasons as above - issues with mounting and distance.

4) Dedicated astronomy cameras are the easiest to attach to a lens. You only need an M42/T2 adapter and some spacers. Astronomy cameras have small back focus, so there is plenty of "room" in those 45.5mm for adapters / spacers. Having said that, the SV305 is probably the worst choice of camera to pair with such a lens. It is a very small sensor with small pixels. Small sensor + longer FL of the lens means too narrow a field of view. Small pixels with any kind of lens means a blurry image - lenses are not optimized to work with small pixels and provide sharp stellar images. Your best bet is to bin the data to get larger pixels, but then you lose pixel count. You will end up with something like a 640x480 image - covering a small portion of the target.
  11. 1) That is the preferred way of using a barlow on a scope. You don't need a field flattener, as the central part of the field is usually quite flat - it is the corners that have issues. The barlow magnifies the image and you end up with only a small portion of the central field on the sensor, so there is really no need for a flattener as that part is really flat.

2) A 2" barlow is probably overkill for 99% of use cases - just use a normal 1.25" barlow.

3) You can either use a 1.25" nose piece that you insert into the barlow, or you can use T2 adapters and spacers if you happen to have a barlow with a detachable barlow element. The second option is the preferred way of using a barlow, as you can adjust the barlow to sensor distance. Barlow element magnification varies with its distance to the focal plane - and in the case of imaging this means the barlow / sensor distance. If you make this distance smaller, you will get less magnification from the barlow; if you increase it, magnification will increase. There is only one position where the barlow works "as prescribed" (like a x2 barlow). Ideally, you want to dial in the wanted magnification factor (it does not need to be x2) depending on pixel size. If you want to do planetary imaging (solar/lunar included) and want to experiment with lucky imaging, there is a simple formula to follow: the needed F/ratio of your setup is 4 x pixel size (unless you are using narrowband filters - then there is a different formula). This means that for the ASI533 and its 3.75um pixel size you need F/15 as the optimum F/ratio, while for the ASI178 and its 2.4um pixel size you only need F/9.6. Your scope is F/5.9, so you'll need a ~x2.5 barlow for the ASI533 and a ~x1.5 barlow for the ASI178. This is why using the barlow element and variable distance helps - you can dial in the needed F/ratio.

4) Sure you can use the ASI178 for lunar, but you won't need to bin x2. If you are planning on using some sort of NB filter to help with seeing (you can try Ha, or SII or even OIII, or something like the Baader Solar Continuum filter), then you need to use a slightly different formula for F/ratio: F/ratio = pixel_size * 2 / central_wavelength, where central_wavelength is the central wavelength of the filter used, in microns (same units as pixel size). If you want to use a Ha NB filter then the central wavelength will be 0.656um. For OIII it will be 0.5um and for Baader Solar Continuum it will be 0.54um. This does not make a big difference, but there is some difference: Ha F/ratio for the ASI178 is F/7.3, OIII F/ratio for the ASI178 is F/9.6, Solar Continuum F/ratio for the ASI178 is F/8.9. Just a note - the longer the wavelength, the less detail will be "available" (this is represented as a lower F/ratio needed), but the atmosphere will be more "stable" (atmospheric dispersion also depends on wavelength and longer wavelengths are less distorted / less bent), so it is a compromise. Most people use a Ha filter, but I think that Solar Continuum should give the best results for both solar (white light) and lunar.
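The two rules of thumb above fit in a few lines (pixel sizes and wavelengths are the values quoted in the post):

def f_ratio_broadband(pixel_um):
    # lucky imaging rule of thumb: F/ratio = 4 x pixel size
    return 4 * pixel_um

def f_ratio_filtered(pixel_um, wavelength_um):
    # narrowband / continuum variant: F/ratio = 2 x pixel size / central wavelength
    return 2 * pixel_um / wavelength_um

scope_f = 5.9
for camera, pixel_um in (("ASI533", 3.75), ("ASI178", 2.4)):
    target = f_ratio_broadband(pixel_um)
    print(camera, "F/%.1f" % target, "barlow ~x%.1f" % (target / scope_f))

print("Ha   F/%.1f" % f_ratio_filtered(2.4, 0.656))   # ~F/7.3
print("OIII F/%.1f" % f_ratio_filtered(2.4, 0.500))   # ~F/9.6
print("SC   F/%.1f" % f_ratio_filtered(2.4, 0.540))   # ~F/8.9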
  12. That depends on magnification used in each scope. If you match magnification to aperture - then it will look the same.
  13. There is no hard limit to what we can detect - no red line separating "above" and "below". https://en.wikipedia.org/wiki/Just-noticeable_difference With visual astronomy, you will often read reports of a target appearing and fading out of view. It becomes stably visible after a certain amount of time, as we build up a mental image of it - we sort of train our brain to see it. This is a combination of two effects - JND and the fact that we can "burn" a certain image into our brain.

Furthermore, besides JND there is the issue of sampling, the particle nature of light and the way our brain interprets things. We don't see a continuous signal although it might seem so to us. We see in pixels - or rather in special neurons that have evolved to sense light. These are finite in size and are arranged in an irregular grid. The particle nature of light means that light is not a continuous signal, but rather arrives in photons and thus has associated Poisson noise. We can detect very faint signals - just 7-9 photons strong. At that photon rate the noise will be very big, yet we never seem to see such noise. This is because of how our brain works in order for us to see something faint. Several criteria must be fulfilled for a sensation to happen: a few adjacent photo receptors must be triggered, and then the brain decides on some threshold to produce an actual signal - but denoised.

Then there is the matter of magnification. You need to increase magnification in order to start resolving the Airy disk of a source star. Up to a certain point a star is just a point source, and two point sources look the same to our eye - or rather have the same shape on our retina, which depends on the "optics" of our eye. The eye lens distorts a point of light just enough so it covers a few receptor cells in order to be detectable - and we see it as a point because our brain filters things. When we start increasing magnification, we reach the point where the Airy disk is no longer a point source and is resolved. This spreads the light over more surface and reduces the photon count per receptor. If a star is already at threshold visibility, a further increase of magnification will push it below that threshold (again, the threshold is not a clean line but about +/- 7% of intensity and depends on how long you stare at the thing).

All of this shows that you can't explain things with such a simple model as a constant cutoff point.
  14. Except for the fact that our vision does not work like that. There is no hard cutoff and our perception is not linear, so you can't equate linear intensity with our perception. What you have shown in that graph is more related to image processing, the setting of the white point and bloated stars than to the visual side of things.

As for SCTs being mushy - there are really two explanations: 1. collimation / thermal issues, 2. spherical aberration. The first is self explanatory, and the second depends both on the quality of the optics and on the primary to secondary distance. Most SCT designs focus by changing the distance of the primary to the secondary, and what sort of accessories are used will determine the amount of in/out focus required. The same SCT scope can become mushier if a longer optical path is used - like binoviewers or a different type of diagonal than what the scope was designed for. Even a barlow lens will alter the focus position.

SCTs have a spherical primary mirror and a rather large corrector plate at the front. If that corrector plate is not figured properly, there will be residual spherical aberration. Add to that a substantial central obstruction and the fact that it is an F/10 scope with a long focal length that makes high powers very accessible - and mushiness shows best at high powers. Sometimes it is hard to distinguish the real source of the mushiness (even seeing can be part of it), but mushiness itself is easy to identify - hence the numerous accounts of it.
  15. That is just artistic license. There is a lot of SNR in that image - you could really sharpen it up without introducing too much noise. Maybe even consider doing starless processing: separate nebulosity and stars with StarNet++ and process them separately. The image would benefit from taming the stars. It is a busy star field and since the image goes deep, there are plenty of stars. When stretching to show faint nebulosity, the stars also get stretched and show much more - and that makes the image even "busier".
  16. Can you run the math by me on that? What numbers did you use for bin 1 and bin 2?
  17. Not sure what you've noticed, but no - binning just means summing up photons. You can't suddenly get more photons - doing it at capture time or doing it afterwards, the result is the same. Binned pixel photons = sum of the individual pixels that are binned. QE does not improve.

I prefer to capture at native resolution and then apply any binning afterwards, and yes - I can decide on the bin factor based on the achieved resolution. For my gear it is usually x3, rarely x2 and sometimes x4.

Another benefit is that you can choose how you want to bin your data. You can bin it normally, but you don't have to. The fact that we are stacking multiple subs gives us the opportunity to do some fancy stuff. We can do a split bin instead of a regular bin. That just means that we split the pixels into separate subs instead of adding them up (they will be added / averaged in stacking anyway). This way we create 4 subs for each sub taken, and we end up with x4 more subs (as if we imaged for x4 longer - again, it is all the same stuff really). This helps with something called pixel blur - but that is very advanced to explain and not very noticeable in everyday use, so you don't have to worry about it.

In any case - yes, the only advantage of capture time binning (for CMOS sensors) is smaller file size and faster downloads (but CMOS cameras download subs in less than a second anyway).
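A minimal numpy sketch of regular binning versus split binning on one mono sub (assumes image dimensions divisible by 2):

import numpy as np

def bin2x2_sum(img):
    # regular 2x2 software bin: add up each 2x2 block of pixels
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

def split_bin2x2(img):
    # split bin: separate the 4 pixel positions of every 2x2 block
    # into 4 half-resolution subs, to be added / averaged in stacking
    return [img[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]

sub = np.random.poisson(100, size=(8, 8)).astype(float)
# stacking the 4 split subs collects exactly the same photons as regular binning
assert np.allclose(bin2x2_sum(sub), sum(split_bin2x2(sub)))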
  18. Sure, there are several types - but none of them works 100%.

The cheapest is a Wratten #8 yellow filter. It gives a distinct yellow cast - nothing you can do about that for visual, but you can apply colour correction to an image (like a white balance adjustment) to get a better image. Then there are the Baader Semi APO, Fringe Killer and Contrast Booster. Each of them reduces violet bloat to some extent - both visually and for imaging. You can find reviews for each of them online - and even some comparisons, like this one:

Then there is the aperture mask. An aperture mask works very well for planetary observation. It works for DSO and astrophotography too - but it cuts too much light, so unless you know what you are doing, stick to planetary use with an aperture mask. An aperture mask is simply a mask with a smaller circular opening in the center. It restricts light to only the central part of the lens and blocks the outer regions, which bend the light the most and cause the most issues with chromatic aberration.

You have a 150mm / 1200mm FL scope. Here is a good guide for chromatic aberration:

You want light green for visual and dark green for astrophotography to have minimal levels of CA. Your scope has a CA index of 1.33 - which is rather poor. If you make an aperture mask of 100mm, that will give you a 4" F/12 scope (100mm aperture and 1200mm FL, so 1200/100 = F/12). That is already at a CA index of 3 - or in the light green. With an 80mm aperture mask you'll have a ~3" scope at F/15 - which gives you a CA index of 5 - that is deep green.

You can even combine things - filter + aperture mask. I made the following image using a very fast F/5 102mm achromatic scope (loads of colour), a Wratten #8 filter and an aperture mask. Some stars show a very slight sign of a violet edge, but overall not many people would think it was captured with an F/5 achromatic scope.
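A quick sketch of the aperture mask arithmetic, assuming the CA index used here is simply the focal ratio divided by the aperture in inches (an assumption on my part - it reproduces the figures above to within rounding):

def ca_index(aperture_mm, focal_length_mm):
    f_ratio = focal_length_mm / aperture_mm
    aperture_inches = aperture_mm / 25.4
    return f_ratio / aperture_inches

print(ca_index(150, 1200))   # ~1.4 - full aperture, rather poor
print(ca_index(100, 1200))   # ~3.0 - 100mm mask, light green
print(ca_index(80, 1200))    # ~4.8 - 80mm mask, deep green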
  19. This is quite normal and to be expected. You are using an achromatic refractor telescope. It can only bring two wavelengths of light to focus - the others will be out of focus. This usually results in a violet "halo" around objects. This is why such a scope is not recommended for astrophotography (the camera is much more sensitive; while you can see it visually at high magnification, it usually is not an issue at low to medium powers, but it will always be an issue in photos).
  20. I think that you probably should not read it like that. What it should really say is either: - collects x4 more photons per pixel for the same exposure length (either per sub or in total - both are true), or - needs x4 less time to achieve the same target SNR. It is a formula for "speed" and not for how much signal is accumulated over time. It just happens that "speed" is related to signal per pixel (not overall / total signal - that depends on aperture size alone). Nothing really magical is happening here. It is all very rational if we use the right words to explain what is going on. If we bin 2x2, we get a x4 larger pixel - so yes, it is completely rational that a pixel x4 larger by surface collects x4 more light if the aperture stays the same and the total number of photons that hits the sensor does not change. The larger pixel will simply scoop up a larger share of those photons.
  21. You are almost there. The number of captured photons is very tightly related to the achieved SNR. Binning at capture time does not make you capture more photons - you would need a larger aperture for that. What it does (same as a larger pixel) is rearrange how those captured photons are distributed - how many of them are "in each pixel". However, that does not change whether you sum them after measuring or before measuring.

Imagine you have four buckets containing golf balls. In one case, you take your four buckets and pour all of them into one large bucket and then count the golf balls (binning at capture time). In the other case, you count the balls in each bucket and then add them up on paper. Are you going to end up with a different result? No - you'll count the same total number of golf balls. The way you count them is irrelevant.

Here comes the important bit. I mentioned that the number of photons is related to SNR. The larger the number of photons, the larger the SNR, and there is a mathematical relationship. It is exactly the same if you do the following:
- capture 100 photons in a single pixel (large pixel)
- capture 25 photons in each of 4 pixels and then "pour" them into a single bucket and count them - again 100 photons - same SNR
- capture 25 photons in each of 4 pixels and then mathematically add those numbers - again 100 photons and same SNR
- expose for x4 longer with a single small pixel - you capture 100 photons - again same SNR
- capture 25 photons in each of 4 smaller pixels and mathematically average those - here the resulting number will be 25, but that is because you only changed the "unit" - you have 25 "4-photon" units - and the same SNR (SNR does not change if you divide both signal and noise by 4 when you average 4 samples - divide both numerator and denominator by the same number and the fraction remains the same).

That is why stacking works - it is like adding exposures together to get the same number of photons as would be captured with one single long exposure. (The above is strictly true if there are no other noise sources except shot noise; the SNR part remains the same, but stacking won't be the same as one long exposure when read noise is non zero - that is another topic.)
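A small numerical check of the bucket analogy, assuming pure shot (Poisson) noise and no read noise:

import numpy as np

rng = np.random.default_rng(0)
trials = 200000
per_small_pixel = 25                   # expected photons in each of the 4 small pixels

# one large pixel (or one x4 longer exposure): expect 100 photons
large = rng.poisson(4 * per_small_pixel, trials)
# four small pixels read out separately, then summed "on paper"
summed = rng.poisson(per_small_pixel, (trials, 4)).sum(axis=1)

for label, x in (("large pixel", large), ("4 pixels summed", summed)):
    print(label, round(x.mean() / x.std(), 2))   # both ~10, i.e. sqrt(100)
# averaging instead of summing rescales signal and noise together, so SNR is unchanged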
  22. Binned data can have the same brightness as non-binned data, or it can be brighter than non-binned data - it depends on how you do the binning. If you use average, it will keep the same brightness. If you use sum, it will be "brighter". It will however improve SNR in both cases. Remember - brightness is just something you assign to a numerical value. You can say my image has a pixel value of 100 units, or you can say my image has a pixel value of 1 unit per second, or you can say my image has a value of 60 units per minute. Those are all the same thing even if they have different numerical values assigned. Similarly, by adjusting the white and black point you say: this 60 units per minute will be the brightest part of the image. Since you can't separate noise from actual pixel values - if they are close and you make the signal bright, you will make the noise bright as well and it will be seen in the image. This is why signal to noise ratio is the most important thing. It lets you set the signal brightness without making the noise bright enough to be seen. Hope that makes sense.
  23. It is, but with CMOS sensors it does not matter if you bin during capture or later in processing - the result is the same. I know this might be a bit counterintuitive, but if we put it in the right context it will make sense. Binning is nothing more than stacking applied "horizontally". With regular stacking we average a number of captured subs - meaning pixels in each sub at certain coordinates, say 201, 136, get averaged between subs. The key here is the average (or sum) of pixel values. The same thing happens with binning, except the pixels are not in different subs at the same position - they are in the same sub at different positions. We average / sum groups of pixels. This of course has the drawback of doing something to resolution / total number of pixels - but if you are over sampling or generally don't care about the finest detail, it will be perfectly fine. Stacking is done after capture, not during. Similarly, binning can be done after capture and it will produce the same effect. You can reduce your imaging time if you plan to bin in software later. Better yet - look at it this way: you image for the time you have at your disposal, and binning will just improve the SNR that you are able to achieve in that time. If you don't bin in software and leave it at bin 1, it will have the corresponding SNR, but if you bin x2 you will improve SNR by a factor of x2. If you bin x3, you will improve SNR by a factor of 3.
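A rough numpy sketch of software binning improving per-pixel SNR by the bin factor (shot-noise-limited synthetic data, no read noise):

import numpy as np

rng = np.random.default_rng(1)
img = rng.poisson(50, (900, 900)).astype(float)   # fake flat sub, ~50 photons per pixel

def software_bin(img, n):
    # n x n software bin: sum each n x n block of pixels
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).sum(axis=(1, 3))

for n in (1, 2, 3):
    b = software_bin(img, n)
    print("bin x%d  SNR per pixel ~ %.1f" % (n, b.mean() / b.std()))
# prints roughly 7.1, 14.1, 21.2 - SNR scales with the bin factor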