Everything posted by vlaiv

  1. Stellarium shows a very nice Iridium flare from my location as a marker to start observing, and it indeed looks lovely in my new Mak102 with the ES62 5.5mm.
  2. Fast download and power could be causes of issues with a CCD sensor - the above is CMOS and should not be susceptible to either. It is designed to give really fast download rates - like 30fps - and it does not have a separate ADC that can suffer from power fluctuations; each pixel has its own ADC unit.
  3. Looks like an offset issue to me. Can you take one of your darks and look at its histogram? There might be clipping to the left. If there is, or if the histogram is too close to the minimum value, can you increase the offset in your camera driver options?
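
A minimal Python sketch of that check, assuming the dark is saved as a FITS file and astropy/numpy are available (the filename is hypothetical):

```python
# Check a dark frame for clipping at the low end of the histogram.
import numpy as np
from astropy.io import fits

dark = fits.getdata("dark_001.fits").astype(float)   # hypothetical file name

min_adu = dark.min()
frac_at_min = np.count_nonzero(dark == min_adu) / dark.size   # pixels piled up at the minimum

print(f"min ADU: {min_adu}, median: {np.median(dark):.1f}, "
      f"pixels at minimum: {frac_at_min:.2%}")
# If a noticeable fraction of pixels sits at the minimum value, or the median is
# only a few ADU above it, raising the offset in the camera driver should help.
```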
  4. I seem to understand this topic a bit (not this actual topic - but deconvolution, PSFs and image restoration) and I can tell you that you are on to something. Not sure what your actual approach is, but yes:
     1. you can extract the PSF from the image, and a stack of images will improve SNR in the PSF
     2. you can use the PSF to reduce the noise
     3. noise reduction and sharpening can be part of a single filter
     Let's briefly address these points:
     1. PSF - not sure if you should attempt to use an extracted PSF for this application, or just assume a certain analytical form of PSF - like a Gaussian. The actual PSF of a star will depend on:
     - position in the field (take a Newtonian scope as the obvious example - coma in the corners)
     - spectral type of the star that produced it (take an achromatic refractor as the obvious example, but also note that it can happen with perfectly corrected scopes, as shorter wavelengths "bend" more than longer ones)
     I think that taking actual PSFs of stars in the image can be useful when trying to correct a sub that has guiding / tracking issues. That way you can get a blur kernel that is common across the respective stars and deconvolve to correct it. I've done something similar some time ago, here are the details:
     Original image (excuse the processing at the time): 8" F/6 Newtonian on an HEQ5 without guiding. PE is obvious and produces elongation in RA.
     After correction: stars are nicer but SNR suffered.
     Red channel kernels that I got by taking 5 different stars and "comparing" them to a Gaussian shape. Actual extraction was done via deconvolution (convolution in the spatial domain is multiplication in the frequency domain, so if a*b = c then a = c/b but also b = c/a, and therefore you can get the blur kernel by deconvolving a star with a "perfect" star profile - which can be synthetic or extracted from subs that don't have trailing).
     Comparison of the effects of deconvolution on the star profile.
     Sorry about this digression - I just wanted to point out that for filtering you can use synthetic profiles and don't need to extract true profiles, although true profiles have their use, but it's best if they are used against the same star rather than assumed to be the same across the image.
     2. This is the key point - the PSF introduces a level of correlation between pixel values and that can be exploited in denoising in different ways.
     3. Take for example the approach that I've come up with: use sinc filtering, or rather a windowed sinc (for example the Lanczos filter), to decompose the original image into layers of different frequency components. Deconvolution, or rather frequency restoration in this case, would consist of "boosting" high frequencies that got attenuated by blurring. The opposite of that is killing high frequencies that are due to noise only. How can you distinguish the two? There is a third component in all of that - the use of SNR. Each astronomical image begins as a stack of subs. Let's take a simple example - a regular average stack. We take the average value to be the final value of the image, but we can also take the standard deviation of each pixel. From that and the number of stacked subs we can get a "noise" estimation. After you remove the background signal from the image (wipe background / remove LP offset) and divide the two, you will get SNR per pixel. We can use that value to determine if we should "boost" high frequencies (high SNR) or lower them (low SNR).
     Back to point two - you can use the PSF to do denoising in a way similar to how Lucy-Richardson deconvolution works - it is based on a Bayesian framework and uses knowledge of the PSF to perform deconvolution. You can again take the above SNR to estimate the signal value and the uncertainty in it (assume some distribution, but a different one than in LR, because you have a stack now and it is not simple Poisson + Gaussian - shot noise + read noise), and you have a probability associated with the PSF. In any case - it is a valid point to use PSF characteristics to both denoise and deconvolve an image (and it can be done at the same time as well).
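
Not from the original post, but here is a minimal numpy sketch of that "b = c/a" extraction: a synthetic Gaussian stands in for the "perfect" star, the trailed star is simulated by smearing it, and the blur kernel is recovered by regularized division in the frequency domain. Patch size, sigma and the regularization epsilon are illustrative assumptions.

```python
import numpy as np

def gaussian_star(size, sigma):
    """Synthetic 'perfect' star profile, centered in the patch."""
    y, x = np.mgrid[:size, :size] - size // 2
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

def extract_kernel(trailed_patch, reference_patch, eps=1e-3):
    """Recover blur kernel: divide spectra (Wiener-style regularized)."""
    C = np.fft.fft2(trailed_patch)
    A = np.fft.fft2(reference_patch)
    B = C * np.conj(A) / (np.abs(A) ** 2 + eps)
    kernel = np.fft.fftshift(np.real(np.fft.ifft2(B)))
    kernel = np.clip(kernel, 0, None)          # keep it physical
    return kernel / kernel.sum()

# Example: smear a Gaussian star along one axis to mimic RA trailing.
ref = gaussian_star(64, sigma=2.0)
trail = sum(np.roll(ref, s, axis=1) for s in range(-3, 4))
trail /= trail.sum()

kernel = extract_kernel(trail, ref)
# Central row of the recovered (band-limited) kernel shows the trail shape.
print(kernel.shape, kernel[32, 28:37].round(3))
```

In practice the trailed patch would be cut out around a real star in the sub, and the reference either synthetic (as here) or taken from a sub without trailing.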
  5. You think that simple "don't" will keep us at bay?
  6. Not sure if that is anything conclusive, as GSO makes these scopes for those who order them under their own brand - like TS, Altair Astro and iOptron. Maybe they have options when ordering a batch of scopes - "please tick the box next to the wanted mirror coating type: a) 96% b) 99%"
  7. My guess is that they simply put existing products / manufacturing lines together to make a new product and it worked well. People expect "fast" systems for imaging, and their RC line, due to its large central obstruction, is primarily considered an imaging instrument - so they utilized the F/8 design. My guess is that they have machine manufacturing of mirrors and that these machines can be set up for a certain "profile of the curve" - stronger curvature probably requires different tools or something like that. They have hyperbolic secondaries in the RC line and parabolic primaries for their Newtonian line. I just think they put the two together to make the new Cass line. A cost-effective way to get a new product out. Designing a new F/12 RC line would probably require a change of tooling for the machines, as it would require slower curves on the hyperbolic mirrors.
  8. All of the RC range is said to have 99% dielectric coatings on both primary and secondary; there is a graph of reflectivity included as well on TS - which leads me to believe that it is a genuine claim. These CC scopes share quite a bit with the RC line - same/similar tube, same focuser, both have a hyperbolic secondary, and prices are about the same (a bit higher for the RC). It's not far-fetched that the mirrors have been coated to 99% with dielectric coatings as well.
  9. Don't know, I would not label it as a one trick pony. It scores rather high on my "quintessential do-it-all beginner scope" list. You know, when people come and ask - is there a scope that can do it all - a bit of DSO observing, a bit of planetary observing, but I also want to be able to take pictures of those planets and the Moon, and you know those nice looking colorful images of galaxies - want those as well. Btw, I live in the center of a major city and the sky is really bright, but regardless I want to see it and don't mind using gadgets - EEVA. And yes, my budget is limited to ...
     - it's relatively cheap (bested only by the Newtonian in the 6" class, and not by much - about 50% or so)
     - it's compact and light(ish) - can be carried by an EQ3 / EQ5 class mount (which again reduces the price of the mount to fit within budget constraints)
     - it is a 6" scope - meaning rather large aperture - good for DSO and planets (light gathering, resolving power)
     - it has about a 40mm fully illuminated field - it can use 2" eyepieces with a large field stop, so although the focal length is long it is not strictly a narrow field scope. It can almost frame M45 with the GSO Superview 38mm EP
     - again, the same illuminated field and slow F/ratio mean that it will give enough FOV for imaging
     - for planets - it's obviously good for both imaging and observing - with eyepiece projection; I think it is also good for EEVA (something that I'll test out with an even slower F/13 scope, hopefully soon).
  10. In any case - that just makes things "worse" - fewer photons reach the eye, yet we are able to see the background illuminated for some reason, although the above quoted experiment says subjects were able to detect much higher levels only something like 60% of the time.
  11. I doubt it. It will never be only shadow. One side and the bottom will be in shadow, but the opposite side will be in sunlight (same as craters - one side is always in sunlight). I think that the sunlit side is going to be much brighter relative to the surroundings than the shadow is darker than them (the Sun is a really strong source of light), and hence it is more likely to be seen as a bright line against the grey surroundings than as a dark one.
  12. Yes, that was the impression that I got behind the eyepiece. I'm not an overly experienced Moon observer (one of the reasons for getting this scope was to spend more time doing lunar observing), and that might have been the reason behind it. When I looked at the Airy disk - it looked about right for the aperture at that magnification - it was all there - a little "ball" in the center and the first and second diffraction rings, the first being broken into 2-3 segments almost all the time and the second just glimpsed. Somehow the sharpness of the lunar view was greater than I would expect for that aperture at such magnification (judging by the Airy pattern seen). I've heard that the Moon can take crazy mags and still give a good image, but I had never tried it myself. I can only explain it by the very large contrast range on the Moon as a target, so the blur that comes from the Airy pattern is not enough to make the image soft - that is what I gathered from producing the above image that should resemble the view that I saw. In principle that is what I saw in terms of detail, but the position of the Sun - how shadows and highlights combine - and the brightness of the target made it look much more alive and sharper than the image depicts.
  13. Well, I'm in favor of the theory working, but the view was so nice and sharp that I started questioning it
  14. Ok, resolution matches what I saw. This is what it should look like (disregard the bright edges - that is a convolution artifact) at x240 power with a 4" scope, if viewed on a 23" 1920x1080 computer monitor from 91cm away (this is based on a 0.265mm pixel size).
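
A quick sketch of the geometry behind that statement (plain Python; the 0.265mm pixel, 91cm viewing distance and x240 figure are from the post): one monitor pixel viewed from 91cm subtends about 1 arc minute, so at x240 each pixel stands for roughly 0.25" on the Moon.

```python
import math

pixel_mm = 0.265        # 23" 1920x1080 monitor pixel pitch
distance_mm = 910.0     # viewing distance
magnification = 240.0

apparent_arcsec = math.degrees(math.atan(pixel_mm / distance_mm)) * 3600
sky_arcsec = apparent_arcsec / magnification

print(f"one pixel appears as {apparent_arcsec:.1f}\" to the eye")    # ~60"
print(f"which corresponds to {sky_arcsec:.2f}\" on the sky at x240")  # ~0.25"
```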
  15. Seems so - this is my first time with a Mak and I'm rather happy with what it delivered. In fact, it went beyond my expectations. Must make a comparison one night - 4" achro (F/10) vs 4" Mak - to see the differences. Hopefully, seeing will be as good as tonight (before it started getting worse, of course). Indeed. I think that a simulation is in order - I can download an image and blur it by a 4" 30% CO aperture and make the apparent size similar to that of x240 (taking into account average pixel size and average viewing distance). That way we can see what theory says it should look like and whether it can be easily resolved.
  16. I was just out and had first light with my Mak102 - a quick lunar session. Seeing was rather good but quickly started to deteriorate as the temperature started to drop. It was also first light for a few new high power eyepieces.
     The scope is really sharp, and although I'm not a lunar observer, I decided to try to push the little scope (and my eyes) and look for a distinct feature that is both small and that I could remember, to later check its size / angular size and see how sharp my scope really is (at the eyepiece I thought it was rather sharp and it surprised me). Here is what I saw: it was a very distinct view - almost like in the second image - there is no mistaking it for something else.
     What surprised me is that these two features are about 1.9km in diameter and separated by 4.6km. At a distance of 384,000km that gives 1.02" and 2.47". According to Dawes, the resolution limit of a 4" scope is 1.137", while Rayleigh gives 1.26" as the resolution limit. Now the image did not look like it was about to break down or anything like that - it was not blurry; the two peaks were distinct features with quite a bit of space between them (like 2-3 times their size). In fact the second image really depicts what I saw.
     What is your experience like in terms of seeing features of a certain size - is it in line with the "resolution limit" of the telescope, or do you feel that there is more "oomph" to the scope than the numbers suggest?
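
Re-running those numbers in a small Python sketch: angular sizes of the two lunar features and the classical limits for a 102mm aperture. The wavelength used for the Rayleigh criterion is an assumption (550nm gives ~1.36"; the 1.26" quoted in the post corresponds to a somewhat shorter wavelength).

```python
import math

ARCSEC_PER_RAD = 206265.0
distance_km = 384000.0

for size_km in (1.9, 4.6):
    angle = size_km / distance_km * ARCSEC_PER_RAD
    print(f"{size_km} km subtends {angle:.2f}\"")          # ~1.02" and ~2.47"

aperture_mm = 102.0
dawes = 116.0 / aperture_mm                                 # empirical Dawes limit
rayleigh = 1.22 * 550e-9 / (aperture_mm / 1000) * ARCSEC_PER_RAD
print(f"Dawes: {dawes:.2f}\", Rayleigh (550 nm): {rayleigh:.2f}\"")
```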
  17. Just returned from a very nice grab&go session (first this year, and I'm rather happy that I had such an early start - it's the second of January and I already have first light and one lunar session), and I made an observation that is relevant to this: using a 4" scope - Mak102 with a 5.5mm eyepiece - under my light polluted skies, which are usually SQM 18.5, but a quarter moon was out so we need to modify that SQM rating - without dark adaptation I was able to see the difference between the field stop black and the sky "black" (very close but distinguishable).
     Let's say that the full moon knocks down about 2 magnitudes, so a 7 day old moon (21% as bright as the full moon) is going to knock down about 0.4mag. We can say that the SQM reading was about mag 18. Mag 18 = ~0.06183382 photons per second per cm^2 from 1x1 arcsec of sky. The scope is 102mm, CO is 31mm, two mirrors at 94% I presume, and two glass/air surfaces at 99.5% give an effective clear aperture of ~64.9 cm^2. This means that SQM18 produces ~4.0117 photons per second per arcsec^2 with this scope. FL is 1300 and the eyepiece is 5.5 - that gives x236.4 magnification, so one arc second is magnified to 236.4 arc seconds, or 3.94 arc minutes. Flux per square arc minute at the eyepiece is therefore 4.0117 / (3.94)^2 = ~0.2584 photons per second per square arc minute. It is rather interesting that the eye can detect such a small photon flux at the eyepiece.
     Btw, the Mak102 is super sharp. I love both the ES82 6.7mm and the ES62 5.5mm, and the AzGti is very nice in use. There is a little backlash in both az and alt (it is actually quite large, but I'm sure I'll be able to tune it out or at least reduce it to a reasonable level).
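
The same calculation in code form (a sketch, so the assumptions are explicit and easy to vary: mag 0 ~ 980,000 photons/s/cm^2, two mirrors at 94%, two glass/air surfaces at 99.5%; the function name is mine):

```python
import math

def sky_flux_at_eyepiece(sqm, aperture_mm, obstruction_mm, focal_mm, eyepiece_mm,
                         mirror_refl=0.94, n_mirrors=2,
                         glass_trans=0.995, n_glass=2,
                         mag0_photons=980_000.0):
    """Photons per second per square arc minute of sky background, as seen at the eyepiece."""
    # photon flux per cm^2 from 1x1 arcsec of sky at the given SQM reading
    flux_per_cm2 = mag0_photons * 10 ** (-sqm / 2.5)
    # effective collecting area (geometric minus obstruction, times losses)
    area = math.pi / 4 * ((aperture_mm / 10) ** 2 - (obstruction_mm / 10) ** 2)
    area *= mirror_refl ** n_mirrors * glass_trans ** n_glass
    photons_per_arcsec2 = flux_per_cm2 * area
    # magnification spreads 1 arcsec of sky over (mag/60)^2 square arc minutes
    magnification = focal_mm / eyepiece_mm
    return photons_per_arcsec2 / (magnification / 60.0) ** 2

# Mak102 with the ES62 5.5 mm under ~mag 18 skies, as in the post:
print(sky_flux_at_eyepiece(18.0, 102, 31, 1300, 5.5))   # ~0.26 photons/s/arcmin^2
```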
  18. Ok, I'm going through this again, and it does not make sense. Above we concluded that we should be able to detect a flux of about 19 photons per second per square arc minute as a threshold, or rather that we should not be able to tell apart anything less than that. Let's see what that means for an 8" dob under SQM 21 skies.
     SQM 21 skies mean a photon flux of 0.0039 photons per second per cm^2 from 1x1 arcsec of sky (based on a mag 0 star emitting roughly 980,000 photons per second per cm^2 - often quoted as roughly one million photons). For an 8" dob with 94% reflectivity and 26% CO that boils down to an effective collecting area of 266.65 cm^2, which means 1.039935 photons / s / arcsec^2. At x60 magnification, 1 arc second will appear as 1 arc minute (there are 60" in 1'). This means that at x60 magnification the background sky will glow with about 1.04 photons / s / arcmin^2. That is about 19 times fewer photons than what we said was the threshold value, yet I believe one can see that such skies are brighter than no light (field stop).
     Mag 18.5 skies are x10 in photon count (2.5 magnitudes - that is x10 in intensity) - that means 10.4 photons/s/arcmin^2, still less than 19 photons/s/arcmin^2 - yet we can clearly see the background sky as being bright at x60. Either I'm wrong in the above calculations, or the initial premise about the threshold number of photons is wrong.
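
A rough sketch of the two cases above, compared against the 19 photons/s/arcmin^2 threshold (same assumptions as before: mag 0 ~ 980,000 photons/s/cm^2, two mirrors at 94%, 26% central obstruction, 20.3cm aperture, x60 power):

```python
import math

area = math.pi / 4 * 20.3 ** 2 * (1 - 0.26 ** 2) * 0.94 ** 2   # ~266.6 cm^2 effective

for sqm in (21.0, 18.5):
    flux_cm2 = 980_000.0 * 10 ** (-sqm / 2.5)     # photons/s/cm^2 per arcsec^2 of sky
    per_arcmin2 = flux_cm2 * area                  # at x60, 1 arcsec maps onto 1 arcmin
    status = "below" if per_arcmin2 < 19 else "above"
    print(f"SQM {sqm}: {per_arcmin2:.2f} photons/s/arcmin^2 ({status} the 19/s/arcmin^2 threshold)")
```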
  19. I think I figured out where the problem might be with the above images. Besides the photon floor estimation - I assumed that monitor intensity levels are threshold levels for detecting brightness difference. That means that the smallest difference - between color #000000 and #010101 (black and grey of value 1 out of 255) - is equal to the smallest difference in brightness that we can see. But now, thinking about it, there is no solid reason to believe that. I just made an image composed of three colors - pixel values 0, 1 and 2 (out of 255) - and the difference is clearly seen - I did not have to really strain to see it, so it might not be the just noticeable difference. In fact, the contrast ratio of the human eye is said to be 1000:1 (most good displays have at least that much static contrast ratio). I guess I should go with that number instead - the smallest detectable difference (around 3% in photon flux) should correspond to 0.001 after gamma correction (1/1000 instead of 1/256).
     Let's see what I get when I take these corrections into account. I've found a sketch online with good info (source: Cloudy Nights) - I will try to match that level of detail. I don't have info on the altitude of the object or transparency - but I guess we can work with that. That is a 16" aperture with SQM 20.4. Maybe I could also switch to SDSS data or some other data source so we can have more objects to compare with actual observing reports.
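One possible way to read that correction in numbers (a sketch only; the post gives the 1000:1 figure, while the plain gamma 2.2 transfer is my assumption): if the dimmest distinguishable patch sits at linear 1/1000 of white, it lands well above the #010101 step used before.

```python
linear_threshold = 1.0 / 1000.0   # eye's static contrast range ~1000:1
gamma = 2.2                        # assumed display transfer

encoded = linear_threshold ** (1.0 / gamma)           # value after gamma correction
print(f"encoded level: {encoded:.3f} -> 8-bit value ~{round(encoded * 255)}")
# ~0.043 -> about 11/255, i.e. noticeably above the single 8-bit step assumed earlier
```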
  20. Sure, I'll look into adding calibration information to the image - there is space in the field stop (that is supposed to be black, but I might include the Eigengrau effect in it?). I might throw in a gamma checker as well?
  21. Well, for one thing - the threshold is overestimated in the above case. I took 7 photons per square arc minute to be the threshold for light sensitivity (anything less than that will be effectively black in the image) - but that might not be the true figure. According to this: https://www.ukessays.com/essays/psychology/the-experiment-of-hecht-shlaer-and-pirenne-psychology-essay.php - the quoted figures are for a 10 arc minute diameter disk and 100ms flashes. A 10 arc minute diameter disk actually has ~78.54 square arc minutes. If we take the upper bound from the above experiment, say 150 photons, and divide the two, we get 1.9 photons per square arc minute as the threshold - that is per 100ms, or 0.1s, so it translates into a photon flux of 19 photons per second per square arc minute.
     In fact - we can do an empirical analysis of that - we can track the following: at what magnification, telescope aperture and SQM reading do we no longer see the difference between the sky background color and the field stop (neither will be completely black, but that has to do with the brain and not the number of photons - see https://en.wikipedia.org/wiki/Eigengrau for an explanation).
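
The arithmetic from the quoted Hecht-Shlaer-Pirenne figures, as a small sketch: spread the upper-bound photon count over the 10' diameter disk and convert the 100ms flash into a per-second flux.

```python
import math

photons_upper = 150.0            # upper bound quoted from the experiment
disk_diameter_arcmin = 10.0
flash_s = 0.1

disk_area = math.pi * (disk_diameter_arcmin / 2) ** 2     # ~78.54 arcmin^2
per_flash = photons_upper / disk_area                     # ~1.9 photons/arcmin^2 per flash
flux = per_flash / flash_s                                 # ~19 photons/s/arcmin^2

print(f"{disk_area:.2f} arcmin^2, {per_flash:.2f} per flash, {flux:.1f} photons/s/arcmin^2")
```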
  22. Since I did not change anything - what changed? Your viewing conditions? What scope did you use at your Bortle 4 site - the same as in the sim, an 8" dob?
  23. So you are saying that you think the first image, at mag 18.5, is "overestimated" - in the sense that it shows more of the galaxy than you would expect - and that in the second case it is underestimated - it is a less bright view than you would expect?
  24. I'll need a bit of clarification on that - I have no idea what a Spinal Tap moment means. In any case, if you are seeing very dark images - it might be the calibration of your monitor. With a casual glance at the Bortle 4 image on my monitor I can see 20+ stars in that image. I did worry about star rendition, as stars tend to be tough to do in images vs the real life experience, but this calculator: http://www.cruxis.com/scope/limitingmagnitude.htm coupled with star magnitudes from Stellarium seems to give good results on the above images. This sort of image will poorly represent some stars because human vision and a sensor don't have the same sensitivity over the spectrum, and sensors tend to capture more light in the blue / red regions than humans can see in low light conditions. I do sort of see them, but how does that compare to your observing experience? 8" scope in a red zone. My experience so far has been only cores and no hints of arms, so the above image is rather optimistic in that regard.
  25. For anyone interested in a simulation of the effect of SQM reading - and other related bits - here is a separate thread: