Everything posted by vlaiv

  1. Why not consider a slightly larger scope instead - that will give you your target focal length (and as a bonus collect more light). Something like this is rather good and won't break the bank (in fact there are a few "second hand" items on sale now - showroom pieces that were on display but are treated as second hand and reduced in price): https://www.teleskop-express.de/shop/product_info.php/info/p11871_NEU--TS-Optics-PHOTOLINE-115-mm-f-7-Triplet-Apo---2-5--RAP-focuser.html That scope has an 800mm focal length, but when you add a suitable FF/FR (I would consider the Riccardi x0.75 FF/FR) you get a 600mm, F/5.2 setup. I don't own this scope, so I can't speak as an owner, but I do own the TS80 APO and know what sort of fit & finish the Photoline scopes have, and I can say that the 2.5" R&P is a very adequate focuser for imaging; you can easily access the 10:1 shaft to attach a motor focuser (or maybe use a belted connection).
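A quick sanity check of the figures quoted above (a minimal sketch; the x0.75 factor is the nominal value of the Riccardi reducer mentioned):

```python
# Effective focal length and f-ratio of a 115 mm, 800 mm FL scope
# with a nominal x0.75 reducer/flattener.
aperture_mm = 115.0
native_fl_mm = 800.0
reducer = 0.75

reduced_fl_mm = native_fl_mm * reducer
f_ratio = reduced_fl_mm / aperture_mm

print(f"Reduced focal length: {reduced_fl_mm:.0f} mm")  # 600 mm
print(f"Resulting f-ratio:    f/{f_ratio:.1f}")         # f/5.2
```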
  2. While testing proper EP projection today, I also managed to test the x0.5 reducer again in more detail. I added a 5mm extension ring to get closer to the target of F/4.5 instead of the F/5.1 from the first post. I also wanted to check the level of vignetting in this configuration, and since the image seemed a bit blurry in the first test, I wanted to check the sharpness of this setup both on axis and in the corners (because EP projection showed severe degradation in the corners). Here are the results.

The reduced image shows nice definition across the field. Tile measurement shows the reduction factor to be around x0.36, or F/4.7 (closer to the target F/4.5). Here is the image of the tiles, cropped and at 100% zoom (super pixel debayer - not debayered but rather just binned). I found the setup a bit difficult to focus, and I don't think it is because it is fast. Maybe there is some issue with tilt, as the nose piece with the reducer can't be fully inserted into the 1.25" receptacle on the scope - there is a baffle in the tube that prevents this. I might try a threaded connection in the future. I checked the corners as well (slewed the mount - no change in focus until the tiles that were in the center of the FOV were in the corner), and they seem to be as sharp as the center, in both corners.

Here is the vignetting. It shows that the field is not quite centered on the sensor. This is in 5% steps - white is 100%, and the corners are at 55%. Actually the right side is at 45% because the vignetting is shifted slightly to the right on the sensor. I was shooting at cloudy skies and some of the central illumination is actually shadows on clouds that were in focus rather than a true flat field, but this gives an idea of the vignetting - it indeed falls off to about 50% at the edges and flat fielding is necessary.

I wanted to get an idea of how sharp this setup really is, so I turned the scope to a very distant, high contrast target: the top of a TV broadcast tower roughly 10.7km away. Here is what a 100% center crop looks like. Now this might seem like a rather good image of something that is 10.7km away. It might even look like it is limited by ground seeing and that you can't really get a good sharp image of it. That is in fact not true - this scope is very sharp. I tested it at night - it is collimated properly and it gave very sharp images of the Moon at x236 power. I decided to shoot this tower again at prime focus. Here is the image at 100%, without binning / debayering. The Bayer matrix is still visible in this image, but presented like this it is actually at critical sampling for this scope - the best resolution this scope can provide - and it shows. Yes - the antenna elements can be clearly seen (and sharp, right?) all along the central column. In fact I think I can make out the ladder used to climb to the top of this tower on the back side. Again - this is 10.7km away. This little scope is truly sharp.

In any case, I wanted to compare the two, so I reduced the prime focus image to roughly the size of the one taken with the reducer. The difference is clearly visible - left is prime focus reduced, right is the image with the reducer. The reducer introduces quite a bit of blur. I wanted to see what sort of blur we are talking about here, so I tried Gaussian blur with different sigma values, and it looks like a sigma of 1.9px is about right.

Now I know what sort of simulations to run in order to assess what sharpness I will get out of EEVA images. I know - it is far easier to just go outside and do an EEVA session instead of running sims - but one needs clear skies to do that. BTW, I'll try a threaded connection to see if it improves things, and also reversing the lens in the cell. I did that once before when I used this reducer with an F/6 scope - and it helped. Maybe at F/13 a better image will come from the lens facing the other way around?
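For reference, a minimal sketch of the sigma-matching test described above, assuming the two tower images have been saved as aligned, same-size greyscale files (the file names here are placeholders):

```python
# Blur the (downscaled) prime-focus image with different Gaussian sigmas
# and find the one that best matches the image taken through the reducer.
import numpy as np
from scipy.ndimage import gaussian_filter
import imageio.v3 as iio

prime = iio.imread("tower_prime_focus_reduced.png").astype(float)
reduced = iio.imread("tower_with_reducer.png").astype(float)

best_sigma, best_err = None, np.inf
for sigma in np.arange(0.5, 3.01, 0.1):
    blurred = gaussian_filter(prime, sigma)
    err = np.mean((blurred - reduced) ** 2)   # mean squared difference
    if err < best_err:
        best_sigma, best_err = sigma, err

print(f"Best matching Gaussian sigma: {best_sigma:.1f} px")
```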
  3. Small update on this thread. I managed to test out the proper eyepiece projection configuration, and I must say I'm rather underwhelmed by how it performed. Just a bit of background first - I used the TS eyepiece projection adapter. It is a standard item that has a compression ring (and 4 screws) and it is quite wide inside. In fact there are three models, I believe, for different eyepiece barrel diameters. I ordered one to fit my 32mm Plossl (by GSO). The adapter fits fine, except there is an issue with it: the eyepiece barrel has a slope at the top, while the adapter clamps against a flat surface. This makes the EP hard to center. With the adapter you get 3 different self-adhesive bands - and no manual - so I just took the one that fit best and inserted it into the adapter. This makes the EP body fit nicely within the adapter but does not remove the tilt issue completely (although it is reduced because you don't need to slide the adapter all the way down). This creates a repeatability problem - it is hard to position the camera at the proper / same distance each time. It is also a bit problematic to center the eyepiece in the adapter, as you have 4 screws that you need to "balance" (pretty much like collimating a scope / adjusting a finder or similar). In any case, here is the EP with the adapter attached. Centering is done by making sure there is the same amount of space between the eyepiece top and the sides of the adapter (eyeballing it).

Other than the centering and positioning issues, the adapter works quite well. It easily carries the camera attached to the eyepiece and everything feels sturdy enough. Attached to the scope it does create balance issues, but for this test the AzGti was not complaining. I ran some indoor tests with this configuration - where the distance between the eyepiece and the sensor is set by a T2 extension ring - and at that time a 15mm extension seemed to provide the wanted reduction factor when used in the "proper" EP projection configuration. In the meantime I forgot which extension I used, so I started with 10mm. This pushed the system to the limit - I'm not sure I was in fact able to properly focus, as I reached the end of the focus range when I took the test image. At this close distance the reduction factor is rather severe. I then remembered that I had actually determined the 15mm extension to be the proper one, so I switched to it, and here is what the FOV looks like with that (images are binned x2 but not debayered - hence the mono look; I plan on using super pixel mode so the actual resolution will be the same as this, although this image is reduced to 33% to show the whole FOV and save on upload size). This is the center crop.

According to the measured tiles I in fact got a x0.25 reduction factor, for F/3.3. I'm not sure why this is - either I was wrong in my indoor tests (although I tested it on a ruler and had 2.4cm map to 0.89cm, which gives a x0.37 reduction, or F/4.8) or there is simply a repeatability issue with the adapter. At these extreme reduction factors even 1mm of distance change can have a dramatic effect on the reduction factor. I was also less than impressed with the edge of the field in this setup. Maybe at F/4.5 - F/4.8 things would be different, but I'm not holding my breath for that. Here are the roof tiles at the edge of the field (I just slewed the scope so that the tiles that were in focus in the center of the FOV became positioned in the corners), and the other corner. I shot images in two opposite corners just to rule out this being a tilt issue due to adapter centering, but both corners show the same thing - blurring so severe that we can't distinguish the tiles any more. This shows that the proper EP projection configuration is also a "no go" in this case.

What remains is afocal with a 12mm lens - I just need to wait for the lens to test that out.
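A quick check of the ruler figures quoted above (a sketch, taking the scope's nominal f/13 as the starting point):

```python
# Working back from the ruler test: 2.4 cm on the ruler mapped to
# 0.89 cm on the sensor, with the scope at a nominal f/13.
native_f_ratio = 13.0               # nominal f/13 Mak

reduction = 0.89 / 2.4              # measured size ratio
effective_f_ratio = native_f_ratio * reduction

print(f"Measured reduction factor: x{reduction:.2f}")           # ~x0.37
print(f"Effective f-ratio:         f/{effective_f_ratio:.1f}")  # ~f/4.8
```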
  4. Yes, it should be, if it is adjustable. I know that ASI cameras have that - not sure if QHY ones do as well. You will be able to test your darks for proper offset without shooting other subs, and if you still have the raw darks from that session you can test whether they were affected by a wrong offset. You won't be able to fix that session, though. Changing the offset requires redoing your calibration subs - darks taken at a different offset (one that is not affected) will not match the lights. If offset is the issue, the lights are "affected" as well (not really affected, because LP provides an offset and there is a good chance you won't have clipping in the lights, but if the matching darks are affected it will create an artifact regardless).
  5. Stellarium shows a very nice Iridium flare from my location as a marker to start observing, and it indeed looks lovely in my new Mak102 with the ES62 5.5mm
  6. Fast download and power could be a cause of issues with a CCD sensor - the above is CMOS and should not be susceptible to either. It is designed to give really fast download rates - like 30fps - and it does not have a separate ADC that can suffer from power fluctuations; each pixel has its own ADC unit.
  7. Looks like an offset issue to me. Can you take one of your darks and look at its histogram? There might be clipping to the left. If there is, or if the histogram is too close to the minimum value, can you increase the offset in your camera driver options?
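A minimal sketch of that check, assuming the dark is a FITS file (the file name is a placeholder):

```python
# Load one dark frame and see whether a significant fraction of pixels
# sits at the minimum ADU value (i.e. the histogram is clipped on the left).
import numpy as np
from astropy.io import fits

dark = fits.getdata("dark_001.fits").astype(float)

min_adu = dark.min()
clipped_fraction = np.mean(dark == min_adu)

print(f"Minimum ADU value:     {min_adu}")
print(f"Fraction at minimum:   {clipped_fraction:.3%}")
print(f"Median / std of frame: {np.median(dark):.1f} / {np.std(dark):.1f}")

# If the fraction at the minimum is more than a handful of odd pixels,
# or the median is only a few ADU above zero, the offset is likely too low.
```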
  8. I seem to understand this topic a bit (not this actual topic, but deconvolution, PSFs and image restoration) and I can tell you that you are on to something. Not sure what your actual approach is, but yes:

1. you can extract the PSF from the image, and a stack of images will improve the SNR in the PSF
2. you can use the PSF to reduce the noise
3. noise reduction and sharpening can be part of a single filter

Let's briefly address these points:

1. PSF - I'm not sure if you should attempt to use an extracted PSF for this application, or just assume a certain analytical form of PSF, like a Gaussian. The actual PSF of a star will depend on:
- position in the field (take a newtonian scope as the obvious example - coma in the corners)
- spectral type of the star that produced it (take an achromatic refractor as the obvious example, but also note that it can happen with perfectly corrected scopes, as shorter wavelengths "bend" more than longer ones)

I think that taking actual PSFs of stars in the image can be useful when trying to correct a sub that has guiding / tracking issues. That way you can get a blur kernel that is common across the respective stars and deconvolve to correct it. I've done something similar some time ago; here are the details. Original image (excuse the processing at the time): 8" F/6 newtonian on an Heq5 without guiding. PE is obvious and produces elongation in RA. After correction the stars are nicer but SNR suffered. The red channel kernels I got by taking 5 different stars and "comparing" them to a Gaussian shape. The actual extraction was done via deconvolution (convolution in the spatial domain is multiplication in the frequency domain, so if a*b = c then a = c/b but also b = c/a, and therefore you can get the blur kernel by deconvolving a star with a "perfect" star profile - which can be synthetic or extracted from subs that don't have trailing). Then a comparison of the effects of deconvolution on the star profile. Sorry about this digression - I just wanted to point out that for filtering you can use synthetic profiles and don't need to extract true profiles; true profiles have their use, but it's best if they are used against the same star rather than assumed to be the same across the image.

2. This is the key point - the PSF introduces a level of correlation between pixel values, and that can be exploited in denoising in different ways.

3. Take for example the approach that I've come up with: use sinc filtering, or rather a windowed sinc (for example a Lanczos filter), to decompose the original image into layers of different frequency components. Deconvolution, or rather frequency restoration in this case, would consist of "boosting" high frequencies that got attenuated by blurring. The opposite of that is killing high frequencies that are due to noise only. How can you distinguish the two? There is a third component in all of that - the use of SNR. Each astronomical image begins as a stack of subs. Let's take a simple example - a regular average stack. We take the average value as the final value of the image, but we can also take the standard deviation of each pixel. From that and the number of stacked subs we can get a "noise" estimation (the noise of the average is roughly the per-sub standard deviation divided by the square root of the number of subs). After you remove the background signal from the image (wipe background / remove the LP offset) and divide the two, you get SNR per pixel. We can use that value to determine whether we should "boost" high frequencies (high SNR) or lower them (low SNR).

Back to point two - you can use the PSF to do denoising in a similar way to how Lucy-Richardson deconvolution works - it is based on a Bayesian framework and uses knowledge of the PSF to perform deconvolution. You can again take the above SNR to estimate the signal value and the uncertainty in it (assume some distribution, but a different one than in LR, because you now have a stack and it is not simple Poisson+Gaussian - shot noise + read noise), and you have the probability associated with the PSF. In any case - it is a valid point to use PSF characteristics to both denoise and deconvolve an image (and it can be done at the same time as well).
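To illustrate the kernel-extraction idea from point 1, a minimal sketch (the Gaussian "perfect star" width and the regularization constant are assumptions; star cutouts are assumed square and background-subtracted):

```python
import numpy as np

def gaussian_star(size, sigma):
    """Synthetic 'perfect' star profile centered in a size x size array."""
    y, x = np.mgrid[:size, :size]
    c = (size - 1) / 2.0
    g = np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def extract_blur_kernel(star_cutout, sigma=1.5, eps=1e-3):
    """Estimate the kernel that blurs a Gaussian star into star_cutout."""
    star = star_cutout / star_cutout.sum()
    ideal = gaussian_star(star.shape[0], sigma)

    # Convolution in the spatial domain is multiplication in the frequency
    # domain, so kernel = FFT(star) / FFT(ideal); the eps term regularizes
    # the division where the denominator is close to zero.
    S = np.fft.fft2(np.fft.ifftshift(star))
    I = np.fft.fft2(np.fft.ifftshift(ideal))
    K = S * np.conj(I) / (np.abs(I) ** 2 + eps)

    kernel = np.real(np.fft.fftshift(np.fft.ifft2(K)))
    return kernel / kernel.sum()

# Averaging kernels from several stars improves the SNR of the estimate, e.g.:
# kernel = np.mean([extract_blur_kernel(c) for c in star_cutouts], axis=0)
```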
  9. You think that a simple "don't" will keep us at bay?
  10. Not sure that is anything conclusive, as GSO makes these scopes for those who order them under their own brand - like TS, Altair Astro and iOptron. Maybe they have options when ordering a batch of scopes - "please tick the box next to the wanted mirror coating type: a) 96% b) 99%"
  11. My guess is that they simply put existing products / manufacturing lines together to make a new product, and it worked well. People expect "fast" systems for imaging, and their RC line, due to its large central obstruction, is primarily considered an imaging instrument - so they utilized the F/8 design. My guess is that they have machine manufacturing of mirrors and that these machines can be set up for a certain "profile of the curve" - stronger curvature probably requires different tools or something like that. They have hyperbolic secondaries in the RC line and parabolic primaries in their newtonian line. I just think they put the two together to make the new Cass line. A cost effective way to get a new product out. Designing a new F/12 RC line would probably require a change of tooling for the machines, as it would require slower curves on the hyperbolic mirrors.
  12. All of the RC range is said to have 99% dielectric coatings on both primary and secondary; there is a graph of reflectivity included on TS as well, which leads me to believe that is a genuine claim. These CC scopes share quite a bit with the RC line - same/similar tube, same focuser, both have a hyperbolic secondary, and prices are about the same (a bit higher for the RC). It's not far fetched that these mirrors have been given 99% dielectric coatings as well.
  13. Don't know, I would not label it a one trick pony. It scores rather high on my "quintessential do-it-all beginner scope" list. You know, when people come and ask - is there a scope that can do it all? A bit of DSO observing, a bit of planetary observing, but I also want to be able to take pictures of those planets and the Moon, and you know those nice looking colorful images of galaxies - I want those as well. Btw, I live in the center of a major city and the sky is really bright, but regardless I want to see it and don't mind using gadgets - EEVA. And yes, my budget is limited to ...

- it's relatively cheap (bested only by the newtonian in the 6" class, and not by much - about 50% or so)
- it's compact and light(ish) - can be carried by an EQ3 / EQ5 class mount (which again reduces the price of the mount to fit within budget constraints)
- it is a 6" scope - meaning rather large aperture - good for DSO and planets (light gathering, resolving power)
- it has about a 40mm fully illuminated field
- it can use 2" eyepieces with a large field stop; although the focal length is long, it is not strictly a narrow field scope. It can almost frame M45 with the GSO Superview 38mm EP
- again, the same illuminated field and slow F/ratio mean that it will give enough FOV for imaging
- for planets, it's obviously good for both imaging and observing
- with eyepiece projection, I think it is also good for EEVA (something that I'll test out with an even slower F/13 scope, hopefully soon).
  14. In any case - that just makes things "worse": fewer photons reach the eye, yet we are able to see the background as illuminated for some reason, although the experiment quoted above says subjects were able to detect much higher levels only something like 60% of the time.
  15. I doubt it. It will never be only shadow. One side and the bottom will be in shadow, but the opposite side will be in sunlight (same as craters - one side is always in sunlight). I think the sunlit side is going to be brighter relative to the surroundings than the shadow is darker (the Sun is a really strong source of light), and hence it is more likely to be seen as a bright line against the grey surroundings than as a dark one.
  16. Yes, that was the impression that I got behind the eyepiece. I'm not an overly experienced Moon observer (one of the reasons for getting this scope was to spend more time doing lunar observing), and that might have been the reason behind it. When I looked at the Airy disk it looked about right for the aperture at that magnification - it was all there - the little "ball" in the center and the first and second diffraction rings, the first being broken into 2-3 segments almost all the time and the second just glimpsed. Somehow the sharpness of the lunar view was greater than I would expect for that aperture at such magnification (judging by the Airy pattern seen). I've heard that the Moon can take crazy magnifications and still give a good image, but I had never tried it myself. I can only explain it by the very large contrast range of the Moon as a target, such that the blur that comes from the Airy pattern is not enough to make the image soft - that is what I gathered from producing the above image, which should resemble the view that I saw. In principle that is what I saw in terms of detail, but the position of the Sun - how shadows and highlights combine - and the brightness of the target made it look much more alive and sharper than the image depicts.
  17. Well, I'm in favor of the theory working, but the view was so nice and sharp that I started questioning it
  18. Ok, the resolution matches what I saw. This is what it should look like (disregard the bright edges - that is a convolution artifact) at x240 power with a 4" scope, if viewed on a 23" 1920x1080 computer monitor from 91cm away (this is based on a 0.265mm pixel size).
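A rough check of that scale (a sketch; the 0.265mm pixel pitch and 91cm viewing distance are the values quoted above):

```python
# At what sky resolution does one monitor pixel correspond, when a
# 0.265 mm pixel is viewed from 91 cm and the view mimics x240?
import math

pixel_mm = 0.265
distance_mm = 910.0
magnification = 240.0

apparent_arcsec = math.degrees(math.atan(pixel_mm / distance_mm)) * 3600
sky_arcsec_per_pixel = apparent_arcsec / magnification

print(f'One pixel subtends ~{apparent_arcsec:.0f}" at the eye')      # ~60"
print(f'-> ~{sky_arcsec_per_pixel:.2f}"/pixel on the Moon at x240')  # ~0.25"
```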
  19. Seems so - this is my first time with a Mak and I'm rather happy with what it delivered. In fact, it went beyond my expectations. I must do a comparison one night - 4" achro (F/10) vs 4" Mak - to see the differences. Hopefully the seeing will be as good as tonight (before it started getting worse, of course). Indeed, I think a simulation is in order - I can download an image and blur it with a 4", 30% CO aperture and make the apparent size similar to that of x240 (taking into account average pixel size and average viewing distance). That way we can see what theory says it should look like and whether it can be easily resolved.
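A sketch of how such an aperture PSF could be generated (the grid size and sampling are assumptions, and the pixel-scale bookkeeping needed to match the x240 apparent size is left out):

```python
# PSF of a circular aperture with a 30% central obstruction, computed as
# |FFT(pupil)|^2. Convolving a lunar image with this kernel (after scaling
# both to the same angular sampling) approximates the blurring described above.
import numpy as np

def obstructed_psf(n=256, co_fraction=0.30):
    """Normalized PSF of an annular (centrally obstructed) aperture."""
    y, x = np.mgrid[:n, :n]
    r = np.hypot(x - n / 2, y - n / 2)
    outer = n / 4                          # aperture fills half the grid
    pupil = (r <= outer) & (r >= outer * co_fraction)
    field = np.fft.fftshift(np.fft.fft2(pupil.astype(float)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

psf = obstructed_psf()
print(psf.shape, psf.sum())   # (256, 256), ~1.0
```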
  20. I was just out and had first light with my Mak102 - a quick lunar session. Seeing was rather good but quickly started to deteriorate as the temperature started to drop. It was also first light for a few new high power eyepieces. The scope is really sharp, and although I'm not a lunar observer, I decided to try to push the little scope (and my eyes) and look for a distinct feature that is small and that I could remember, so I could later check its size / angular size to see how sharp my scope really is (at the eyepiece I thought it was rather sharp and it surprised me). Here is what I saw: it was a very distinct view - almost like in the second image - there is no mistaking it for something else. What surprised me is that these two features are about 1.9km in diameter and separated by 4.6km. At a distance of 384000km that gives 1.02" and 2.47". According to Dawes, the resolution limit of a 4" scope is 1.137", while Rayleigh gives 1.26" as the resolution limit. Now the image did not look like it was about to break down or anything like that - it was not blurry; the two peaks were distinct features with quite a bit of space between them (like 2-3 times their size). In fact the second image really depicts what I saw. What is your experience like in terms of seeing features of a certain size - is it in line with the "resolution limit" of the telescope, or do you feel that there is more "oomph" to the scope than the numbers suggest?
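Checking those angular sizes and limits (a quick sketch; the Rayleigh wavelength is an assumption chosen to match the quoted 1.26"):

```python
import math

MOON_DISTANCE_KM = 384_000.0
ARCSEC_PER_RAD = 206_265.0

def angular_size_arcsec(size_km):
    return size_km / MOON_DISTANCE_KM * ARCSEC_PER_RAD

aperture_mm = 102.0
dawes = 116.0 / aperture_mm                    # Dawes limit, arcsec
wavelength_m = 510e-9                          # assumed, gives ~1.26"
rayleigh = 1.22 * wavelength_m / (aperture_mm / 1000.0) * ARCSEC_PER_RAD

print(f'1.9 km feature: {angular_size_arcsec(1.9):.2f}"')   # ~1.02"
print(f'4.6 km gap:     {angular_size_arcsec(4.6):.2f}"')   # ~2.47"
print(f'Dawes limit:    {dawes:.3f}"')                      # ~1.137"
print(f'Rayleigh limit: {rayleigh:.2f}"')                   # ~1.26"
```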
  21. Just returned from a very nice grab&go session (the first this year, and I'm rather happy to have such an early start - it's the second of January and I already have a first light and one lunar session), and I made an observation that is relevant to this. Using a 4" scope - the Mak102 with a 5.5mm eyepiece - under my light polluted skies, which are usually SQM 18.5 (but there was a quarter moon out, so we need to modify that SQM rating), without dark adaptation I was able to see a difference between the field stop black and the sky "black" (very close but distinguishable). Let's say that a full moon knocks down about 2 magnitudes, so a 7 day old moon (21% as bright as the full moon) is going to knock down about 0.4mag. We can say that the SQM reading was about mag18. Mag 18 = ~0.06183382 photons per second per cm^2 from a 1x1 arcsec patch. The scope is 102mm with a 31mm CO; two mirrors at 94% (I presume) and two glass/air surfaces at 99.5% give an effective clear aperture of ~64.9 cm^2. This means that SQM18 produces ~4.0117 photons per second per arcsec^2 with this scope. The FL is 1300 and the eyepiece is 5.5 - that gives x236.4 magnification, so one arc second is magnified to 236.4 arc seconds, or 3.94 arc minutes. The flux per arc minute squared at the eyepiece is therefore 4.0117 / (3.94)^2 = ~0.2584 photons per second per arc minute squared. It is rather interesting that the eye can detect such a small photon flux at the eyepiece. Btw, the Mak102 is super sharp. I love both the ES82 6.7mm and the ES62 5.5mm, and the AzGti is very nice in use. There is some backlash in both az and alt (it is actually quite large, but I'm sure I'll be able to tune it out or at least reduce it to a reasonable level).
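Re-deriving those figures (a sketch; the mag 0 photon rate and the transmission values are the ones assumed in the post):

```python
# Photon flux per arcmin^2 at the eyepiece for SQM 18 skies with a
# 102 mm Mak at ~x236.
import math

MAG0_PHOTONS = 980_000.0          # photons / s / cm^2 for a mag 0 star
sqm = 18.0
sky_flux = MAG0_PHOTONS * 10 ** (-sqm / 2.5)     # per cm^2 per arcsec^2

aperture_cm, co_cm = 10.2, 3.1
area = math.pi * ((aperture_cm / 2) ** 2 - (co_cm / 2) ** 2)
clear_area = area * 0.94 ** 2 * 0.995 ** 2       # two mirrors, two AR surfaces

photons_per_arcsec2 = sky_flux * clear_area      # ~4.0 photons/s/arcsec^2

magnification = 1300.0 / 5.5                     # ~x236
apparent_arcmin = magnification / 60.0           # 1 arcsec appears this many arcmin
flux_per_arcmin2 = photons_per_arcsec2 / apparent_arcmin ** 2

print(f"Clear aperture:           {clear_area:.1f} cm^2")               # ~64.9
print(f"Photons/s per arcsec^2:   {photons_per_arcsec2:.2f}")           # ~4.0
print(f"At the eye, per arcmin^2: {flux_per_arcmin2:.3f} photons/s")    # ~0.26
```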
  22. Ok, I'm going through this again, and it does not make sense. Above we concluded that 19 photons per arc minute squared per second is the threshold flux - or rather that we should not be able to detect anything less than that. Let's see what that means for an 8" dob under SQM 21 skies. SQM21 skies mean a photon flux of 0.0039 photons per second per cm^2 from a 1x1 arcsec patch (based on a mag0 star delivering roughly 980,000 photons per second per cm^2 - often quoted as roughly one million photons). For an 8" dob with 94% reflectivity and 26% CO that boils down to 266.65 cm^2 of effective aperture, which means 1.039935 photons / s / arcsec^2. At x60 magnification, 1 arc second will be rendered as 1 arc minute (there are 60" in 1'). This means that at x60 magnification the background sky will glow with about 1.04 photons / s / arc minute squared. That is roughly x19 fewer photons than we said was the threshold value, yet I believe one can see that such skies are brighter than no light (field stop). Mag 18.5 skies are x10 the photon count (2.5 magnitudes is x10 in intensity) - that means 10.4 photons/s/arcmin^2, still less than 19 photons/s/arcmin^2 - yet we can clearly see the background sky as being bright at x60. Either I'm wrong in the above calculations, or the initial premise about the photon count threshold is wrong.
  23. I think I figured out where the problem might be with the above images. Besides the photon floor estimation, I assumed that monitor intensity levels correspond to threshold levels for detecting a brightness difference. That means that the smallest difference between color #000000 and #010101 (black and grey level 1 out of 255) would be equal to the smallest difference in brightness that we can see. But thinking about it now, there is no solid reason to believe that. I just made an image composed of three colors - pixel values 0, 1 and 2 (out of 255) - and the difference is clearly seen; I did not have to really strain to see it, so it might not be the just noticeable difference. In fact the contrast ratio of the human eye is said to be 1000:1 (most good displays have at least that much static contrast ratio). I guess I should go with that number instead - the smallest detectable difference (around 3% in photon flux) should map to 0.001 after gamma correction (1/1000 instead of 1/256). Let's see what I get when I take these corrections into account. I've found a sketch online with good info (source: Cloudy Nights) and will try to match that level of detail. I don't have info on the altitude of the object or the transparency, but I guess we can work with that. That is a 16" aperture with SQM20.4. Maybe I could also switch to SDSS data or some other data source so we can have more objects to compare against actual observing reports.
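One possible reading of that correction, as a minimal sketch (the gamma value of 2.2 and the choice to render sub-threshold flux as black are assumptions):

```python
import numpy as np

def flux_to_display(flux, gamma=2.2, contrast=1000.0):
    """Map linear flux to 8-bit display values, treating anything below
    1/contrast of the peak (the eye's ~1000:1 range) as invisible."""
    rel = flux / flux.max()
    display = rel ** (1.0 / gamma)           # gamma-encode for the monitor
    display[rel < 1.0 / contrast] = 0.0      # below the eye's contrast floor
    return np.round(display * 255).astype(np.uint8)

# Example: fluxes spanning the assumed 1000:1 range
flux = np.array([0.5, 1.0, 10.0, 500.0, 1000.0])
print(flux_to_display(flux))   # the 0.5 entry (below 1/1000 of peak) maps to 0
```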
  24. Sure, I'll look into adding calibration information to the image - there is space in the field stop (which is supposed to be black, but I might include the Eigengrau effect in it?). I might throw in a gamma checker as well?
  25. Well, for one thing, the threshold is overestimated in the above case. I took 7 photons per arc minute squared to be the threshold for light sensitivity (anything less than that will be effectively black in the image), but that might not be the true figure. According to this: https://www.ukessays.com/essays/psychology/the-experiment-of-hecht-shlaer-and-pirenne-psychology-essay.php That experiment used a 10 arc minute diameter disk and 100ms flashes. A 10 arc minute diameter disk covers ~78.54 arc minutes squared. If we take the upper bound from the experiment, say 150 photons, and divide the two, we get that ~1.9 photons per arc minute squared should be the threshold - that is per 100ms, or 0.1s, so it translates into a photon flux of 19 photons per second per arc minute squared. In fact, we can do an empirical analysis of this - we can track at what magnification, telescope aperture and SQM reading we no longer see a difference between the sky background and the field stop (neither will be completely black, but that has to do with the brain and not the number of photons - see https://en.wikipedia.org/wiki/Eigengrau for an explanation).
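The arithmetic from the Hecht-Shlaer-Pirenne figures quoted above, as a quick check:

```python
# ~150 photons over a 10 arcmin diameter disk in a 100 ms flash.
import math

disk_diameter_arcmin = 10.0
disk_area = math.pi * (disk_diameter_arcmin / 2) ** 2    # ~78.54 arcmin^2

photons_per_flash = 150.0
flash_duration_s = 0.1

per_arcmin2_per_flash = photons_per_flash / disk_area    # ~1.9
threshold_flux = per_arcmin2_per_flash / flash_duration_s

print(f"Disk area:      {disk_area:.2f} arcmin^2")
print(f"Threshold flux: ~{threshold_flux:.0f} photons / s / arcmin^2")  # ~19
```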