
Deflavio

Members
  • Posts: 44
  • Reputation: 49 Excellent

Profile Information

  • Location
    Hampton, London, UK


  1. Interesting experiment. Why not increase the photon count by binning up a bit? Flavio
  2. When you say "it seems like the mount points, stops, waits and then moves", does the noise from the mount motors also stop, change in intensity or stay the same? I'm just checking whether some of your gear is slipping; two years without use may have loosened something in your gears. Another possibility: what is the voltage range for this mount? You may want to make sure you are not running too close to the limit. I suggest posting in the equipment/mounts section; I'm sure you will get more answers there. Flavio
  3. Eh, that would be really nice. This could be the basis for a full EAA/AP simulator... including noise, optics, sky, etc. OK, I'm dreaming now, but some time ago I saw the Aberrator software. It was nice for simulating the effect of optics on planets and doubles, though with no noise. Regarding SNR, I think it should be possible to adapt the point-source algorithm to extended objects by changing how the number of photons is computed for each pixel; the rest should hopefully be similar. I'll try to do more reading on it. Flavio
  4. Hi Martin, I think the reason the Borg+Lodestar results look strange for FWHM 2 and 3 is that you are well past the 0.83x FWHM threshold (i.e. where you are under-sampled but SNR is at its maximum according to Raab's paper): your pixel is now bigger than the whole star, so there is no benefit in going any bigger. Specifically, with FWHM = 2, 3 and 4 you are sampling at 0.44x, 0.66x and 0.89x FWHM. These results replicate pretty well what is said in the paper. [Just checking: the FWHM = 2 and 3 plots look identical, but I can follow the pattern in the rest.] It is also interesting that with larger FWHMs (e.g. 4, 5) the overall SNR decreases across all configurations, because we are losing more photons from the central pixel (which is all this algorithm takes into account), but the Borg still seems to maintain more or less the same SNR (~8/6 at t = 300 sec), with only a slight decrease, because its pixel is still able to swallow most of the star (a quick numeric sketch of this is below). I think these results are really useful for calculating pixel SNR or the limiting magnitude, but now I'm more convinced they don't tell the whole story for extended objects. In the examples above, the calculation for the central pixel ignores what is happening in the surrounding pixels. With a single point source and an increasingly large FWHM, we spread the photons more and lose them outside the central pixel, so SNR goes down. That's fine. With extended objects and an increasingly large FWHM, photons are spread/blurred from all voxels, but a pixel within the extended object now both spreads photons and receives them from the surrounding voxels. The SNR behaviour of extended objects is different from that of a point source. In a way, I see now, this is a very convoluted way to explain blurring. 😅 F
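To make the central-pixel point a bit more concrete, here is a minimal sketch (not Martin's script, just my own back-of-envelope) estimating the fraction of a Gaussian star's light that lands in a single square pixel centred on the star. The ~4.5 arcsec/pixel figure is only my back-calculation from the 0.44x/0.66x/0.89x sampling rates above, so treat it as an assumption:

    # Fraction of a Gaussian star's flux that falls in the central pixel,
    # assuming the star is perfectly centred on that pixel (the best case).
    from math import erf, sqrt

    def central_pixel_fraction(pixel_scale_arcsec, fwhm_arcsec):
        sigma = fwhm_arcsec / 2.355                  # Gaussian sigma from the FWHM
        half = pixel_scale_arcsec / 2.0              # half of the pixel width
        frac_1d = erf(half / (sqrt(2.0) * sigma))    # 1-D integral across the pixel
        return frac_1d ** 2                          # separable, so square for x and y

    # Roughly the Borg+Lodestar case (~4.5"/pixel) for FWHM = 2..5 arcsec:
    for fwhm in (2, 3, 4, 5):
        print(fwhm, round(central_pixel_fraction(4.5, fwhm), 2))

The captured fraction stays high while the pixel swallows most of the star and then drops steadily once the FWHM grows past the pixel, which matches the pattern described above.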
  5. Thanks Martin! I think your last message has finally cleared up my doubts. SNR is aperture, exposure and QE; yep, I fully agree with this. What got me confused was the story of the "equivalent" time required by equivalent f-ratio systems that I read around. I see now: this works IF I don't change the pixel size, like a photographer changing lenses but keeping the same camera body. A longer focal length gives a bigger image on the focal plane, spreading the photons over a larger surface, but the density stays the same at a constant focal ratio. So yes, with the same pixel size you can get the same SNR with a smaller aperture, but you have to trade away resolution by using a shorter focal length... as you said in your point 4. Conversely, if we want to keep the same resolution (using smaller pixels), we have to sacrifice SNR or increase the exposure. Great, now it all fits. So, going back to the original post, small-pixel sensors can help achieve higher resolutions with shorter focal lengths, but at the price of SNR or exposure time. For an ideal travel EAA setup, I guess it all comes down to how much longer I'm OK to integrate. Any quick way to calculate how much longer the exposure would need to be compared to a reference scope? (I've had a rough go at this below.) PS: Why FOV? I'm still thinking about relatively small objects like galaxies and planetary nebulae...
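To partly answer my own question, a back-of-envelope sketch: it assumes extended objects in the sky-limited regime with equal QE and ignores read noise, so it is only a first approximation, and the 80 mm / 200 mm example at the bottom is purely hypothetical:

    # How much longer a second setup must integrate to reach the same per-pixel SNR
    # as a reference setup, for extended objects in the sky-limited regime
    # (equal QE assumed, read noise ignored).
    # Photons per pixel per second scale with aperture^2 * (arcsec/pixel)^2,
    # and SNR ~ sqrt(photons * time), so the required time scales with the inverse.

    def exposure_ratio(d_ref_mm, scale_ref_arcsec, d_mm, scale_arcsec):
        return (d_ref_mm / d_mm) ** 2 * (scale_ref_arcsec / scale_arcsec) ** 2

    # Hypothetical example: an 80 mm travel scope vs a 200 mm reference,
    # both sampled at 1.5 arcsec/pixel -> roughly 6x the integration time.
    print(exposure_ratio(200, 1.5, 80, 1.5))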
  6. I have not used the ASIair myself, but looking at a few videos on YouTube it seems that all you need to get your "evoscope" functionality is to install the ASIair, connect the mount and camera(s), and install the app on your iPad. Judging by the screenshots, the equivalent of ASIStudio is already built into the ASIair and the app, so there is no need to install anything on a computer. That's the theory; in practice I would make sure that your mount works fine with the ASIair and that your WiFi signal is good enough! You may also want to consider a ZWO focuser to make this completely automatic. Regarding StarSense, I guess you might be able to align first with StarSense and then sync the ASIair, but you'd want to check that this is possible. You could ask someone here who uses the ASIair whether it can sync to an existing hand-controller alignment, or maybe ask ZWO directly about StarSense... F
  7. Hi Mike, By 1.5x FWHM I mean a sampling rate such that the actual Full Width at Half Maximum of your final "star disk" spans 1.5 pixels. This star disk is the combined effect of seeing (the main factor), the Airy disk and tracking errors. In practice, this means you need to cover the central (most luminous) part of your star with at least 1.5 pixels. In the tables I posted above, I just took 8 different FWHMs to represent different sky conditions or star sizes and tried to see what the ideal focal length would be for 3 different pixel sizes. If I look at your setup, a C11 at f/6.3 with 6.45 um pixels and 2x2 binning gives a sampling resolution of 1.51 arcsec/pixel. The 15'' at f/4.5 gives a similar 1.55 arcsec/pixel, and both seem like really good sampling (I've put the little calculation below). If we take a "typical" FWHM of 2 to 3 arcsec for sky conditions or star sizes, you are nicely in the 1.3x to 2x sampling range in most cases. Interesting that you see a difference between your two setups. Could it be just the better SNR helping to reveal more details faster, or maybe the different central obstructions of the two scopes slightly reducing contrast on the C11? F
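For reference, this is the small plate-scale calculation behind the numbers above; the ~1764 mm focal length is just my estimate of a C11 reduced to f/6.3, so adjust it for your actual setup:

    # Image scale (arcsec/pixel) and sampling rate relative to the FWHM.

    def arcsec_per_pixel(pixel_um, focal_mm, binning=1):
        return 206.265 * pixel_um * binning / focal_mm

    def sampling_multiple(fwhm_arcsec, pixel_um, focal_mm, binning=1):
        # How many pixels span the FWHM, i.e. the "Nx FWHM" sampling rate.
        return fwhm_arcsec / arcsec_per_pixel(pixel_um, focal_mm, binning)

    # C11 at f/6.3 (roughly 1764 mm) with 6.45 um pixels binned 2x2:
    print(arcsec_per_pixel(6.45, 1764, binning=2))        # ~1.51 arcsec/pixel
    print(sampling_multiple(2.5, 6.45, 1764, binning=2))  # ~1.7x for a 2.5" FWHM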
  8. Wow, that's a lot of hours in a year. I like to do the "theory" but I definitely do less practice. 😅 I completely agree regarding bigger pixels and seeing conditions. Given a fixed focal length (and aperture), the larger-pixel sensor will always go deeper. However, what about details and object features/contrast? As you say, average or bad sky conditions would push you towards binning, but the same angular sampling can be achieved with a shorter focal length, even more so if I also consider a smaller pixel size. I'm curious to know if someone has tried this. Clearly the C11 or the 15'' will go deeper, but what about objects of moderate brightness? Will the details be comparable if the sampling resolution is the same? SNR will be lower, but as I said, I can wait a bit longer if more features emerge... F
  9. Hi Martin, Very interesting paper, it covers quite a lot. Interesting that the maximum SNR is achieved at 0.83x FWHM, assuming the source is centred on the pixel... and accepting bad under-sampling and aliasing, of course, but at least there is no need to go coarser than that for more SNR. It also confirms that to preserve details and still have good SNR, sampling should be between 1.5x and 2.0x FWHM, which is in line with what has been said before on this and other forums. Unfortunately, the formula and the script are, as you said, for point sources. At fixed angular resolution (the pixel value in your Python script), SNR goes up with aperture. That is what I would expect for detecting fainter stars or reaching deeper magnitudes... we need bigger apertures. This is probably the right definition of pixel SNR for me to use; "SNR for extended objects" is probably not the right term for what I have in mind. I found quite a few "heated" discussions around this issue, about SNR and the f-ratio myth, and I may be confused... In short, if I keep the f-ratio constant by increasing both aperture and focal length at the same time, I get the same surface brightness on my image, simply because the extra photons from the larger aperture are spread over a correspondingly larger area on the focal plane by the longer focal length (a little sketch of this geometry is below). It makes sense. What I'm not sure about is: if I have the same surface brightness, do I also have the same contrast, or the same ability to see minute features within extended objects? Also, to get this same brightness, do I need to fix the pixel size or the angular resolution? I'm a bit stuck on this 🤔. I guess the main point of my post is: if I use a smaller pixel AND a shorter focal length, how close can I get to the results obtained with larger pixels on a longer focal length? It seems I can match the resolution and the surface brightness, but not the SNR... and the details? Regarding the other points, I agree; of all the parameters, time is the one I can be most relaxed about. If an object is interesting I don't time how long I stay on it, and I'll happily trade a bit more averaging if I can pull out more features. F
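A minimal sketch of just that f-ratio geometry (no pixels or noise involved; the 100 mm / 200 mm numbers are hypothetical):

    # Why a constant f-ratio gives a constant photon density on the focal plane:
    # photons collected scale with aperture area (D^2), while the image of a patch
    # of sky of fixed angular size has an area that scales with focal length squared,
    # so photons per mm^2 on the focal plane scale with (D / focal_length)^2.

    def relative_focal_plane_density(aperture_mm, focal_mm):
        return (aperture_mm / focal_mm) ** 2   # proportional to 1 / f_ratio^2

    # Hypothetical: 100 mm f/5 vs 200 mm f/5 -> same density; the difference is
    # only in how large the object appears on the focal plane.
    print(relative_focal_plane_density(100, 500))
    print(relative_focal_plane_density(200, 1000))

What a given pixel then collects depends on how many mm^2 of that focal plane it covers, which is where pixel size and arcsec/pixel come back into the story.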
  10. Hi Everyone, Sorry for the long post, but it's cloudy outside... 😅 I have started thinking about different options for optimising/improving resolution while still using moderate focal lengths for travel. In the end, it seems that it all comes down to the final arcsec/pixel value and a good sampling of the final FWHM (combining seeing, Airy disk and tracking errors). A sampling rate between 1.6x and 2x the FWHM seems "ideal" according to recent posts from @vlaiv. Or maybe even less, considering the noisier nature of EAA images compared to AP; I guess many high-frequency details are probably lost in the noise anyway.
Regardless, it seems that small-pixel sensors matched with moderate-to-short focal lengths would be an interesting combination, giving the same sampling resolution as larger-pixel sensors matched with longer focal lengths. The pros are obvious: shorter focal length scopes are easier to handle and transport and they save weight. Also, using smaller pixels can be a way to minimise the intrinsic loss of resolution from the Bayer filter of colour cameras, i.e. a 2.4 um colour sensor would have "at least" the same resolution as a 4.8 um mono sensor. Finally, if the FWHM of the night is not good enough, a smaller pixel also allows a wider range of binning combinations to better match the best resolution and possibly boost SNR, no?
What are the cons, then? I assume that a smaller pixel size also means optics quality and focus issues are more evident when working un-binned under good/excellent seeing, and since this setup is for travel there may be more chance of ending up under a good sky… Also, I guess tracking errors would need to stay at the same level as setups using bigger pixels and longer focal lengths, right? So lighter equipment, yes, but still good tracking required. To be honest, I'm not sure how critical this is for EAA with 5/10 sec subs, but I should at least try to quantify how much error is OK. No idea how, though... Any other cons? Maybe SNR? I'm not sure, but if I have got this right (again, not really sure), this should also not be an issue. With an equivalent scope focal ratio, equivalent QE and general sensor specs, and a fixed arcsec/pixel sampling, SNR should be approximately the same regardless of the specific pixel size, focal length or even aperture, right? I'm referring here to extended objects, not point sources (where I know aperture always wins...). I guess larger-pixel sensors are considered more sensitive, or to have better SNR, than smaller ones when they are directly compared on the same optics, but if instead they are arcsec/pixel matched, things should even out, no? Please correct me if I have got this wrong... I haven't checked the maths for this.
So, anyone with practical EAA experience using small-pixel sensors like the 178, 183 or similar to confirm or disprove this? It would be nice to hear your different experiences. Just for reference, I'm attaching below the equivalent focal length (mm) required to give 1.6x, 2x and 2.5x FWHM sampling for a selection of FWHMs (arcsec) and three typical pixel sizes (um). I have highlighted the case for a 2/2.5 arcsec FWHM. I used the usual resolution formula: focal length (mm) = pixel size (um) / resolution (arcsec/pixel) * 206.265, where the resolution is the FWHM divided by the sampling factor. The first table shows un-binned sensors assuming an ideal "MONO" resolution (I know the 533 and 294 sensors are colour only, but...). The second table is binned 2x2 to emulate the Bayer-filter loss of resolution; I don't think this is fully correct, but it gives a sort of worst-case scenario for colour cameras.
Let me know if any of this makes sense... (a quick Python version of the formula is below) Flavio
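Here is a minimal Python sketch of that formula (not the exact script behind the tables; the 2.4/3.76/4.63 um pixel sizes are just typical values for the sensors mentioned):

    # Required focal length (mm) to sample a given FWHM at N pixels per FWHM,
    # using: focal_length = pixel_size / resolution * 206.265,
    # where resolution = FWHM / sampling_factor is the target arcsec/pixel.

    def required_focal_length(pixel_um, fwhm_arcsec, sampling_factor):
        resolution_arcsec = fwhm_arcsec / sampling_factor
        return pixel_um / resolution_arcsec * 206.265

    # One example row: a 2" FWHM sampled at 2x FWHM, for three typical pixel sizes.
    for pixel_um in (2.4, 3.76, 4.63):
        print(pixel_um, round(required_focal_length(pixel_um, 2.0, 2.0)), "mm")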
  11. Hi Toadeh, If you are looking for a cheaper but still good option for your equipment, I would also suggest the classic ASI224MC at about £220. It's not the latest camera, but it is a very sensitive colour camera and has been one of the most used for EAA, so you will find a lot of info and examples online. The 224 has a smaller field of view than the 533 or 294, but with your 150p and its 750 mm focal length you will get a good ~1.03''/pixel resolution and a "nice frame" of about 0.4x0.3 degrees, zooming in on most of the major bright objects. I'm thinking of getting the 150 myself as an "ideal" EAA travel scope. The 533 and 294 on a 750 mm focal length will probably be better for large objects or wide nebulae. Of course, you can image wide and crop later, and since the 533 and 224 have the same pixel size you will get the same final resolution per pixel. The 294's slightly bigger pixels give a resolution of about 1.3''/pixel, which is almost the same. Just to give you an idea, using https://astronomy.tools as suggested above (the basic arithmetic is also sketched below): with longer focal lengths, like the 10'' f/4.7 from @tteedd, the 533 and 294 would probably be better, because the field of view gets smaller and smaller, closer to that of the 224 on the 150p. Flavio
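For anyone who wants to redo these numbers without the website, this is essentially the arithmetic; the 1304x976 pixel count for the ASI224 is from memory, so double-check it against the spec sheet:

    # Image scale and field of view for a camera/scope combination.

    def image_scale(pixel_um, focal_mm):
        return 206.265 * pixel_um / focal_mm                 # arcsec/pixel

    def fov_degrees(pixel_um, focal_mm, width_px, height_px):
        scale = image_scale(pixel_um, focal_mm)
        return width_px * scale / 3600.0, height_px * scale / 3600.0

    # ASI224MC (3.75 um, 1304 x 976 pixels) on the 150p's 750 mm focal length:
    print(image_scale(3.75, 750))             # ~1.03 arcsec/pixel
    print(fov_degrees(3.75, 750, 1304, 976))  # ~0.37 x 0.28 degrees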
  12. Thanks Martin and Mike, The observations of remote galaxies you are showing on this forum are really impressive and clearly show that your two approaches are working very well. Aligning on a nearby star and then making a small goto jump to the target seems a really good idea. Now I feel like I'm cheating with my continuous use of plate solving, but I guess with a much lighter mount and under the London sky it's a way to make things a bit simpler... It may be professional deformation, but the truth is I quite like the geekiness of it, and that's part of the fun! 😜
  13. I agree; now that I have seen the improvement using GOTO on an EQ mount, I can see the value of a good alignment. Although I'm still not sure whether just levelling with the bubble on the AZ-GTi is precise enough for AZ... that's why I've never got too worked up about it. I mean, the bubble is always within the marks, but how good is that? On an EQ mount, polar alignment seems to me more objective and quantitative (especially using SharpCap or similar tools). If I have got it right, with a good polar alignment levelling becomes more about the stability of the scope and does not really affect goto or tracking, because RA just needs to rotate around the polar axis and declination is orthogonal to RA. In AZ, on the contrary, bad levelling directly affects both axes. So, on simple AZ mounts like the GTi, what would be the best way to level?
  14. ...and here are some old images using the 130p and AZ-GTi. This was a couple of years ago, so it was just a basic setup (SharpCap, SynScan and manual focus). I'm now trying to get the SW focuser working on the 130; hopefully that will improve things further. M51 (Gain 350, 15 sec x 40 frames, SW 130p + ASI224) - AZ mode, and the Fireworks Galaxy NGC 6946 (Gain 373, 5 sec x 80 frames, SW 130p + ASI224) - AZ mode
  15. Hi Keltoi, Yes, this is my main setup for most of my observations; I alternate between the ED72 and the 130p. Just last night I did some EAA using the 72ED + ASI224 and the AZ-GTi in EQ mode. It was a pleasant and fun night with everything working well. I recently added the SW focuser + HitecAstro DC controller to improve my focus, and I can see the difference. My software combination is the SynScan app on Windows, together with SharpCap, the HitecAstro focuser software and Cartes du Ciel. It is a bit of an overkill, but my long-term goal is to learn how to control everything remotely; as a basic setup you only need SharpCap and the SynScan app. About goto and tracking accuracy, I have to admit that in EQ mode, once you get the polar alignment done, everything seems a bit more reliable and easier than in AZ, but I need to do more testing on that. Still, I have used AZ quite a bit, and once aligned you can always plate solve if your goto is a bit off and automatically recentre. SharpCap plate solving is quite forgiving, allowing up to (I think) 11 degrees of error, which is a lot! The main issue I have found in my case is more about good tracking after a goto... and this may be due to levelling errors. I have the impression that for AZ good levelling is even more critical than in EQ; in EQ, once you are aligned to the Earth's axis, you are all set. Here are some images from yesterday, SW London, Bortle 8. Please note I'm still learning a lot about EAA myself... this is just to give you an idea of what can be achieved with this setup; I'm sure other people can do much better. Yesterday I was trying to squeeze a better colour balance and sharper stars out of the little ASI224. M27 (Gain 349, 5 sec x 90 frames, ED72 + ASI224 + Astronomik L-3 UV+IR) - EQ mode, M57 (Gain 349, 5 sec x 54 frames, ED72 + ASI224 + Astronomik L-3 UV+IR) - EQ mode, and M71 (Gain 299, 10.4 sec x 33 frames, ED72 + ASI224 + Astronomik L-3 UV+IR) - EQ mode