Everything posted by Deflavio

  1. Interesting experiment! Why not increase the photons by binning up a bit? Flavio
  2. When you say "it seems like the mount points, stops, waits and then moves", does the noise from the mount motors also stop, change in intensity, or stay the same? I'm just checking whether some of your gear is slipping; maybe two years without use has loosened something in your gears. Another possibility: what is the voltage range for this mount? You may want to make sure you are not running close to the lower limit. I suggest posting in the equipment/mounts section; I'm sure you will get more answers. Flavio
  3. Eh, that would be really nice. This could be the basis for a full EAA/AP simulator, including noise, optics, sky, etc. OK, I'm dreaming now, but some time ago I saw the Aberrator software. It was nice for simulating the effect of optics on planets and doubles, though with no noise. Regarding SNR, I think it should be possible to adapt the point-source algorithm to extended objects by changing how the number of photons is computed for the pixel; the rest should hopefully be similar (a very rough sketch of what I mean is below). I'll try to do more reading on it. Flavio
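Just to make the idea concrete, here is a minimal sketch of the change I have in mind. It is only my own guess, not Martin's script or Aberrator: for an extended object the photons landing in a pixel would come from the surface brightness times the sky area that pixel covers, rather than from a PSF-weighted point source. All the numbers in the example are hypothetical.

```python
# Rough sketch (my own assumption, not a published algorithm): photons collected in one
# pixel from an extended object of uniform surface brightness.
def extended_object_photons(flux_per_arcsec2, scale_arcsec, aperture_cm2, qe, exposure_s):
    pixel_area = scale_arcsec ** 2                      # sky area seen by one pixel (arcsec^2)
    return flux_per_arcsec2 * pixel_area * aperture_cm2 * qe * exposure_s

# e.g. hypothetical values: 5 photons/s/cm^2/arcsec^2, 1.5 "/px, 100 cm^2 aperture, QE 0.7, 10 s
print(extended_object_photons(5, 1.5, 100, 0.7, 10))
```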
  4. Hi Martin, I think the reason the Borg+Lodestar results look strange for FWHM 2 and 3 is that you are well past the 0.83xFWHM threshold (i.e. the point where you are under-sampled but SNR is at its maximum according to Raab's paper): your pixel is now bigger than the whole star, so there is no benefit in going any bigger. Specifically, with FWHM = 2, 3 and 4 you are sampling at 0.44x, 0.66x and 0.89x the FWHM. These results replicate pretty well what is said in the paper. [Just checking: FWHM = 2 and 3 seem to be the same plot, but I can follow the pattern in the rest.] It is also interesting that with larger FWHMs (e.g. 4, 5) the overall SNR decreases across all configurations, because we lose more photons from the central pixel (which is all this algorithm takes into account), but the Borg still seems to maintain more or less the same SNR (~8/6 at t=300 sec), or with only a slight decrease, because it is still able to swallow most of the star (a small sketch of this central-pixel fraction is below). I think these results are really useful for calculating pixel SNR or the limiting magnitude, but now I'm more convinced they don't tell the whole story for extended objects. In the examples above, the calculation for the central pixel ignores what is happening in the surrounding pixels. When we have a single point source and an increasingly large FWHM, we spread the photons more, lose them outside the central pixel, and SNR goes down. That's fine. For extended objects with an increasingly large FWHM, photons are spread/blurred from all the source elements, but if I now consider a pixel within this extended object, it both spreads its own photons and receives photons from the surrounding elements. The SNR behaviour of extended objects is different from that of a point source. In a way, I see now, this is a very convoluted way to explain blurring. 😅 F
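Here is the small sketch mentioned above: my own back-of-the-envelope estimate (not the paper's formula or Martin's script) of the fraction of a star's flux that lands in a single, centred, square pixel, assuming a Gaussian PSF with sigma = FWHM/2.355 and the star exactly centred on the pixel. The plate scales in the loop are just example values.

```python
import math

def central_pixel_fraction(pixel_arcsec, fwhm_arcsec):
    """Fraction of a centred Gaussian star's flux falling inside one square pixel."""
    sigma = fwhm_arcsec / 2.3548            # FWHM -> Gaussian sigma
    half = pixel_arcsec / 2.0
    frac_1d = math.erf(half / (sigma * math.sqrt(2.0)))   # 1-D fraction inside +/- half
    return frac_1d ** 2                     # 2-D fraction = product of x and y fractions

for fwhm in (2.0, 3.0, 4.0, 5.0):
    for pix in (0.9, 1.3, 2.1):             # example plate scales in arcsec/pixel
        print(f"FWHM={fwhm}\"  pixel={pix}\"  fraction={central_pixel_fraction(pix, fwhm):.2f}")
```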
  5. Thanks Martin! I think your last message finally cleared up my doubts. SNR is aperture, exposure and QE; yes, I fully agree with this. What got me confused was the story of the "equivalent" time required by equivalent f-ratio systems that I read around. I see now: this works IF I don't change the pixel size, like a photographer changing lenses but keeping the same camera body. Longer focal lengths give bigger images on the focal plane, spreading the photons over a larger surface, but the density stays the same for a constant focal ratio. So yes, by using the same pixel size you can get the same SNR with smaller apertures, but you have to trade resolution by using shorter focal lengths, as you said in your point 4. Conversely, if we want to keep the same resolution (using smaller pixels), we have to sacrifice SNR or increase exposure. Great, now it all fits. So, going back to the original post: small-pixel sensors can help achieve higher resolution with shorter focal lengths, but at the price of SNR or exposure time. For an ideal travel EAA setup, I guess it all comes down to how much longer I'm willing to integrate. Any quick way to calculate how much longer the exposure would need to be compared to a reference scope? (A rough attempt of my own is sketched below.) PS: Why FOV? I'm still thinking about relatively small objects like galaxies and planetary nebulae...
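The rough attempt mentioned above: a minimal sketch, mine and not from the paper, assuming the sky-background-limited case, equal QE and losses, and that the per-pixel photon rate scales with aperture area times the sky area each pixel covers. The apertures and plate scales in the example are hypothetical.

```python
# Relative exposure needed to match a reference scope's per-pixel SNR (rough, sky-limited case):
# per-pixel photon rate ~ D^2 * scale^2, so exposure ratio ~ (D_ref^2 * scale_ref^2) / (D^2 * scale^2).
def exposure_ratio(aperture_mm, scale_arcsec, ref_aperture_mm, ref_scale_arcsec):
    rate = (aperture_mm ** 2) * (scale_arcsec ** 2)
    ref_rate = (ref_aperture_mm ** 2) * (ref_scale_arcsec ** 2)
    return ref_rate / rate                  # > 1 means you need longer than the reference

# e.g. hypothetical travel setup (72 mm at 1.1 "/px) vs reference (280 mm at 1.5 "/px)
print(exposure_ratio(72, 1.1, 280, 1.5))    # ~28x longer in this example
```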
  6. I have not used the ASIair myself, but looking at a few videos on YouTube it seems that all you need to get your "evoscope" functionality is to install the ASIair, connect the mount and camera(s), and install the app on your iPad. Looking at screenshots, the equivalent of ASIStudio is already built into the ASIair and the app, so there is no need to install anything on a computer. That's the theory; in practice I would make sure that your mount works fine with the ASIair and that your WiFi signal is good enough! You may also want to consider a ZWO focuser to make this completely automatic. Regarding StarSense, I guess you might be able to align first with StarSense and then sync the ASIair, but you will want to check that this is possible. You could ask someone here who uses the ASIair whether it can sync to the existing hand-controller status, or maybe ask ZWO directly about StarSense... F
  7. Hi Mike, by 1.5x FWHM I mean a sampling rate 1.5 times finer than the actual Full Width at Half Maximum of your final "star disk". This star disk is the combined effect of seeing (the main factor), the Airy disk and tracking errors. In practice, this means you need to cover the central part (or most luminous portion) of your star with at least 1.5 pixels. In the tables I posted above, I just took 8 different FWHMs to represent different sky conditions or star sizes and tried to see what the ideal focal length would be for 3 different pixel sizes. If I look at your setup, a C11 at f/6.3 on a 6.45 um pixel with 2x2 binning gives a sampling resolution of 1.51 arcsec/pixel. The 15'' at f/4.5 gives a similar 1.55 arcsec/pixel, and both look like really good sampling (a quick check of these numbers is below). If we consider FWHMs of 2 to 3 arcsec as "typical" sky conditions or star sizes, you are nicely going from 1.3x to 2x sampling rate in most cases. Interesting that you see a difference between your two setups. Could it be just the better SNR helping to reveal more detail faster, or maybe the different obstruction in the scopes reducing contrast slightly on the C11? F
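For anyone who wants to reproduce the numbers above, here is the one-liner I used (just the standard plate-scale formula; the focal lengths are my own estimates of those two scopes):

```python
# Plate scale ["/px] = pixel size [um] / focal length [mm] * 206.265, times the binning factor.
def plate_scale(pixel_um, focal_mm, binning=1):
    return pixel_um * binning / focal_mm * 206.265

print(plate_scale(6.45, 280 * 6.3, 2))        # C11 (~280 mm) at f/6.3, 2x2 bin -> ~1.51 "/px
print(plate_scale(6.45, 15 * 25.4 * 4.5, 2))  # 15" (381 mm) at f/4.5, 2x2 bin -> ~1.55 "/px
```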
  8. Wow, that's a lot of hours in a year. I like to do the "theory" but I definitely do less practice 😅 I completely agree regarding bigger pixels and seeing conditions. For a fixed focal length (and aperture), the larger-pixel sensor will always go deeper. However, what about details and object features/contrast? As you say, average or bad sky conditions push you towards binning, but the same angular sampling can be achieved using a shorter focal length, even more so if I also consider a smaller pixel size. I'm curious to know if someone has tried this. Clearly the C11 or the 15'' will go deeper, but what about objects of moderate brightness? Will the details be comparable if the sampling resolution is the same? SNR will be lower, but as I said, I can wait a bit longer if more features emerge... F
  9. Hi Martin, very interesting paper, it covers quite a lot. Interesting that max SNR is achieved at 0.83x FWHM, assuming the star is centred on the pixel... and accepting bad under-sampling and aliasing of course, but at least there is no need to go further than that for more SNR. It also confirms that to preserve detail and still have good SNR, sampling should be between 1.5x and 2.0x FWHM, which is in line with what has been said before on this and other forums. Unfortunately, the formula and the script are, as you said, for point sources. At a fixed angular resolution (the pixel value in your Python script), SNR goes up with aperture. That is what I would expect for detecting fainter stars or reaching deeper magnitudes: we need bigger apertures. This is probably the right definition of pixel SNR that I should use; "SNR for extended objects" is probably not the right term for what I have in mind. I found quite a few "heated" discussions around this issue, about SNR and the f-ratio myth, and I may be confused... In short, if I keep the f-ratio constant by increasing both aperture and focal length at the same time, I get the same surface brightness on my image, simply because the extra photons from the larger aperture are spread over a correspondingly larger surface on the focal plane by the longer focal length. That makes sense. What I'm not sure about is: if I have the same surface brightness, do I also have the same contrast, or the same ability to see minute features within extended objects? Also, to get this same brightness, do I need to fix the pixel size or the angular resolution? I'm a bit stuck on this 🤔. I guess the main point of my post is: if I use a smaller pixel AND a shorter focal length, how close can I get to the results obtained with larger pixels on a longer focal length? It seems I can match resolution, not SNR, but the same surface brightness... and details? Regarding the other points, I agree: of all the parameters, time is the one I can be most relaxed about. If an object is interesting I don't time how long I stay on it. I'll be happy to trade a bit more averaging if I can just pull out more features. F
  10. Hi everyone, sorry for the long post but it's cloudy outside... 😅 I have started thinking about different options for optimising/improving resolution while still using moderate focal lengths for travel. In the end, it seems that it all comes down to the final arcsec/pixel value and a good sampling of the final FWHM (combining seeing, Airy disk and tracking errors). A sampling rate between 1.6x and 2x of the FWHM seems "ideal" according to recent posts from @vlaiv, or maybe even less considering the noisier nature of EAA images compared to AP; I guess many high-frequency details are probably lost in the noise anyway.

Regardless, it seems that small-pixel sensors matched with moderate-to-short focal lengths would be an interesting combination, giving the same sampling resolution as larger-pixel sensors matched with longer focal lengths. The pros are obvious: shorter focal length scopes are easier to handle and transport, and they save weight. Also, the use of smaller pixels can be a way to minimise the intrinsic loss of resolution from the Bayer filter of colour cameras, i.e. a 2.4 um colour sensor should have "at least" the same resolution as a 4.8 um mono sensor. Finally, if the FWHM of the night is not good enough, a smaller pixel also allows a wider range of binning combinations to better match the best resolution and possibly boost SNR, no?

What are the cons then? I assume that a smaller pixel size also means optics quality and focus issues are more evident when working un-binned under good/excellent seeing, and since this is a travel setup there may be more chances of ending up under a good sky... I also guess tracking errors need to be kept at the same level as setups using bigger sensors and longer focal lengths, right? So lighter equipment, yes, but the same tracking requirements. To be honest, I'm not sure how critical this is for EAA with 5/10 sec subs, but I should at least try to quantify how much error is OK. No idea how, though... Any other cons? Maybe SNR? Not sure, but if I have got this right (again, not really sure), this should also not be an issue: for equivalent scope focal ratios, assuming equivalent QE and general sensor specs, and a fixed arcsec/pixel sampling, SNR should be approximately the same regardless of the specific pixel size, focal length or even aperture, right? I'm referring here to extended objects, not point sources (where I know aperture always wins...). I guess larger-pixel sensors are considered more sensitive, or as having better SNR than smaller ones, when they are compared directly against the same optics, but if instead they are arcsec/pixel matched, things should even out, no? Please correct me if I have got this wrong... I haven't checked the maths. So, has anyone with practical EAA experience using small-pixel sensors like the 178, 183 or similar tried to confirm or disprove this? It would be nice to hear your different experiences.

Just for reference, I'm attaching below the equivalent focal length (mm) required to give 1.6x, 2x and 2.5x FWHM sampling for a selection of FWHMs (arcsec) and three typical pixel sizes (um); I have highlighted the case of 2/2.5 arcsec FWHM. I used the usual resolution formula: focal length (mm) = pixel size (um) / resolution (arcsec) * 206.265. The first table shows un-binned sensors assuming an ideal "mono" resolution (I know the 533 and 294 sensors are colour only, but...). The second table is 2x2 binned to emulate the Bayer filter's loss of resolution. I don't think this is fully correct, but it gives a sort of worst-case scenario for colour cameras. (A small script reproducing these focal lengths is shown after this post.)
Let me know if any of this makes sense... Flavio
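Here is the small script mentioned above: a minimal sketch using only the resolution formula quoted in the post, which reproduces the kind of numbers in the tables. The FWHM list and the pixel sizes (roughly the 178/183, 533 and 294 sensors) are just the example values I picked.

```python
# Focal length needed for a given sampling rate of the FWHM:
#   target resolution ["/px] = FWHM / sampling_rate
#   focal length [mm]       = pixel size [um] / resolution ["/px] * 206.265
def focal_length_mm(pixel_um, fwhm_arcsec, sampling_rate):
    resolution = fwhm_arcsec / sampling_rate
    return pixel_um / resolution * 206.265

pixel_sizes = (2.4, 3.76, 4.63)         # um, example small-pixel sensors
fwhms = (1.5, 2.0, 2.5, 3.0, 4.0)       # arcsec

for rate in (1.6, 2.0, 2.5):
    print(f"--- {rate}x FWHM sampling, focal lengths in mm for pixels {pixel_sizes} um ---")
    for fwhm in fwhms:
        row = "  ".join(f"{focal_length_mm(p, fwhm, rate):6.0f}" for p in pixel_sizes)
        print(f"FWHM {fwhm:3.1f}\": {row}")
```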
  11. Hi Toadeh, if you are looking for a cheaper but still good option considering your equipment, I would also suggest the classic ASI224MC at about £220. It's not the latest camera, but it is a very sensitive colour camera and has been one of the most used for EAA, so you will find a lot of info and examples online. The 224 has a smaller field of view compared to the 533 or 294, but with your 150P and its 750 mm focal length you will get a good 1.05''/pixel resolution and a "nice frame" of 0.4x0.3 degrees, zooming in on most of the major bright objects. I'm thinking of getting the 150 myself as an "ideal" EAA travel scope. The 533 and 294 on a 750 mm focal length will probably be better for large objects or wide nebulae. Of course, you can image wide and crop later, and since the 533 and 224 have the same pixel size you will get the same final resolution per pixel. The 294 has slightly bigger pixels, giving 1.2''/pixel, which is almost the same. Just to give you an idea, using https://astronomy.tools as suggested above: with longer focal lengths, like the 10'' f/4.7 from @tteedd, the 533 and 294 would probably be better, because the field of view gets smaller and smaller, closer to that of the 224 on the 150P. Flavio
  12. Thanks Martin and Mike. The observations of remote galaxies you are showing on this forum are really impressive and clearly show that your two approaches work very well. Aligning to a nearby star and then making a small goto jump to the target seems a really good idea. Now I feel like I'm cheating with my continuous use of plate solving, but I guess with a much lighter mount and under the London sky it's a way to make things a bit simpler... It may be an occupational habit, but the truth is I quite like the geekiness of it, and that's part of the fun! 😜
  13. I agree; now that I have seen the improvement using GOTO in EQ mode, I can see the value of a good alignment. Although I'm still not sure whether just levelling with the bubble on the AZ-GTi is precise enough for AZ... That's why I've never worried too much about it: the bubble is always within the marks, but how good is that? On an EQ mount, polar alignment seems to me more objective and quantitative (especially using SharpCap or similar tools). If I've got it right, with a good polar alignment levelling becomes more about the stability of the scope; it doesn't really affect goto or tracking, because RA just needs to rotate around the polar axis and declination is orthogonal to RA. On the contrary, in AZ mode bad levelling directly affects both axes. So, on simple AZ mounts like the GTi, what would be the best way to level?
  14. ...and here are some old images using the 130P and AZ-GTi. This was a couple of years ago, so it was just a basic setup (SharpCap, SynScan and manual focus). I'm now trying to get the SW focuser working on the 130; hopefully that will improve things further. M51 (Gain 350, 15 sec x 40 frames, SW 130P + ASI224) - AZ mode, and the Fireworks Galaxy NGC 6946 (Gain 373, 5 sec x 80 frames, SW 130P + ASI224) - AZ mode
  15. Hi Keltoi, yes, this is my main setup for most of my observations; I alternate between the ED72 and the 130P. Just last night I did some EAA using the 72ED + ASI224 and AZ-GTi in EQ mode. It was a pleasant and fun night with everything working well. I recently added the SW focuser + HitecAstro DC controller to improve my focus and I can see the difference. My software combination is the SynScan app on Windows, together with SharpCap, the HitecAstro focuser software and Cartes du Ciel. It is a bit overkill, but the long-term goal for me is to learn how to control everything remotely. As a basic setup you only need SharpCap and the SynScan app. About goto and tracking accuracy, I have to admit that in EQ mode, once you get the polar alignment done, everything seems a bit more reliable and easy than in AZ, but I need to do more testing on that. Still, I have used AZ quite a bit, and once aligned you can always plate solve if your goto is a bit off and automatically recentre. SharpCap plate solving is quite forgiving, allowing up to (I think) 11 degrees of error, which is a lot! The main issue I have found in my case is getting good tracking after a goto... and this may be due to levelling errors. I have the impression that for AZ a good levelling is even more critical than for EQ; in EQ, once you are aligned to the Earth's axis, you are all set. Here are some images from yesterday, SW London, Bortle 8. Please note I'm still learning a lot about EAA myself... this is just to give you an idea of what can be achieved with this setup; I'm sure other people can do much better. Yesterday I was trying to squeeze a better colour balance and sharper stars out of the little ASI224. M27 (Gain 349, 5 sec x 90 frames, ED72 + ASI224 + Astronomik L-3 UV+IR) - EQ mode, M57 (Gain 349, 5 sec x 54 frames, ED72 + ASI224 + Astronomik L-3 UV+IR) - EQ mode, and M71 (Gain 299, 10.4 sec x 33 frames, ED72 + ASI224 + Astronomik L-3 UV+IR) - EQ mode
  16. Sorry, not sure if I should open a separate thread for this, but while we're talking about different spiders... just out of curiosity, what would the diffraction spikes from a double 4-vane spider look like? On one hand, I see from Teleskop-Express that the "double spider" can help with focusing, somewhat like a Bahtinov mask but without the need to remove it. That would be useful, but is it true? https://www.teleskop-express.de/shop/product_info.php/info/p6013_TS-Optics-Carbon-Double-Spider-for-225-240mm-inside-tube-diameter.html On the other hand, digging through past posts using this spider, I see that the diffraction spikes are also "dashed". Someone mentioned the effect of NB filters, but I can see the same pattern with a colour camera. See these two posts: So are these double vanes the cause of the interference pattern? Maybe due to imperfect focus 🤔? Sorry to go a bit OT, but I got intrigued by the possibility of using spiders to improve focus. F
  17. Hi Bobby, my setup is similar to yours: ASI224 + SW 72ED or SW 130P on an AZ-GTi. I often use plate solving from SharpCap to refine my gotos and to adjust the alignment (plate solving and adjusting before "confirming" the aligned star in the SynScan Pro app...). It is quick and it works. Maybe if I levelled the mount better beforehand, or spent longer on star alignment, I would need less plate solving later, I don't know... The truth is I usually want to get to a target quickly and, to be completely honest, I really enjoy seeing the magic of plate solving in action and watching the object slide into the (rather small) view of my camera. In AZ mode, with a quick and not perfect alignment, I can still do EAA, stacking 5 to 10 sec subs for a few minutes without problems. I'm about to try the EQ mode of the GTi; maybe things will get even better, let's see. 🤞
  18. I have the 130PS AZ-GTi bundle. I do mostly EAA and I'm very happy with it. One thing I'm not entirely happy with is the focuser of the 130PS, but if you go for the 130PDS you will have everything sorted... Flavio
  19. OK, so the 15 mm is fine in terms of vignetting at a 0.68 reduction, but it may become limiting as soon as I go for stronger reductions. For reference, I'll put here the field stops of a few Plossls so people can compare with their own sensors. From the TV website we have (eyepiece focal length = field stop): 15 mm = 12.6 mm, 20 mm = 17.1 mm, 25 mm = 21.2 mm, 32 mm = 27 mm, 40 mm = 27 mm. I guess the 15 mm is out for reductions stronger than ~0.5 with a 6 mm sensor (a rough check of this is below). Also, there is no obvious benefit in the 40 mm vs the 32 mm... I had the 40 mm for an old Mak; now I've just realised that on both the 130PS and the 72ED its exit pupil is really off and I'm getting the secondary in the view on the reflector. Does the exit pupil value also affect the EP projection somehow, or just the field stop? I should probably move to the SW 25 mm... although now I'm tempted to get a TV 25 mm, eheh. About the blurring with the 15 mm: well, that's probably me. I was more interested in showing the coma, and with a windy day yesterday the focus was moving a bit. About focusing, the 130PS has a very simple focuser, not dual speed. I do have a Baader T2/1.25 elliptical focuser that may help a bit. No coma corrector, sorry. One question: in your calculations for the coma you say the coma-free circle is only 2.8 mm, but in my usual EAA sessions I don't see obvious distortions with the 224, while now, at 0.65 reduction, they are very obvious from about 1/4 of the sensor size already, and will probably be worse with stars... is this just coma from the reflector? With the 40 mm the image is all bad, but I don't think I see it increasing from the centre, no? Anyway, now that I know how to play with extensions, going back to the 72ED may actually avoid both the coma and the focuser issues. Let's see if I can do something between today and tomorrow; after that I will be travelling for work for a couple of weeks...
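The rough check mentioned above. The simplifying assumption is entirely mine: the sensor diagonal, mapped back through the reduction factor, has to fit inside the eyepiece field stop, so the strongest usable reduction is roughly sensor diagonal / field stop. The 6 mm diagonal is just the example value for my sensor.

```python
# Field stops from the TV website (eyepiece focal length [mm] -> field stop [mm])
field_stops_mm = {15: 12.6, 20: 17.1, 25: 21.2, 32: 27.0, 40: 27.0}

def min_reduction(sensor_diag_mm, field_stop_mm):
    # Assumption: the sensor "sees" a circle of sensor_diag / reduction at the eyepiece
    # field stop, so vignetting starts once reduction drops below sensor_diag / field_stop.
    return sensor_diag_mm / field_stop_mm

for fl, fs in field_stops_mm.items():
    print(f"{fl} mm Plossl: usable down to ~{min_reduction(6.0, fs):.2f}x with a 6 mm sensor")
```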
  20. OK, more testing this morning, and I managed to get proper EP projection working at quite strong reductions with different eyepieces. Interesting results, but the image quality is not there yet... Here's a quick summary: @vlaiv, as you mentioned, by going to eyepieces with shorter focal lengths the extensions become more manageable. However, it seems I'm getting quite a bit of distortion (coma?) even at 0.68 reduction. I got similar distortion with a SW Plossl 25 mm, but I'm not sure how good that eyepiece is; optically my TV 15 mm seems really good... Going back again to the long TV 40 mm, I can get EP projection by adding more spacers between the camera and the EP projection adapter. I have to admit the total extension is quite embarrassing, and I'm pretty sure there is some flexure going on. Still, with the 40 mm I think the image quality is more uniform, although fuzzier/blurrier. I tried but can't get a better focus... either because the focus travel is very short or because the image quality is poor. Unfortunately, I don't have a 32 mm to test... Next I may try the 40 mm with less reduction and see if the image quality improves.
  21. Well, at least with video astronomy, live stacking and NV we have plenty of choice, and that's the nice thing about this dynamic hobby. I'm sure we can keep debating how NV fits inside or outside the general umbrella of EAA. To be honest, if people doing NV feel they are doing more of a "pseudo"-visual approach (with no negative connotation intended), well, at this stage I don't have a problem with it, or rather, I don't think it's mine to decide, but that's just me. Where I think things get confusing is with this new general definition of EEVA... the term "visual" doesn't fit, for me, with live stacking and video astronomy techniques, and it opens the door to more confusion about visual observation vs the broad meaning of observation... Is it just me, or is this a common impression?
  22. I think there is a terminology problem here. On one side, people consider observation = visual observation, because that means looking directly at real photons; on the other side, people think more along the lines of observation = scientific observation, in the sense that regardless of the tool (visual/NV/EAA, etc.) the key point is to be able to observe something without worrying about a perfect image/view, either technically or artistically... Just yesterday, people on both CN and SGL confirmed a supernova in M100 by looking back at some posted images. I'm biased here, but for me this is a good and solid scientific observation that could have been done with any tool, or visually... I think we can debate, agree or disagree on this forever, and I have the feeling we may never reach an agreement, because everyone is coming from different directions and experiences. Nothing wrong with that. Maybe we should first try to see if we can find some common ground.

Let me ask a couple of questions, because I'm still fairly new here and I would like to understand the general feeling about EEVA: 1) Do you want to associate the term "visual" with EAA/live stacking observations? 2) Do you want to associate the term "visual" with video astronomy observations? I personally would say no. How many people disagree with this? Do you prefer to keep the "visual" badge for EAA as well? I'm asking because I don't fully understand the reason for the new name of this section. So we are doing EAA/live stacking and we are calling this section of the forum Electronically Enhanced Visual Astronomy? I think EAA can embrace both live stacking and video astronomy, and I have no problem admitting that these observations are not "visual" in the "visual observation" sense, but they are still genuine observations.

Reading back to the start of the thread, I got the impression that this was probably all done to be more inclusive of NV, which I think is a really cool technique. Asking the same question about "visual" and NV: 3) Do you want to associate the term "visual" with night vision? I'm more inclined to say yes, but we can debate the pros and cons. Nevertheless, given the name "Night Vision" alone, I would say there is no doubt that NV can be considered far more visual than video or EAA, no? So if we separate the concept of "visual" from "observation", I think we may have a way forward. My suggestion is simply to go with a name like "Electronically Enhanced Astronomy and Night Vision". In this way we are inclusive, but we also keep everyone's identity intact and keep things easy to find/search... Also, one last comment: EAA and NV have now caught on almost everywhere on the web and beyond, so going with a different name like EEVA for everything may also be damaging. This is my personal view, and I hope I haven't offended or challenged anyone...
  23. Thanks @vlaiv, that's really helpful. Now I understand the "proper" EP projection much better. I don't think I have enough extensions to reach those distances on the 72ED, but yes, I can try with a shorter eyepiece and maybe use my 130P instead; since it already has a deep focus position, that would make better use of the extensions. In the worst case... I'll get more!
  24. Here are the results of a quick test I did this afternoon... Setup: SW Evostar 72ED, ASI224MC, Televue Plossl 40 mm, TS eyepiece projection adapter for 30-42 mm eyepieces as suggested by @vlaiv, and several M42 extenders. And here are some results... Left: Exposure=0.000168s, gain=234 (auto); Right: Exposure=0.000168s, gain=0. OK, I'm not entirely sure how I should calculate the reduction, but just looking at the scaling between the different tree branches, I think I'm getting something close to 0.294. So if the scope is f/5.8, does that mean I'm running at... ahem... f/1.7!? Is this right, or am I missing something here? (A quick sanity check is below.) More details about the test: the eyepiece projection was in "prime focus" mode, as discussed earlier in this thread. Unfortunately, the right image was taken with the eyepiece/camera handheld, because I can only reach focus by removing all the 1.25'' adapters; more thinking is required here, as the focus point is going very deep inside. If I "ignore" the blurriness (out of focus), and also consider that I was clearly off-axis when I took the picture, I think the distortions are not too bad... The camera/sensor was as close to the eyepiece as the EP adapter and the sensor position allowed, I think ~10-12 mm, but I need to check. I tried to increase the distance by adding a T-mount UV-IR filter or other small spacers, but then I could not reach focus any more; I guess it moved further inside the tube. Unfortunately, I didn't manage to get a bright image in the "proper EP" configuration. I'm working on it, but it seems there is a minimum distance between eyepiece and camera below which I can't get any focus. I need to look more into the theory of EP projection and where/how the focus moves... Anyway, regardless of the final reduction I'm getting, I think EP reduction is definitely something worth experimenting with more...
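The sanity check mentioned above, assuming (a simplification on my part) that the reduction factor measured from the image-scale ratio applies directly to the focal ratio:

```python
# Effective focal ratio after reduction: f_effective = f_native * reduction_factor.
f_native, reduction = 5.8, 0.294
print(f"effective focal ratio ~ f/{f_native * reduction:.1f}")   # ~ f/1.7
```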
  25. Does that mean the shutter keeps clicking, or can you raise the mirror and keep the shutter open? I've heard unconfirmed rumours that some cameras may be able to "convert" into a more traditional astro-camera mode by using a software shutter. I think the very expensive Nikon D850 may do that; mine of course can't. Not sure about Canon systems. Maybe mirrorless?