Everything posted by vlaiv

  1. The premise is rather simple - take your smart phone, attach it to the telescope and eyepiece, let it take an image of what you see - and stack those images. A perfect way for novice astronomers to get into EEVA and possibly imaging. So I got myself a phone adapter to test this idea out - and the idea itself really does seem to work. It is quite easy to attach a phone and take a picture of what is seen in the eyepiece. The question is - is it good for EEVA, and if so, will it work for a novice astronomer? I'll outline my case and point out the interesting parts.

Equipment that I initially planned to use:
- SkyMax 102 Maksutov from SW - a 102mm F/13 scope
- 32mm GSO Plossl
- my phone, a Xiaomi Mi A1 - 3.8mm FL F/2.2 lens (equivalent of a 29mm regular lens), 1.25µm pixel size, 4000x3000 resolution, 5mm x 3.75mm sensor

That lens covers about 80° on the diagonal, and the EP I'll be using has a 50° FOV - it will fit entirely onto the chip with some room to spare. In fact, this is what it looks like:

What would be the equivalent speed of such a "compound lens system"? We have a base speed of F/13 and a 32mm / 3.8mm = ~x8.421 reduction. That would make it F/1.54 - amazing! But then I realized something important - a 3.8mm F/2.2 system has an effective aperture of only ~1.73mm, while our telescope + eyepiece combination gives an exit pupil of 32 / 13 = ~2.462mm. Using this combination will actually stop our aperture down to ~71.7mm.

Lesson 1 - we should try to match the telescope exit pupil to the aperture of the phone lens. Phone lenses have very small apertures, which pretty much rules out fast scopes, as most of them will give a small enough exit pupil only with very short focal length eyepieces - and those won't cover much of the sky. For example, an F/5 scope needs an 8.5mm eyepiece to match the exit pupil to my phone.

We can try to mitigate this by being smart and using a wider field of view eyepiece. These have a smaller exit pupil for the same field stop / area of sky covered. The best candidate I've found for my setup is a 70° 20mm WA modified Erfle EP (yes, there are other eyepieces like the 24/82, but those tend to be much more expensive, and we are talking about a budget option for novice astronomers here). Notice that the exit pupil is 1.57mm with that eyepiece - less than the aperture of the phone lens - so that is good. What is not so good is that we are using a 70° eyepiece, and things get quite warped at those angles. There will be some pincushion distortion, and we must take care of that - the stacking software needs to be aware that the image has been non-linearly deformed.

Here comes the fun part - we seem to have it all working, but my adapter has no "height adjustment". You can easily center the phone camera on the eyepiece, but there is no way to adjust the distance between the two. Why is that important, and how hard is it to get right? Here is a handy diagram to explain what is going on:

Since we have 17mm of eye relief, a 70° AFOV and such a small difference between exit pupil and entrance pupil, there is really not much room for error - the phone camera must be positioned very precisely. Even a small movement away from the ideal position will result in serious vignetting: less than 1mm sideways can vignette the edges by more than 50%, and a few millimeters up or down can completely cut light from the edges of the FOV.

Lesson 2 - not every phone adapter is good enough to provide accurate alignment of the phone camera, and at the moment I can't imagine a "protocol" I could use to place the phone accurately.
If you plan on doing EEVA with a different lens and camera, bear in mind that you'll need to position them accurately, and that the eye relief quoted for an eyepiece is often not accurate - so you'll need to devise some method of making sure you are spot on.

BTW, the image above was taken with an F/4 finder/guider scope and the 32mm Plossl - giving an 8mm exit pupil. Even with such a large exit pupil, I think the left side has more vignetting than the right, since I did not center the lens at the exact sensor center - and I feel the phone was too close to the EP (the 32mm Plossl has large eye relief).

I still think the afocal method is a viable option for EEVA - just not with a phone camera. A dedicated astronomy camera like the ASI178 with a matching c-mount lens could be an option. With a 32mm Plossl and a 12mm c-mount lens this is a nice combination, and the lens only needs to operate at F/4 to let in the complete exit pupil.

@Thalestris24 - this is related to the afocal method, not imaging per se, but you might find it interesting - exit pupil matching is not something we immediately think of when considering the afocal method.
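As a footnote to the exit pupil discussion in the post above, here is a minimal Python sketch of that arithmetic (the helper names are my own, purely for illustration):

```python
def exit_pupil_mm(ep_fl_mm, scope_f_ratio):
    """Exit pupil of scope + eyepiece: eyepiece FL divided by scope F/ratio."""
    return ep_fl_mm / scope_f_ratio

def entrance_pupil_mm(lens_fl_mm, lens_f_ratio):
    """Clear aperture of the camera lens: its FL divided by its F/ratio."""
    return lens_fl_mm / lens_f_ratio

def effective_aperture_mm(scope_aperture_mm, exit_pupil, entrance_pupil):
    """Aperture actually used when the lens cannot swallow the whole exit pupil."""
    if entrance_pupil >= exit_pupil:
        return scope_aperture_mm
    return scope_aperture_mm * entrance_pupil / exit_pupil

ep = exit_pupil_mm(32, 13)                    # ~2.46mm - 32mm Plossl at F/13
pupil = entrance_pupil_mm(3.8, 2.2)           # ~1.73mm - the phone lens
print(effective_aperture_mm(102, ep, pupil))  # ~71.6mm - the scope is stopped down
```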
  2. Here is a simple method of doing it (at least I think it is simple). It requires a way to measure distance and a white circle on a black background. The test is conducted by standing in front of the wall with the circle on it, at a distance such that the circle is approximately the size of the quoted FOV. Now take the eyepiece and place it in front of one of your eyes, as if you were observing through it. You should see the field stop of the eyepiece and out-of-focus light in the FOV. Keep your other eye open and move forward/backward until the two circles overlap. Measure the distance from your eye to the circle on the wall and use the diameter of the circle to calculate its angular diameter at that distance.

This technique can also be used to compare the fields of view of two eyepieces. You don't need a circle - just a blank white wall or any other background that will illuminate the FOV properly. Place each eyepiece on its respective eye, as when binoviewing. You should be able to tell instantly which FOV is bigger, since the circles will overlap but have different sizes (or the same size if the FOVs are equal).
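The angular diameter step is just basic trigonometry - a one-function sketch (the example numbers are mine):

```python
import math

def angular_diameter_deg(circle_diameter_m, distance_m):
    """Angle subtended by a circle of given diameter at a given distance."""
    return math.degrees(2 * math.atan(circle_diameter_m / (2 * distance_m)))

# e.g. a 1m circle matched at ~1.07m subtends ~50 deg - a typical quoted AFOV
print(angular_diameter_deg(1.0, 1.07))
```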
  3. There is a well-known equation that goes: Money in wallet = 1 / Number of scopes
  4. Indeed - 32bit floating point is actually quite enough for almost any astro related processing. I advocate converting to 32bit format as soon as processing starts - early in calibration - and keeping it all the way to the end. At one point I was afraid that even 32bit format would not be enough with modern CMOS sensors and plenty of short exposures - and that would be true for a 32bit fixed point format. Luckily, floating point, although it carries only 24 bits of precision, is more than enough, as it can store vast ranges of numbers (32bit fixed point has more precision bits, but the dynamic range of the set of numbers is also limited to 32 bits - not so with floating point).
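A quick numpy illustration of that range argument (a sketch; the numbers are just examples):

```python
import numpy as np

# float32 carries a 24-bit significand (~7 significant digits) but spans roughly
# 1e-38 to 3e38, so relative precision is preserved at any level a stack reaches.
print(np.finfo(np.float32).eps)         # ~1.19e-7 relative resolution
print(np.spacing(np.float32(1000.0)))   # smallest representable step near 1000

# 32bit fixed point has more precision bits but a hard ceiling: int32 cannot
# hold any value beyond ~2.1e9, while float32 just rounds and carries on.
print(np.float32(2.0e9) + np.float32(2.0e8))  # ~2.2e9, still fine in float32
```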
  5. You are right - but the selection of the storage / transport format should not be considered part of the stacking process.
  6. Go to settings (the cog icon next to CC in the bottom right corner) - subtitles with auto-generated Russian will be selected, but there is also an auto-translate option below, and there you can choose the language.
  7. There is an option to turn on auto captioning with auto translation. It helps a bit in understanding what is going on.
  8. Belt mod. As far as I can tell from the graph, you have a fast oscillating component of about 10-15s in RA that is causing most of the issues. I just looked it up - the stepper tooth period is 10.2s for the EQ6. Either get the belt mod, try to adjust the gear meshing in the RA axis, or maybe look into the predictive guiding algorithm in PHD2. Not sure if it is part of the stable release, but I do remember it being in dev / beta test at some point. The algorithm tries to "sense" fast components and then makes pre-emptive corrections. BTW - 0.8" RMS is to be expected for a stock EQ6. If you get that oscillation sorted, you'll be able to bring it down to say 0.5" RMS (if both DEC and RA are say 0.32", the total will be ~0.45").
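The total figure in the parentheses comes from adding the two axes in quadrature - a one-liner sketch:

```python
import math

ra_rms, dec_rms = 0.32, 0.32        # arcsec, per-axis guide RMS
print(math.hypot(ra_rms, dec_rms))  # sqrt(0.32^2 + 0.32^2) = ~0.45" total
```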
  9. I just realized I made a mistake in the above calculation. A 5.5mm EP and a 3.8mm camera lens will give a smaller image, by a factor of ~1.447 - not larger! The eyepiece and camera lens here form a relay lens pair. If both have equal focal lengths, they produce no magnification; if the first has a shorter FL than the second, they produce magnification. Unfortunately this means the phone camera is a bad choice for testing telescopes, as phone cameras have very short focal length lenses (on the other hand, it is a good choice for EEVA, as such a combination acts as a powerful focal reducer). The above is still doable - but I need a regular lens of about 50mm or so and a different attachment system. Back to the drawing board.
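The relay pair relationship in one line (my own helper, just restating the correction above):

```python
def relay_magnification(eyepiece_fl_mm, camera_lens_fl_mm):
    """Afocal relay magnification: camera lens FL over eyepiece FL."""
    return camera_lens_fl_mm / eyepiece_fl_mm

print(relay_magnification(5.5, 3.8))  # ~0.69 - the image shrinks by ~1.447
print(relay_magnification(3.8, 5.5))  # >1 only when the first FL is shorter
```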
  10. I have an HEQ5 and an AzGti. Both could in principle be used to slew the mount just a bit and take two successive images. Then I would calculate the pixel shift and divide that by the angle the mount reports. I wonder how accurate that would be, given backlash and stepper resolution?

This is not a problem for a prime focus setup - the pixel scale is not that large, and I can slew for example 10-20 arc minutes and still have the reference in the frame. I wonder about "very high magnification" - or rather the high pixel scale needed to capture MTF? Say I use a 5.5mm eyepiece with my mobile phone. My phone is a Xiaomi Mi A1 with a 26mm (equivalent?) lens and 1.25µm pixel size. I say equivalent since there is no way they packed a 26mm FL lens in there; the sensor size is ~1/2.9", so the actual focal length of the lens would be 3.8mm?

In any case, that would mean 1300mm x 5.5 / 3.8 = ~1900mm at 1.25µm pixel size. That is 0.14"/px, and over the 4000px sensor width that is 560" or 9.333' - well, not bad. If my calculation is OK, then yep, I could use the mount to measure pixel scale. Good point - I'll have to try that. I just need to tell the mount to slew 5 arc minutes and see how much the image shifts in pixels.
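The pixel scale step uses the standard formula; here is a sketch (note that item 9 above corrects the afocal focal length figure used here):

```python
def pixel_scale_arcsec(fl_mm, pixel_um):
    """Standard pixel scale formula: 206.265 * pixel size (um) / FL (mm)."""
    return 206.265 * pixel_um / fl_mm

print(pixel_scale_arcsec(1900, 1.25))  # ~0.14"/px with the figure quoted above
# item 9 corrects the afocal FL to 1300 * 3.8 / 5.5 = ~898mm:
print(pixel_scale_arcsec(898, 1.25))   # ~0.29"/px with the corrected value
```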
  11. By the way - I checked the link @andrew s provided and it is quite a sensible option. In fact, if we can get many tiny lenses, I think we would be able to build one. Not sure what the smallest available lenses are, but if we could fit say about 100 over the span of a 2" filter - well, one would need just a barlow and a regular camera to do their own test. Granted, the profile of the barlow must be known for this to work. My greatest concern is the collimation lens. The wavefront needs to be collimated properly, and in the configuration they present on their website - where wavefront sensing is done close to the focal plane - you have to match the F/ratio of the scope used. How much aberration will the collimation lens introduce?
  12. I would not call it quite dead yet - maybe a bit stalled at the moment. I got myself a phone adapter so I can do afocal imaging with a phone. I guess that is probably the simplest way for people to conduct any sort of test. It's not going to be a very accurate test, since we don't know: 1. the eyepiece MTF, 2. the phone camera / camera lens MTF. At this point I'm not sure I properly understand what sort of influence those might have on the result.

I also want to get a c-mount lens to attach to my astro cameras so I can use that instead. Just a few moments ago I realized there is this great thing called step up / step down rings, and there is also an M52/T2 adapter to be purchased - which means I can connect a proper lens to the scope. I'll need a good laser range finder so I can accurately measure angular sizes. None of these cost much, but it does add up, and I'm in a situation where I really have to save up (my new house is going to be finished in a few months and I'll need to move and purchase all the new stuff I won't be moving), so I'm not really sure when I'll get all the bits I need for this.

To be honest, I'm sort of depressed by the fact that the WinRoddier test on my synthetic data is not working properly - and I can't figure out why: is it my data or the software itself? At the moment there is no real benchmark we can use to compare results. I could write software that does a similar analysis to WinRoddier - probably using a different approach, since the way I see it, my approach would not require exact defocus for the in/out images, and would not limit the data set to just one pair of in/out images. Not sure how fast it would be, since it would do a brute force search across Zernike polynomial space (well, not quite brute force - more like gradient descent along each coordinate, since Zernike polynomials form an orthonormal set). That means possibly thousands of FFTs per fit, which can't be very fast. But then, even if I had the time - how would we know it provides good results? What would be our benchmark to test against?
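For what it's worth, the coordinate-wise search idea could look something like this - a hedged sketch, with a toy quadratic cost standing in for the real FFT-based comparison of simulated and measured defocused star images:

```python
import numpy as np

def coordinate_descent(cost, coeffs, step=0.05, sweeps=30):
    """Minimize cost by stepping one Zernike coefficient at a time."""
    best = cost(coeffs)
    for _ in range(sweeps):
        improved = False
        for i in range(len(coeffs)):
            for delta in (+step, -step):
                while True:                  # keep stepping while cost drops
                    trial = coeffs.copy()
                    trial[i] += delta
                    c = cost(trial)
                    if c >= best:
                        break
                    best, coeffs = c, trial
                    improved = True
        if not improved:
            step *= 0.5                      # refine once a sweep stalls
    return coeffs, best

target = np.array([0.3, -0.1, 0.05])         # "true" coefficients for the toy
coeffs, best = coordinate_descent(lambda z: float(np.sum((z - target) ** 2)),
                                  np.zeros(3))
print(coeffs, best)                          # converges to target, cost -> 0
```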
  13. You are absolutely right - I don't know how that happened. I entered the digits into the calculator and pressed something (maybe *2 instead of raising to the power of 2?) and was not paying much attention to the figure I got.
  14. To add to what's been said - I find that otherwise comfortable eyepieces cause me a similar issue with the Mak102. This is because the magnifying secondary mirror is so close (so I've been told), which extends the eye relief of eyepieces further - similar to what a barlow does. The 32mm Plossl is a very comfortable eyepiece when used in other scopes, but in the Mak it suffers from two issues. First is slight vignetting - the Mak102 has a slightly smaller fully illuminated field and can't use the max field stop of a 1.25" EP, so the 32mm Plossl and the 26mm ES62 will both have this slight issue. Second is, of course, the very long eye relief. The upside is that eyepieces with shorter eye relief feel more comfortable to use in the Mak.
  15. Well, you are right - from that distance it is very hard to tell, as both mounts look very similar except for the size - and I'm not sure there is any way we could figure out which one it is, so it might as well be an EQ6.
  16. I bet you did not notice this beauty in the same scene: and no, that is not a Dob - it is a Skywatcher 200P on an HEQ5 mount: Indeed, it is pointing "the wrong way" - but I would also point the scope like that when not in use. When you point it up, dust settles on the mirror - this way it can't. For anyone wishing to watch the show - in my view it is a very good show and worth the time.
  17. Maybe look at guiding resolution vs the mount, rather than guiding resolution vs imaging resolution. What mount will you be guiding, and what guide RMS do you realistically hope to achieve with it? Say you want to guide an HEQ5 and you think your model is capable of 0.8" RMS (not unrealistic for a tuned stock model). In order to accurately measure such a guide error, you need your centroid accuracy at about 1/3-1/4 of that or less - so you want centroid accuracy of about 0.2". Centroid algorithms can get accuracy down to about 1/16 of a pixel, so your guide sampling rate should be 16 * 0.2 = 3.2"/px. Say you are using an ASI120 (or any other guide camera with 3.75µm pixels) - you need about 250mm of focal length. Key points here: 1/4 of the expected best RMS, and x16 to get the sampling rate - which we can simplify to: the sampling rate should be about x4 the expected guide RMS.
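Here is the rule of thumb from this post as a small Python sketch (the function name is mine):

```python
def guide_sampling(expected_rms_arcsec, pixel_um, centroid_fraction=1/16):
    """Guide sampling rate and focal length from the guide RMS you expect."""
    centroid_acc = expected_rms_arcsec / 4            # measure at 1/4 of RMS
    sampling = centroid_acc / centroid_fraction       # "/px, i.e. x4 the RMS
    fl_mm = 206.265 * pixel_um / sampling             # FL that gives that rate
    return sampling, fl_mm

print(guide_sampling(0.8, 3.75))  # ~3.2"/px and ~242mm for HEQ5 + 3.75um pixels
```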
  18. Addition vs average differs only by a multiplicative constant - the average / mean is really the sum divided by the number of samples. SNR does not change when you multiply everything by a constant, since SNR is a ratio (A/B = A*C / B*C). Median should really be used with a larger number of samples, and even then Kappa-Sigma clipping is a better way of dealing with outliers.
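A tiny numpy demonstration of that scale invariance (a sketch):

```python
import numpy as np

# Sum vs mean stacking of 50 synthetic subs: identical SNR, since dividing
# by the number of subs scales signal and noise by the same constant.
rng = np.random.default_rng(0)
subs = rng.normal(100.0, 10.0, size=(50, 10000))
snr = lambda x: x.mean() / x.std()
print(snr(subs.sum(axis=0)), snr(subs.mean(axis=0)))  # same value
```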
  19. That routine is proper calibration and works well even when amp glow is present with CMOS sensors. I would advocate that everyone use it (the only drawback is no dark scaling or dark optimization - but those are not needed if temperature and exposure lengths are matched).
  20. Well, everyone should calibrate their screen if they want to view images on it properly. Operating systems have a routine for simple screen calibration - color balance, temperature and gamma. It should be enough to get a decent calibration and a relatively similar looking image. Professionals who work in graphic design do their calibration with special devices that measure the light coming from the screen and adjust it for true (or close to true) color rendition.
  21. Indeed - the classical F/stop rule does not apply here, as we are deep into "any noise source is high enough to cause issues" territory - but then again, it depends on how you look at it. Say you use a 0.8 reducer, and the actual SNR improvement is only a third of what the F/stop rule would suggest. The increase in pixel area is (1/0.8)^2 = 1.25^2 = ~1.56, so signal increases by about x1.56, an additional ~56% of the original value. Say we only get a third of that - roughly 19% more signal - which is like imaging for ~19% more time. If you image for 2 hours, that is not really drastic - you add another 20-odd minutes. If you image for two days, that extra time starts to become significant. I think the best approach to speed is: set your working resolution and then throw as much aperture at it as you can. By the way, reducers are not the only way to manipulate working resolution - there is also binning.

The problem is that you are using an OSC sensor, and from the start you are effectively sampling at half the rate you think you are. This is because of the bayer matrix on the sensor: pixels of each color are spaced by a factor of two (R-G-R-G-... and G-B-G-B-...), so red is every other pixel in the X and Y directions - and so is blue, and if you treat the greens as G1 and G2, those are spaced every other pixel too. In the process of reconstructing the image, the missing pixel values are interpolated - and that means that if you bin your image 2x2, you won't get the same result as when you bin mono data. There will be some improvement, but not what one would expect from a 2x2 bin of non-interpolated data.

Indeed, I think we often over sample without realizing it. You can check how much you are oversampling by measuring the FWHM of the stars in your image. A proper sampling rate is about x1.6 less than the FWHM: if your FWHM is say 3.2", then a good sampling rate is 2"/px (see the sketch after this post). Here is a general guideline of scope aperture vs resolution:
- 100mm of aperture in regular seeing with a regular mount will be capable of 1.6-2"/px (there is quite a bit of variation at this end, depending on how good the mount and seeing are)
- scopes in the 80-72mm range are going to be wide field instruments at 2-3"/px
- 150mm of aperture will manage 1.4-1.6"
- 200mm is needed if you want to attempt resolutions of about 1.2"
I don't really think anyone can properly manage resolutions finer than 1"/px, so I would put the limit there. If you are at 1.1"/px or lower, there is a 99% chance you are over sampling.

Here is an example of that - I took this image with my 8" RC, and the seeing just did not play ball that evening: This is at 1"/px - the image looks soft there, and in fact the actual resolution is closer to 2"/px - and can look like this: There is nothing in the first image that can't be seen here - and here it looks better (higher SNR) and sharper because it is properly sampled.

Most people confuse sampling rate with FOV and zoom level. I advocate that an image should look good when viewed 1:1 (100% zoom) rather than at screen size. Most people over sample, but because they use cameras with say 4000x3000px and view the images on screens with 1920x1080 or so pixels, the images get resampled down x2 - and the resolution seems fine in that case, as you get a x2 reduction - but you lose SNR that way. If one sampled at the proper resolution to begin with, it would give an x2 SNR improvement as well as a sharp image. Hope that makes sense?
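The FWHM rule of thumb from the post above, as a tiny helper (the names are mine):

```python
def good_sampling_rate(fwhm_arcsec):
    """Recommended sampling rate: about FWHM / 1.6."""
    return fwhm_arcsec / 1.6

def oversampled(current_rate_arcsec_px, fwhm_arcsec):
    """True when sampling finer than the measured stars justify."""
    return current_rate_arcsec_px < good_sampling_rate(fwhm_arcsec)

print(good_sampling_rate(3.2))  # 2.0"/px for 3.2" FWHM stars
print(oversampled(1.0, 3.2))    # True - the 8" RC example above
```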
  22. Sure, I'm 100% for proper calibration - in this case, darks matching lights in every respect (temp, time, gain, offset, ....) and flat darks matching flats in the same way. Do be careful to use an appropriate algorithm when mixing long and short subs, as they don't have equal SNR, and a simple average (or sigma reject) is not the best way to do it. PI offers weighted average - so go with that if you use PI.

From a quick inspection, I'd say: check the connection to the camera. My guess is that you are not using a threaded connection, or there is some play in the focuser. Try comparing the first and last subs of the session for star elongation in the corners. If they are the same, you have some sort of tilt issue; if they are different, you have some play in your connection (gravity will tilt things in different directions depending on where the scope is pointing, and it points at different parts of the sky at the start and end of the session).
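On the mixed sub lengths point, here is a hedged sketch of the principle behind a weighted average - inverse-variance weights, which shows the idea but is not necessarily PI's exact scheme:

```python
import numpy as np

def weighted_stack(subs, noise_sigmas):
    """Average subs weighted by inverse noise variance, so noisier subs count less."""
    subs = np.asarray(subs, dtype=np.float64)
    w = 1.0 / np.square(np.asarray(noise_sigmas, dtype=np.float64))
    return np.tensordot(w, subs, axes=1) / w.sum()

# e.g. two long subs (sigma ~3) and one short, noisier sub (sigma ~9)
subs = np.random.default_rng(1).normal(50, 1, size=(3, 8, 8))
print(weighted_stack(subs, [3.0, 3.0, 9.0]).shape)  # (8, 8) stacked frame
```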
  23. It is just because the data is not color balanced / color transformed as it is in a DSLR - which applies this transform by default (although you can also get pure raw data from a DSLR, and it will look funny too).
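To illustrate the kind of transform that is missing - a sketch in which the white balance gains and colour matrix are made up, purely for illustration:

```python
import numpy as np

raw = np.random.rand(4, 4, 3).astype(np.float32)   # stand-in for debayered raw
wb_gains = np.array([1.9, 1.0, 1.6], np.float32)   # hypothetical R/G/B gains
ccm = np.array([[ 1.6, -0.4, -0.2],                # hypothetical camera->sRGB
                [-0.3,  1.5, -0.2],                # colour correction matrix
                [-0.1, -0.4,  1.5]], np.float32)
balanced = raw * wb_gains                          # per-channel white balance
transformed = balanced @ ccm.T                     # colour matrix per pixel
print(transformed.shape)                           # same image, "normal" colours
```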
  24. That is a perfectly fine histogram for such a restrictive filter. Most of the pixels in the image are background or faint parts of the target - these have a very low value per exposure. This is why we take hundreds of exposures. Correct exposure time is calculated based on the noise in the image. With NB imaging, read noise becomes the dominant factor - but with modern CMOS cameras that already have low read noise, even that is questionable (LP can easily swamp read noise even with NB filters, depending on the FWHM of the filters). In any case, one should not obsess too much about exposure length, as the difference can be small (and often is, once we approach the point of diminishing returns).
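The "swamping" criterion can be written down directly; in this sketch the swamp factor k and the example numbers are my own assumptions, not from the post:

```python
def min_sub_length_s(read_noise_e, sky_flux_e_per_s, k=5.0):
    """Exposure where sky shot noise = k x read noise, i.e. sky*t = (k*rn)^2."""
    return (k * read_noise_e) ** 2 / sky_flux_e_per_s

# e.g. 1.7e read noise and a dark NB sky of 0.05 e/s/px -> ~1445s subs;
# brighter skies (or wider filters) shorten this considerably.
print(min_sub_length_s(1.7, 0.05))
```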
  25. Don't know about the UK since that is a recent thing - but TS does remove VAT when exporting outside the EU. In fact, I see their prices listed without VAT.