Everything posted by vlaiv

  1. Where did you read that info about filters? From the ZWO website. Source page: https://astronomy-imaging-camera.com/product/zwo-efw-5-x-2″or-7-x-2″ As far as I know, 6mm is the lowest-profile filter cell that you can find, and Baader filters use that type of cell.
  2. Pixel size here does not play a very important part - except for the fact that you needed to use the x2 PowerMate, which adds 4 glass elements and at least 4 more glass/air surfaces. That could lead to a few percent of light loss - a very small impact on SNR. What does play a part here is: 1) achievable FPS, 2) read noise, 3) QE.

Now, let's consider how much FPS we need for planetary in the first place. Although there are cameras capable of 500+ fps, would we use them like that when imaging planets? It turns out that for most conditions (scopes, sky conditions) amateur astronomers face, coherence time is something like 5-6ms, and up to 10ms if you are lucky. This translates into 200fps - 166fps, and 100fps if you are lucky. It is not camera FPS that is limiting, it is exposure length. The ASI290 with its 184fps will have rather small frame loss at 5ms exposure and none at 6ms or longer. The 300fps of the ASI174 is overkill for planetary imaging. Not so for solar and lunar: with planetary imaging you want exposure short enough to freeze the seeing - but not shorter than that, to keep good SNR per sub, because planets are not that bright. With solar and lunar one gets plenty of light, and exposures down to 2ms are possible without too much impact on SNR. High FPS counts here.

Read noise. This one is simple - the lower the read noise, the better. With long exposure imaging you can control the impact of read noise by using a suitable exposure length. With lucky imaging you can't do that because you aim for a very short exposure. Each exposure brings in one dose of read noise, so it is obvious that lower read noise is beneficial here. Plus for the ASI290 with read noise ~1e vs the ASI174 at ~3.5e.

QE is self explanatory - the ASI174 has about 77% vs the ASI290 at 80%. A small advantage for the ASI290. Add those 4 glass/air surfaces of the PowerMate and the difference grows a bit more. For planetary - the ASI290 wins even if it has lower fps.

How about for solar Ha? Here the ASI174 wins. Solar Ha is usually recorded at F/15-F/30 because of the way Ha filters work, and it is much easier to get to the optimal sampling rate with large pixels than with small ones. The optimum sampling rate with 5.86um pixels at the Ha wavelength is F/20.75; with 2.9um pixels it is F/10.25 (see the short sketch below). It is much easier to aim for F/20 with Ha scopes than for F/10 - one needs a focal reducer. Another option is to use x2 binning on the ASI290, but since it is software binning, that raises read noise by a factor of x2 - it is no longer ~1e but ~2e, much closer to 3.5e.

Then there is the matter of sensor size - the ASI174 can cover more of the disk because of its larger chip. And finally, when there is enough signal, you don't need to worry that much about: 1. exposure length (fps is utilized), 2. read noise - because there is now another dominant noise source, photon shot noise from the strong signal, 3. QE of the sensor - because you have enough signal. In solar Ha, it makes more sense to go with the ASI174.
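A quick sketch of the figures quoted above. The exact sampling criterion isn't spelled out in the post, so the constant here is simply back-derived from the F/20.75 at 5.86um figure and assumed to scale linearly with pixel size; the binning helper just doubles read noise as described (2x2 software binning adds four reads in quadrature):

```python
# Rough helpers reproducing the numbers in the post - not an official formula.
H_ALPHA_F_PER_MICRON = 20.75 / 5.86   # ~3.54, back-derived from the quoted F/20.75

def optimal_f_ratio(pixel_um: float) -> float:
    """Optimal imaging F-ratio at the Ha wavelength, assuming linear scaling."""
    return H_ALPHA_F_PER_MICRON * pixel_um

def software_binned_read_noise(read_noise_e: float, bin_factor: int = 2) -> float:
    """Software binning sums bin_factor^2 reads, so noise grows by bin_factor."""
    return read_noise_e * bin_factor

print(round(optimal_f_ratio(5.86), 2))         # ~20.75 - ASI174
print(round(optimal_f_ratio(2.9), 2))          # ~10.27 - ASI290
print(software_binned_read_noise(1.0, 2))      # 2.0e  - ASI290 binned x2
```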
  3. No, I don't mind, although it is quite technical, but I'm going to simplify it as much as I can.

The thing is that every step in processing that requires resampling affects the data. This is particularly important with planetary imaging if it is done prior to wavelets / deconvolution or any other frequency restoration technique. To show what happens, I'm going to use just Gaussian type noise, as it has an equal distribution of intensities over frequencies (it has some value at each frequency component and on average it is uniform at that). It is also rather easy to see the effect of this additional blurring on pure noise.

Ok, so here is the first image. It contains the same thing - pure Gaussian noise. In fact, it is the same image - the left side is unaltered, while the right side has just been translated by half a pixel using linear interpolation (I made a copy, did the translation and pasted half of it over the original image). You should be able to tell that the right side looks blurred compared to the left - the noise grain is not as "sharp" as on the left. People doing long exposure imaging with DSS will recognize this - background noise looking coarse grained. That is because DeepSkyStacker uses linear interpolation when aligning frames.

We can actually do some fancy math in the Fourier domain to quantify this blur. Shifting an image in the spatial domain does nothing in the frequency domain (it just shifts all phases by the same amount - intensities remain the same). If I take Fourier transforms of the original noise and of the shifted noise image and divide them, we get the MTF of the interpolation used to translate the data. Look what happens: the left is the frequency spectrum of the original noise image, while the right is the frequency spectrum of the translated one (using linear interpolation). They are clearly different, and there is some sort of attenuation of frequencies going on due to resampling. Low frequencies are towards the center of the image while high frequencies are towards the edges - the attenuation is in the high frequencies. If we divide these two images we get the attenuation function - here it is. Here is the profile of this 2d curve plotted in 3d, and here is the profile of a line going from the center in the X direction.

Now, if this reminds you of a telescope MTF - then you would be right. It is the same thing, but applied once more over the data. It has a slightly different shape, as it affects the data in a slightly different way - it attenuates lower frequencies less than the telescope aperture does - but remember these are cumulative: they in fact multiply each other (and the atmospheric MTF as well - at each step it multiplies). The thing is, we can't avoid the telescope MTF and the atmospheric MTF, but we can avoid an additional step if we don't subject the data to yet another round of high frequency attenuation.

In reality, things are not going to be as bleak if we use advanced resampling techniques instead of simple linear interpolation. Here is the same interpolation MTF, but this time for Quintic B-Spline interpolation: as you can see, this is much better - it does almost nothing to frequencies up to half of the highest frequency in the image. In an ideal world we could use the Sinc function for filtering - it has a perfect rectangular response, so no frequencies below the max frequency are attenuated. Unfortunately, the Sinc function is infinite in extent and can't be applied to finite images.

Ok, now derotation is not a simple translation. It "translates" each pixel by a different amount. This leads to a sort of "wavy" frequency attenuation - some parts of the image get blurred more and some less. Sharpening such an image with wavelets leads to artifacts, as you can imagine. Hope this explanation was understandable and useful?
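If anyone wants to reproduce the noise experiment, here is a minimal numpy sketch under the same setup (half-pixel shift via linear interpolation, which at exactly 0.5 px is just an average of neighbouring pixels); the array size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(size=(512, 512))

# Half-pixel shift along x with linear interpolation: at exactly 0.5 px this
# reduces to averaging each pixel with its neighbour (circular boundary here).
shifted = 0.5 * (noise + np.roll(noise, 1, axis=1))

# Amplitude spectra; their ratio is the MTF of the interpolation step.
f_orig = np.abs(np.fft.fftshift(np.fft.fft2(noise)))
f_shift = np.abs(np.fft.fftshift(np.fft.fft2(shifted)))
mtf = f_shift / (f_orig + 1e-12)

# Profile from the centre outwards in x - it falls towards zero at the
# highest frequency, i.e. high frequencies are attenuated, low ones kept.
centre = mtf.shape[0] // 2
print(np.round(mtf[centre, centre::64], 3))
```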
  4. Ok, so to really understand what I've calculated above, let's examine a few cases.

1. A sequence shot with an OSC camera. In this case you can shoot for ~22 minutes without any need for derotation of the video. Stacking software like AS!3 is capable of compensating for any rotation between the first and last frame (and hence any other two). This is due to the fact that AS!3 uses alignment points to compensate for seeing induced warp of the image - which is very similar to rotation.

2. Three different sequences are shot - one per filter. The same rule applies as above - you can shoot each filter for 22 minutes and within each video there will be no need for derotation, AS!3 will handle it. However, in order to compose an RGB image out of three separate images, you'll need to derotate two of them to match the third. This is true even if you shoot 7 minute videos for a total of 21 minutes (less than the 22 we calculated). This happens because AS!3 creates a reference frame out of good frames over the duration of the video and determines the true position of alignment points as an average of positions across those subs. As a consequence, the resulting stack is a snapshot of the planet as it was midway through that particular video. If we take for example 7 minutes per channel, each channel image will have no issues with motion blur due to rotation - AS!3 takes care of that - but each of the channel stacks will represent an image of Mars at a different time, 7 minutes apart, with times roughly midway through each of the corresponding videos (see the small calculation below). In order to compose the image you'll need to derotate two of those frames to align properly with the third, as they are effectively "shot" at different times.

In the end - since you'll need to derotate channels for alignment anyway - you might as well shoot up to 20 minutes per channel. That way no channel is going to have motion blur by itself, yet you'll do derotation anyway. My advice would be to use the following sequence:

- shoot R or B first
- shoot G in the middle
- shoot the remaining channel last (depending on whether you chose to go with R or B first, you'll shoot B or R last - hope this makes sense)

In the end, leave G as is - that will be the reference point, don't derotate that one - but rather derotate the first channel forward in time to match G and derotate the last channel back in time to match G. Why G as reference? Well, cameras tend to have the greatest QE in that range and human vision is similarly geared towards green carrying the most information in terms of contrast and sharpness. For this reason it should be the best channel of the three, and since derotation slightly blurs the data, it's best to leave G as the reference and derotate the other two - hope this makes sense as well. I'm a software engineer / system architect.
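A tiny back-of-the-envelope check of why separate channel stacks still need derotation, using the Mars rotation period from the calculation further down this page; the 7-minute videos are just the example from the post:

```python
# How much Mars rotates between the midpoints of successive channel videos.
MARS_ROTATION_MIN = 1477.0               # one full rotation: 1 day 37 min
DEG_PER_MIN = 360.0 / MARS_ROTATION_MIN  # ~0.24 deg per minute

video_length_min = 7.0                   # per channel, as in the example
midpoint_gap_min = video_length_min      # gap between R, G and B stack "timestamps"

print(f"~{midpoint_gap_min * DEG_PER_MIN:.2f} deg rotation between channel stacks")
# ~1.71 deg - small, but enough to misalign surface features, hence derotation to G.
```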
  5. Yes, within limits. The lowest magnification one can easily get is about x71 (with a 55mm FL eyepiece). For comparison, with an 8" F/6 scope that is like using a 17mm eyepiece. It will only show about 0.7 degrees of the sky. If one accepts that this is a narrow field of view / high power telescope, then yes - quite usable for visual.
  6. Just because something is rendered brighter on the screen does not mean it has more signal! The ISO setting is just a multiplication factor. You used the same scope, the same exposure time and the same camera for both of these images. The images received the same number of photons - the same signal.

If you have say 100 photons and you use a multiplication factor of 8 in one instance and get an ADU value of 800, and a multiplication factor of 2 in the second case and get an ADU value of 200, does that mean that the signal somehow got stronger? That would be simply brilliant - all one needs is a single exposure, then we just multiply by a very large number and get a very strong signal! No need to spend hours and hours under the sky! However, that is not the case. It is not the numerical value that we assign to the signal that is important - it is the signal to noise ratio, and that one is fixed by the number of photons we captured. SNR is equal for both images (at least the part coming from photons).

Why the recommendation to use ISO200 then? It has to do with other noise sources, particularly read noise. ISO200 is probably the "sweet spot" - as ISO gets larger, read noise gets smaller (a good thing) but so does full well capacity (not a good thing). ISO200 is likely the best balance of the two - giving the lowest read noise for the highest full well capacity. In the end, when you stack your images, it is highly unlikely that you'll be able to tell the difference between ISO200 and ISO800, as the resulting noise difference is so small it can only be measured, not perceived by eye. Shoot whichever way is more convenient for you.
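A toy illustration of the multiplication point, with made-up numbers (Poisson "photons" and two arbitrary gains standing in for the ISO settings):

```python
import numpy as np

rng = np.random.default_rng(1)
photons = rng.poisson(100, size=100_000)   # ~100 photons per pixel on average

for gain in (2, 8):                        # "ISO200" vs "ISO800" style multipliers
    adu = photons * gain
    print(f"gain x{gain}: mean ADU = {adu.mean():.0f}, SNR = {adu.mean() / adu.std():.2f}")
# Both gains give SNR of about 10 (sqrt of 100) - only the numbers got bigger.
```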
  7. There is no "formula" per se, but we can devise one if you are up for it. Let's do a case study and see what kind of formula we can come up with (the same calculation is repeated in a short script below).

The first and very important thing to understand is that AutoStakkert!3 can stack subs that are quite shifted from their original position. It can, for example, stack lunar shots that experience quite a bit of motion frame to frame due to seeing. In fact, I believe the limit to be the size of the alignment point - each alignment point is "searched for" to match frame to frame. In any case this is our first variable, and if we want to be conservative about it we can say: let's allow a max motion of 5 pixels here. This means the max motion of some feature between the first and last frame due to rotation of the planet is 5 pixels in total. You can put a different value here if you wish; I think 5 is good enough for this calculation.

How much is 5 pixels in arc seconds? That depends on your sampling resolution. For example, let's say you are sampling at 0.21"/px (the optimum sampling rate for a 10" scope). Then it's easy: 5px x 0.21"/px = 1.05". The fastest moving feature on the surface of the planet must not move more than 1.05" for the duration of our recording.

Let's say we are talking about Mars here. The fastest moving feature due to rotation is at the equator, facing directly towards us. How fast does that move in some units? That depends on the diameter of the planet and the rotation speed. Mars does one full rotation in 1 day and 37 minutes, or 1477 minutes, and it has a radius of 3389.5km. This means that the circumference of Mars at the equator is 2 * PI * 3389.5 = ~21296.86 km. A feature moves this distance in one whole revolution, so the speed of motion is ~14.419 kilometers per minute.

Now the question is - what angle do 14.419 kilometers subtend at the current Mars distance? The current Mars distance is 63,423,358 km, and 14.419 km at a distance of 63,423,358 km equals 0.04689" (use this calculator http://www.1728.org/angsize.htm or a bit of trigonometry). So we have 0.04689" per minute, yet we can't let a feature move more than 1.05" - how many minutes is that? ~22.4 minutes.

You can actually record for 22.4 minutes without the first and last frame having more than 5 pixels of motion at the Mars equator - something AS!3 can easily handle, since it can handle 5px of motion due to seeing. The article recommends that you should limit your recording to 90 seconds for Mars, and we just calculated that you can use 22.4 minutes without any issues if you are using a 10" scope at the optimum sampling rate.
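The same calculation in a few lines of Python, so the inputs (sampling rate, planet radius, rotation period, distance) are easy to swap for another target or scope:

```python
import math

max_shift_px  = 5             # motion AS!3 alignment points can comfortably absorb
sampling      = 0.21          # arcsec/px - optimum for a 10" scope in the example
rotation_min  = 1477.0        # Mars: 1 day 37 minutes
radius_km     = 3389.5        # Mars equatorial radius
distance_km   = 63_423_358    # Mars distance at the time of the post

circumference_km = 2 * math.pi * radius_km            # ~21296.9 km
speed_km_per_min = circumference_km / rotation_min    # ~14.42 km/min

# Angular speed of an equatorial feature, in arcseconds per minute.
arcsec_per_min = math.degrees(speed_km_per_min / distance_km) * 3600   # ~0.0469

max_drift_arcsec = max_shift_px * sampling            # 1.05"
print(f"max capture: ~{max_drift_arcsec / arcsec_per_min:.1f} minutes")   # ~22.4
```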
  8. Very interesting methodology for testing out eyepieces. This gives me all sorts of ideas. Maybe MTF graphs could be used, or at least artificial star images?
  9. Very strange. I don't think spectacle wearers will be able to use this eyepiece with ease. I sometimes feel there is enough eye relief, but sometimes I really have to put in an effort to "get close inside". It probably has to do with the scopes used - not every scope produces the same effective eye relief. The eye lens is recessed, so it's definitely not the stated ~15mm in real use. I have long eye lashes, and since the eye lens is rather large I manage to hit it often and leave smudges. Having said that, the ES82 11mm is the best eyepiece I have ever used in terms of sharpness and other features (except for the strange "comfort" of use mentioned above).
  10. I personally use ImageJ for both calibration and stacking, but I think it is far easier to stick with DSS for stacking and only do calibration in different software - like ImageJ or something else (I don't really know that much about other software like Siril or similar, but I do know that you can do calibration there - maybe someone else will give advice here).
  11. You need to use sigma clip stacking here to remove the trails from hot pixels. DSS is also not the best tool to calibrate your subs - it uses a 16 bit format for that, and sometimes that lacks precision. Hot pixels should be removed by darks, but they don't seem to be properly removed, and there is a slight drift sub to sub which creates the streaks. 7 subs is a rather low number for sigma clip to work effectively, so that might be part of the problem as well.
  12. Hi Dave and welcome to SGL.

Probably the best camera choice for that little scope would be a dedicated cooled CMOS camera. These are relatively cheap these days. What would be your budget and expectations?

Remote telescope operation is possible and, in fact, most of an imaging session is spent in front of the laptop. There are a few things to be done first (setting up, aligning the scope, focusing if you don't have an electronic focuser, and such), but for the most part one just sits in front of their laptop. 6 meters is a bit of a long distance for cables, and although it can be done with a powered USB hub, maybe the better option would be to go truly remote: have one computer next to the telescope to control the mount and camera, and use another computer to connect to the first one via either Remote Desktop (Windows) or some sort of VNC protocol (Linux). The computer that controls the telescope, mount and camera need not have all the accessories - it can be either a laptop or some sort of "NUC" / small form factor computer. A quite popular solution for the tech savvy is to use a Raspberry Pi at the telescope and connect to it from a Linux based workstation over the network, for example. An out of the box, lightweight solution would be the ZWO ASI-AIR - a wireless small computer that you use by connecting to it via tablet or smart phone.

Software will depend on your choice of equipment. You can either go with Linux based systems (free / open source), where you will use INDILIB and EKOS / KSTARS (people report that it works very well, if you are concerned about features and stability), or a Windows based system, where you'll use the ASCOM platform. There you have a variety of software for capturing images - APT is good for DSLR type cameras, NINA is open source, SGLite / SGPro is a very good paid solution (they recently changed the subscription model and there are some concerns in the community about whether it's worth the money now), and many more. I'm sure the MacOS ecosystem is also covered, but I don't really know much about it.

In the end, there is no really simple and straightforward answer to such a broad question. What you could do is provide a bit more information on your setup (mount) and how you plan to use it, as well as budget and expectations. That way we can narrow down the possible choices.
  13. I would go with a simple red dot finder for the 72ED. Using a 32mm plossl with that scope will give you something like x13.5 magnification - that is "finder territory", so you really only need a simple pointing device attached to the scope.
  14. Well, here is one: https://www.firstlightoptics.com/ovl-eyepieces/ovl-nirvana-es-uwa-82-ultrawide-eyepieces.html Not sure how it performs in a fast scope though, but it has the right specs - 82 degrees and 16mm FL. A bit more expensive, but probably optically much better: https://www.firstlightoptics.com/explore-scientific-eyepieces/explore-scientific-82-degree-series-eyepieces.html at 14mm - a somewhat shorter FL.
  15. Here are a few contenders, but I can't give any more advice since I have not used any of them. Best to get first hand advice or read some reviews.

Budget:
https://www.teleskop-express.de/shop/product_info.php/info/p1347_William-Optics-SWAN-40-mm-2--Super-Wide-Angle-Eyepiece---72--Field.html
https://www.teleskop-express.de/shop/product_info.php/info/p9299_APM-Eyepiece-UW-30-mm-80---2--barrel-size.html (might be a budget eyepiece, but I think it's rather good - the only thing is that it has a 41mm field stop, not as large as the other eyepieces at 45-46mm, so a narrower field of view regardless of the 80 degree AFOV. However, 30mm might not be a bad thing - darker background sky due to the smaller exit pupil)

A bit more expensive:
https://www.teleskop-express.de/shop/product_info.php/info/p9549_Explore-Scientific-62--LER-Eyepiece-40-mm--argon-purged.html (again a smaller field stop diameter at about 42mm - but this time due to the smaller AFOV of 62)
https://www.teleskop-express.de/shop/product_info.php/info/p1754_Baader-Hyperion-Aspheric-36-mm---72--Wide-Angle-Eyepiece.html

Best options (over budget):
https://www.teleskop-express.de/shop/product_info.php/info/p5599_Explore-Scientific-40mm-2--Eyepiece---68----waterproof.html (out of stock currently)
https://www.teleskop-express.de/shop/product_info.php/info/p1098_Vixen-LVW-42-mm-2---eyepiece---65--wide-angle.html

And finally, I won't even mention the TV Panoptic 41mm since it is at least twice the budget that you have. Hope this helps.
  16. No, you should not target a certain histogram value. You should target a certain sampling rate (a combination of barlows to get the best "zoom" - i.e. max detail for the given aperture, but not too much "zoom" beyond what the scope can deliver, because that just lowers SNR) and exposure time. These are the two variables you should adjust for a given "equipment profile" (gain, or rather read noise, is considered part of the equipment profile). You want your exposure time to be at or just below the coherence time for your site on a given night, so that you freeze the seeing (usually about 5-6 ms for average conditions) - but not lower, as again you lose SNR that way. Once you establish the above parameters you'll end up with a certain histogram value - be that 20% or 80% - and as long as there is no clipping / over exposure, you are fine. The max capture length you can calculate easily by knowing a few facts about your target and equipment - see the example in the thread linked above. Use the calculated value as a guide for max capture length (without the need for derotation). The smaller the scope or the coarser the working resolution, the longer you can capture for ...
  17. A few pointers. You are now having more issues with the DEC axis than with RA. Try dealing with those - possibly balance issues and wind / shake. Make sure you remove all backlash from the DEC axis, or at least as much as you can. A simple finder guider will not have enough precision to measure a smaller RMS error - at 4.75"/px you can measure RMS down to only about 0.7" in either axis (see the plate scale sketch below). I have modified my HEQ5 quite a bit (belt mod, changed bearings and tuned, Berlebach Planet tripod, changed clamps, ...) and this is the result: 0.36" RMS - measured with an OAG at 1600mm, at about 0.96"/px (3.75um pixel camera binned x2). Mind you, this was a particularly calm night. Regular values are around 0.5-0.6" RMS total.
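A minimal sketch of the plate-scale arithmetic behind those figures. The "smallest measurable RMS" fraction (~0.15 px of the guide scale) is back-derived here from the ~0.7" figure in the post rather than being an official guiding-software limit:

```python
def plate_scale_arcsec_per_px(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

# OAG example from the post: 1600 mm focal length, 3.75 um pixels binned x2.
print(round(plate_scale_arcsec_per_px(3.75 * 2, 1600), 2))   # ~0.97 "/px

# Finder guider at 4.75 "/px: roughly the smallest RMS you can still measure.
MEASURABLE_FRACTION_PX = 0.15   # assumption, see note above
print(round(4.75 * MEASURABLE_FRACTION_PX, 2))               # ~0.71 " per axis
```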
  18. I did not read the article, just glanced over it, but there are two things wrong with it straight away: - first is the advice to target a certain histogram value - second is limiting capture for planets to very short times (without any mention of capture resolution) - like 90 seconds for Mars or 45 seconds for Jupiter. To see whether 90 seconds really is the max for Mars, please look at this post: and also the subsequent test performed based on these calculations:
  19. You also seem to have changed your guiding system? In the first image your guide resolution is 4.75"/px while in the second image it is 0.58"/px. I think that most of your improvement is down to wrong numbers being entered. A belt mod can't improve DEC performance, as DEC is stationary and depends mostly on seeing. If you examine your numbers on the right, DEC did not change much - it is ~0.21-0.22 pixels in both cases - which is to be expected. Your RA dropped from 0.16 pixels to 0.12 pixels, and if we calculate with the 4.75"/px guiding resolution, that is an improvement from 0.76" to 0.57" - which is to be expected. There is another clue that the second graph is not correct - an HEQ5 simply can't achieve a precision of 0.14" RMS. That is the territory of mounts 6-10 times more expensive. In the best conditions an HEQ5 can go down to 0.5" or just a bit below, but seeing needs to be excellent, there must be no wind, and the mount needs to be tuned and modified to achieve this.
  20. Stretching color can make it turn pastel, as a non linear stretch changes the RGB ratios. Try the following approach:
- create an artificial luminance (just add all three linear channels with pixel math for starters - that will create a usable luminance) and stretch that to your liking
- do what you already did - make a linear combination of RGB (SHO)
- paste the stretched luminance as a new layer on top of the linear RGB and set its blend mode to luminance (LAB color model - Gimp does this, not sure if PS has that option, but it should)
This will transfer the luminance while keeping the RGB ratios (I hope).
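For anyone doing this with pixel math instead of layers, here is a minimal numpy sketch of the same idea, assuming r, g and b are already linear arrays scaled 0-1; the asinh stretch is just a placeholder for "stretch to your liking", and the whole thing is only an approximation of what the LAB luminance blend mode does:

```python
import numpy as np

def transfer_luminance(r, g, b, stretch_factor=50.0):
    """Rescale linear RGB so it carries a stretched artificial luminance."""
    lum = r + g + b                                        # artificial luminance
    stretched = np.arcsinh(stretch_factor * lum) / np.arcsinh(3.0 * stretch_factor)
    scale = stretched / np.maximum(lum, 1e-12)             # keeps RGB ratios intact
    return r * scale, g * scale, b * scale
```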
  21. I think that would be my main usage scenario - in-the-field imaging where I need to connect the AzGTI, the RPI and an Android phone together into a sort of LAN. I guess it should work ok in that case.
  22. Why is that? I'm planning to add RPI control to the AzGTI and want to have it act as an access point for both the AzGTI and my phone controlling the mount. I'm hoping that will be more stable and better than the AzGTI being the access point.
  23. Wouldn't a pair of reading glasses be enough, since you're already sitting next to it? I can see why one would want binoculars if they are in their house but don't want to use VNC or SSH.
  24. I doubt that anyone can tell the difference, since the AA version is not yet available - they are now accepting pre-orders for October and November of this year. Maybe this is an introductory price, or maybe it's a marketing / competition thing - to grab a larger piece of the market. Maybe it has to do with import fees. Who knows. We await brave souls to test these, and if they perform on par with the others - well, then prices will go down.