Everything posted by vlaiv

  1. Star spikes are actually perpendicular to the original spider supports - each vane produces a pair of spikes at right angles to it, which is why you get 6 of them on a 3-prong spider support.
  2. Or in other words - a projection system from angles to linear distance, where a given angle always forms the same distance (to a good approximation)?
  3. I had that scope and I measured it a long time ago, but I can't remember exactly what it was. About 3.5kg or something like that.
  4. I'm afraid I don't really follow your logic / argument here. You say that an imaging system has linear magnification - but that would mean any object of size X is mapped to an image of size Y. Yet the Moon and the Sun, while having vastly different physical diameters, form images of practically the same size - because what the telescope maps is angle, not physical size. Hopefully the OP will find a good enough answer in the non-technical posts, as I'm certain that all participants set off to answer the original question the best they could. I do agree that it is rather easy to slip into the much more technical discussion that underpins the issue. @inFINNity Deck maybe we could continue this discussion over PMs once I read the article / post you referred to, or perhaps @Stu could be so kind as to split the technical part into a thread of its own if there is enough valuable info there for other SGL members?
  5. Name is Vladimir, but feel free to call me Vlad. With regards to the critical sampling rate: 2.4px per Airy disk radius (or 4.8px per Airy disk diameter) is based on the Nyquist sampling theorem. In fact, I did not do the math, but rather a set of simulations with different Airy disks and FFT to determine the cut-off point in the frequency domain.

     Many people assume that the x2 from Nyquist has something to do with spatial features - like the Rayleigh criterion or similar - it does not. Sometimes I read people saying that x3 or even x3.3 is better because we are dealing with the 2d Nyquist case instead of the 1d case - again not true: x2 the max frequency is the correct criterion in the 2d grid-sampling case as well. This applies to band-limited signals, and telescope optics does provide a band-limited signal - the point is finding what frequency represents the cut-off.

     If you generate an Airy disk PSF (that would be the FFT of the aperture - a simple circle, or maybe an obstructed circle with/without spider support), you can also generate the MTF of that - which is the FFT of the Airy disk. That represents the Airy disk PSF in the frequency domain. It is also a description of the blur that the optical system introduces - how much attenuation there is at each spatial frequency. In this case, the X axis of the MTF that I marked is in relative frequency units, but in general it is in absolute frequency units that can be expressed as cycles per arc second. At one point, here marked with 1, the MTF falls to 0 and there are no higher frequencies - all have been attenuated to 0. This is our cut-off point.

     Mathematically, you need to take the Airy disk function, do a Fourier transform of it, equate that with 0 and find at what frequency it reaches 0 in order to find the cut-off frequency. I might do that some day, but it is not an easy thing to do as the Airy disk function is not trivial. In any case, I did simulations with different Airy disk sizes and "measured" where the resulting cut-off point is (using FFTs to generate both Airy disks and MTFs). It turns out that the relationship is about x2.4 per Airy disk radius. Later I found out that this value could be related to the 1.22 constant of the first zero of the corresponding Bessel function, and that the actual value I measured above is 2.44 rather than 2.4 - but I'll need to see some math to confirm this.

     As for the relationship to FWHM - the theory behind it is the same, except that I used a Gaussian profile to model the PSF, did a Fourier transform of the Gaussian (that was easy to do), and looked at the cut-off frequency - which I took to be the point where a frequency is attenuated to less than 1% of its original value. I then took that frequency and, based on it and the Nyquist criterion, determined the factor for the "optimum" sampling rate. I wrote about both of these approaches here on SGL. Let me see if I can find the corresponding threads.
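A minimal sketch of the kind of simulation described above (my reconstruction, not the original code): build a circular aperture, FFT it to get the Airy PSF, FFT that to get the MTF, and locate the cut-off, which should land at twice the aperture radius in grid frequency units:

```python
import numpy as np

N = 1024                                   # simulation grid size
R = 64                                     # aperture radius, grid units
y, x = np.indices((N, N)) - N // 2
aperture = (x**2 + y**2 <= R**2).astype(float)

# PSF (Airy pattern) is the squared magnitude of the aperture's FFT
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2
psf /= psf.sum()

# MTF is the magnitude of the FFT of the PSF
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()

# Scan outward from the center to find where the MTF reaches ~0
profile = mtf[N // 2, N // 2:]
cutoff = int(np.argmax(profile < 1e-9))
print(f"cut-off at frequency bin {cutoff}, expected ~{2 * R}")
```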
  6. I was thinking of messing with the mesh size - that is obviously the way to go. Regardless, I might need to touch it up by hand as it mistakes some bright nebula patches for stars and removes those as well.
  7. I have not forgotten about this thread, but I did run into difficulties for some reason. Starnet++ is not doing a very good job of star removal with this data. Although at this scale it looks nice (still linear, but with stars removed so much more nebulosity is seen), there are quite a few artifacts once zoomed in - which currently prevents me from doing decent processing - like these:
  8. There is no such thing as magnification when imaging. Magnification amplifies angles and is applicable when talking about visual use - a telescope / eyepiece combination, for example. When you view an image on your computer screen, the apparent zoom will depend on how close you are to that screen - place it 50cm away and it will be reasonably "zoomed in", but stand at 10m and it will be very small. This shows that an image does not have magnification.

     Telescope + camera is a projection device. The telescope projects an image onto the sensor. Two things are important here. First is field of view, which depends on telescope focal length and sensor size - how much of the sky are you projecting onto the image? Second is sampling resolution, which depends on focal length and pixel size (or sensor pixel count and field of view if you will - these quantities depend on one another).

     With sampling rate you can over sample, under sample, or sample just right (Goldilocks anyone?). Under sampling is not so bad - it won't record all the detail that could be recorded, but it improves SNR. Over sampling is rather bad - you are "spending" resolution on empty detail (no detail to be captured) and in doing so you are lowering your SNR (signal to noise ratio - probably the most important thing in imaging after the mount). Sampling just right is the way to go, of course.

     Ok, that strange F/ratio > 3 x pixel_size_in_um formula is just wrong. If you are after the optimum sampling rate of a diffraction limited system (without influence of the atmosphere - for planetary / lucky type imaging), then the correct sampling rate is 2.4 pixels per Airy disk radius (4.8 per Airy disk diameter). On the other hand, if we are talking about the optimum sampling rate for long exposure imaging, we need to look at star FWHM, and in that case close to optimum sampling rate is FWHM / 1.6 as resolution in arc seconds per pixel (if FWHM is expressed in arc seconds - so 1.6" FWHM needs 1"/px sampling rate).

     You are correct that a color sensor samples each color at half the rate of a monochromatic sensor - but many people don't process color images / separate color information in a way that respects that anyway (they interpolate instead of just splitting the color planes).
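To make the two criteria concrete, here is a small calculator sketch (formulas as stated above; the aperture, FWHM and pixel values are just assumed examples):

```python
def pixel_scale(pixel_um, focal_mm):
    """Sampling rate in arc seconds per pixel."""
    return 206.265 * pixel_um / focal_mm

def airy_radius_arcsec(aperture_mm, wavelength_nm=550):
    """Airy disk radius (first minimum) in arc seconds."""
    return 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 206265

# Planetary / lucky imaging: 2.4 pixels per Airy disk radius
aperture_mm = 200                          # assumed example aperture
print(f"lucky imaging target: {airy_radius_arcsec(aperture_mm) / 2.4:.2f} \"/px")

# Long exposure: FWHM / 1.6
fwhm = 1.6                                 # assumed seeing FWHM, arc seconds
print(f"long exposure target: {fwhm / 1.6:.2f} \"/px")

# Actual sampling rate of a given setup, e.g. 3.75um pixels at 1000mm
print(f"this setup samples at: {pixel_scale(3.75, 1000):.2f} \"/px")
```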
  9. If it's not too much to ask - could that be just the linear stacked data (32 bit FITS / TIFF) without DBE, deconvolution or any other processing?
  10. This is true only for light intensity - a measurable / physical quantity, the number of photons or energy per unit time. That is not the same as perceived brightness.
  11. That is because of the things that I listed - but primarily because of the non-linearity of brightness response. I don't mean the log-based nature of human perception, but the fact that, even expressed in the magnitude scale, our vision becomes non-linear at low intensity. This is the reason why we have a preferred exit pupil for DSO observing depending on LP levels, and also the reason why filters work best at a certain exit pupil: it creates the most perceived contrast - both sky and target are darkened by the same amount, but to our perception the sky gets darker than the target - or rather, perceived contrast increases.
  12. Do consider that faster scopes require more expensive eyepieces to produce equally pleasing images, and that longer FL scopes make it easier to achieve high magnifications while short FL scopes make it easier to achieve low magnifications. If your primary interest is high power viewing - slower scope. If you enjoy wide field views more - faster scope.
  13. Here is another way to look at all of this - fast vs slow. Let's not fix aperture; let's fix magnification. We have a certain eyepiece - for example a 28mm 68 degree one. We want to use it at a certain magnification so it gives us a certain true field of view. Let that be x20. For this we need a telescope with 560mm focal length. We can use an 80mm F/7 scope for that, or we can use a 130mm F/4.3 scope. The faster one will show the target brighter as it has more light gathering ability (see the sketch below).
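Working through the numbers above in a small sketch (my own arithmetic check, nothing beyond what the post states):

```python
eyepiece_fl, afov, mag = 28, 68, 20        # mm, degrees, desired power

scope_fl = mag * eyepiece_fl               # focal length needed: 560 mm
tfov = afov / mag                          # true field ~3.4 degrees
print(f"need {scope_fl} mm focal length, true field ~{tfov:.1f} deg")

# Light gathering scales with aperture area
for aperture in (80, 130):                 # the two example scopes
    print(f"{aperture} mm aperture: {(aperture / 80) ** 2:.2f}x light grasp")
```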
  14. It might be that the origins of the terms fast/slow/speed have something to do with imaging - but even there, they are "wrong". I tend to relate them to the angle of the light cone at focus.
  15. Only when comparing two scopes with the same aperture - the same light gathering ability. Not the same if the aperture size is different.
  16. Here we need to distinguish a couple of concepts:
     - light intensity
     - visual magnitude
     - actual perception of brightness

     Light intensity is easy - it is the number of photons or energy - in any case a physical, measurable quantity, a flux of sorts. The above is true for light intensity.

     People noticed a long time ago that doubling the intensity of light does not produce twice as bright a light in the eye of the observer. We sense on a logarithmic scale - hence the magnitude system, which is based on the logarithm. What we see as a one step increase in brightness is in fact a multiple / power of intensity.

     This works well in the regime where there is enough light, but when we step into the threshold regime things get even weirder - our brain starts doing some funky stuff to the image we see. We never see shot noise, although at those light levels we should - a noise filter kicks in. We form an image via "long integration" - we remember what we saw a few seconds ago, and once we have seen something it is easier to see it again, because our brain combines signals with our expectations and memory. The absence of all light is not black: when we have no light and also no reference to compare with, we see gray rather than pitch black. This effect is called eigengrau - see https://en.wikipedia.org/wiki/Eigengrau

     In order to see the sky background as gray, we need a field stop and it needs to provide a baseline value of black. If there is too small a difference between the two, we will see both as eigengrau with no contrast difference (too high magnification at too dark skies). Similarly, we need enough contrast between target and background sky.

     There is that, but there is also a real difference in the scopes / glass used. Not all scopes are equal and not all eyepieces are equal. Not all fields are equally illuminated - there might be some vignetting, which is higher at the edge of the field - where the field stop / our reference black should be. Some eyepieces have lower transmission, but also some eyepieces impart a certain tint to the image - like a slightly yellowish / warm tone or a slightly bluish / cold tone. A change of tone can change our brightness perception, as not all colors are perceived as equally bright (green is the brightest, red is dimmer and blue dimmer still, but combinations can be brighter - like yellow, which is perceived as brighter than green - all for the same intensity / flux).

     And there you go - why two seemingly identical setups provide the viewer with different experiences.
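For reference, the magnitude scale mentioned above is logarithmic by definition - one magnitude step is a fixed intensity ratio of 100^(1/5) (the standard Pogson definition, not something specific to this post):

```python
step = 100 ** (1 / 5)                      # ~2.512x intensity per magnitude
print(f"1 mag step = x{step:.3f} intensity, 5 mag steps = x{step ** 5:.0f}")
```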
  17. Agreed! A 43mm 82 degree eyepiece will be 3" - the max 2" can offer is something like 40mm 70 degrees (or maybe closer to 38mm).
  18. I would say that the illuminated field has something to do with it as well. Most fast scopes are built to illuminate a larger field, and can for this reason display a wider true field of view than their slower counterparts. Faster scopes are therefore "richer field" telescopes - able to put more stars into a single field of view and also able to put really large objects into the FOV with enough context to make them stand out better. There is another factor - faster scopes can produce the max exit pupil with reasonable eyepieces. Take an F/10 scope: in order to have a 7mm exit pupil you would need a 70mm eyepiece, and you'll have a hard time trying to get one. With an F/5 scope that is a 35mm EP - easy (but not cheap if you want good correction all the way to the edge).
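The exit pupil arithmetic above in one line - exit pupil is eyepiece focal length divided by focal ratio:

```python
def exit_pupil(eyepiece_fl_mm, f_ratio):
    """Exit pupil in mm: eyepiece focal length over focal ratio."""
    return eyepiece_fl_mm / f_ratio

print(exit_pupil(70, 10))                  # F/10 needs a 70mm EP for 7mm
print(exit_pupil(35, 5))                   # F/5 gets 7mm with a 35mm EP
```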
  19. That scope will be very good for planetary imaging. Just make sure you collimate it properly and add a barlow lens. You'll need something like a x2.5 - x3 barlow, but any decent barlow will do, since the magnification factor of a barlow depends on the barlow-to-sensor distance, so you can "dial in" the wanted magnification by altering the distance to the sensor (adding spacers) - see the sketch after this post.

     It will work as a DSO imaging rig as well - just add a x0.5 focal reducer - but make sure you understand the limitations. First is exposure length. You will be sampling at a very high rate (big zoom) and any imperfection in tracking will show, so you need to limit your exposures to probably something like 10s or even less. Second is field of view - it will be very small. For example, look at M13 with this combination: Or maybe M51: You won't be able to capture larger objects like M33 for example: In fact, here are some images that I captured with a camera like that and an 8" F/6 Newtonian + x0.5 GSO focal reducer:

     This means that you can get your feet wet with DSO imaging with this camera, but I do agree, the real step forward would be:
     1. DSLR
     2. Coma corrector
     3. Small guide scope + this camera as guider
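The "dial in magnification with spacers" point above as a sketch, using the usual thin lens approximation M = 1 + d / f_barlow (the barlow focal length here is an assumed example, not a measured value):

```python
def barlow_mag(distance_mm, barlow_fl_mm):
    """Thin lens approximation for barlow magnification."""
    return 1 + distance_mm / barlow_fl_mm

f_barlow = 100                             # assumed: nominal x2 at 100 mm
for d in (100, 150, 200):                  # barlow-to-sensor distances
    print(f"{d} mm to sensor -> x{barlow_mag(d, f_barlow):.1f}")
```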
  20. Ok, I get it now. No, that won't work or be beneficial. In fact, it will work in one particular case, but it won't be beneficial. Let me explain.

     Consider two cases: in the first, the H-alpha 5nm band contains both Ha signal and NII signal; in the second, it contains only Ha signal. We won't consider a third case - only NII signal - as it makes no sense: you would end up with a "blank image" (NII - NII = 0).

     First case: from an SNR perspective, capturing both signals at the same time is beneficial - it improves SNR as there is more signal being captured. Now, you could take another image of NII and remove the NII signal from the first image, but the result would be far worse than shooting pure Ha 3nm. First, 5nm passes more LP than 3nm, so the 5nm Ha + NII image will have more LP and its associated noise. When you remove the NII signal, you won't remove the noise associated with it. In fact you'll add a bit more NII shot noise - the one from the "standalone" NII image - plus all the other noise in the NII image.

     In the end you will have: Ha signal / (Ha shot noise + 5nm LP noise + 3nm LP noise + NII shot noise counted twice + read and dark noise from both images), versus Ha signal / (Ha shot noise + 3nm LP noise + single dark and read noise) if you use a 3nm Ha filter. Compare both to the original (Ha signal + NII signal) / (Ha shot noise + NII shot noise + 5nm LP noise + single dark and read noise). If NII is strong, the 5nm filter could produce better SNR than a 3nm that captures only Ha - see the sketch after this post.

     Second case: there is not much to this, because the 3nm image will not capture any signal the 5nm image lacks - subtracting it will just inject read noise, LP noise and dark current noise into your 5nm image. Nothing gained, quite a bit of noise injected.
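A toy SNR comparison of the three cases above (my own illustration - noise terms add in quadrature, and every number here is a made-up example, not measured data):

```python
import math

def snr(signal, *noises):
    """SNR with independent noise terms added in quadrature."""
    return signal / math.sqrt(sum(n ** 2 for n in noises))

ha, nii = 100.0, 30.0                      # assumed signal electrons
lp5, lp3 = 8.0, 5.0                        # assumed LP noise, 5nm / 3nm band
rd = 3.0                                   # assumed read + dark noise

# 5nm as-is: Ha + NII captured together as one signal
print("5nm Ha+NII:", snr(ha + nii, math.sqrt(ha), math.sqrt(nii), lp5, rd))

# Pure 3nm Ha
print("3nm Ha    :", snr(ha, math.sqrt(ha), lp3, rd))

# 5nm minus a separate NII image: NII shot noise enters twice,
# and both images contribute LP / read / dark noise
print("5nm - NII :", snr(ha, math.sqrt(ha), math.sqrt(nii),
                         math.sqrt(nii), lp5, lp3, rd, rd))
```

With these example numbers the subtracted version comes out clearly worst, which is the point of the post.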
  21. Why don't you just go with a regular Plossl in the 20mm version? Something like the Vixen NPL will be cheaper and will offer almost the same specs - 15mm eye relief and 50 degree FOV. Alternatively, if you want a larger FOV - maybe the 62 degree line? The 68 degree line will almost certainly be better but is quite a bit more expensive.
  22. A very decent camera for planetary imaging, but not very well suited to DSO imaging. It will largely depend on the scope that you plan to put it on. It has a rather small sensor, and if you have a short focal length scope, the addition of a x0.5 focal reducer will enable you to do some EEVA with it - which is very similar to imaging, and you will be able to record your live sessions to process further. One advantage of this camera: if you progress further with your DSO imaging, it will work nicely as a guide camera.
  23. Maybe you could give PEMPro a try? There is a trial license. Here is what you should do:
     1. Record a video of a star being tracked (no guiding). For this you need either a webcam or an ASCOM camera. Maybe check whether your DSLR has ASCOM drivers that can be used for this. Alternatively, see if PEMPro can work with AVI files or another movie format (like SER).
     2. Generate a PEC curve in PEMPro.
     3. PEMPro says on their website that it can upload the PEC curve to most mounts - so I guess it can do that for the EQM-35 as well?
  24. I would say that the second one is iTelescope. I base my answer on the diffraction spike orientation: my guess is that a remote telescope operator is going to align the camera to one of the axes, and the scope is probably mounted similarly - which ensures the diffraction spikes are aligned with the sensor.
  25. If you already have an NII filter - why would you want to extract it from Ha? You have the means to record it as is.

     I suppose the original post suggested that you can get NII data by using 5nm Ha and 3nm Ha. 5nm Ha is wide enough to capture both Ha and NII, as their wavelengths are 656.461nm and 658.527nm: 656.5 +/- 2.5 => 654nm to 659nm, which includes 658.5nm. Ha 3nm will not do that: 656.5 +/- 1.5 => 655nm to 658nm does not include 658.5nm. So the 5nm image will be Ha signal + NII signal, while the 3nm image will be Ha signal only, and therefore 5nm image - 3nm image = Ha signal + NII signal - Ha signal = NII signal - see the sketch below.

     I would not worry much about the red component of LP - it will be an offset or maybe a very slight gradient, and that can be wiped from the image.
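The bandpass check above as a sketch (line wavelengths as quoted in the post):

```python
HA, NII = 656.461, 658.527                 # line wavelengths in nm

def in_band(line_nm, center_nm, width_nm):
    """True if a spectral line falls inside a filter's bandpass."""
    return abs(line_nm - center_nm) <= width_nm / 2

print("5nm Ha passes NII:", in_band(NII, 656.5, 5))   # True:  654-659nm
print("3nm Ha passes NII:", in_band(NII, 656.5, 3))   # False: 655-658nm
```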