Everything posted by vlaiv

  1. If it's not too much to ask - could that be just the linear stack (32-bit FITS / TIFF) without DBE, deconvolution or any other processing?
  2. This is true only for light intensity - a measurable, physical quantity: the number of photons or the energy per unit time. That is not the same as perceived brightness.
  3. That is because of the things I listed - but primarily because of the non-linearity of our brightness response. I don't mean the log-based nature of human perception, but the fact that, even expressed on the magnitude scale, our vision becomes non-linear at low intensity. This is the reason we have a preferred exit pupil for DSO observing depending on LP levels, and also the reason filters work best at a certain exit pupil: it creates the most perceived contrast. Both sky and target are darkened by the same amount, but to our perception the sky gets darker than the target - or rather, the perceived contrast increases.
  4. Do consider that faster scopes require more expensive eyepieces to produce equally pleasing images, and that longer FL scopes make it easier to achieve high magnifications while short FL scopes make it easier to achieve low magnifications. If your primary interest is high-power viewing - slower scope. If you enjoy wide-field views more - faster scope.
  5. Here is another way to look at all of this - fast vs slow. Let's not fix aperture, let's fix magnification. We have a certain eyepiece - for example 28mm 68 degrees. We want to use it at a certain magnification so it gives us a certain true field of view. Let that be x20. For this we need a telescope with a 560mm focal length. We can use an 80mm f/7 scope for that, or we can use a 130mm f/4.3 scope. The faster one will give us a brighter view of the target as it has more light-gathering ability - see the sketch below.
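A quick sketch of the arithmetic in the post above: fix the eyepiece and magnification, then compare the two scopes that deliver the same 560mm focal length. All values are the ones quoted in the post.

```python
eyepiece_fl_mm = 28.0
magnification = 20.0
needed_fl_mm = eyepiece_fl_mm * magnification          # 560mm focal length required
print(f"Required focal length: {needed_fl_mm:.0f}mm")

for aperture_mm, f_ratio in [(80.0, 7.0), (130.0, 4.3)]:
    focal_length = aperture_mm * f_ratio               # ~560mm in both cases
    exit_pupil = aperture_mm / magnification           # mm
    light_gathering = (aperture_mm / 80.0) ** 2        # relative to the 80mm scope
    print(f"{aperture_mm:.0f}mm f/{f_ratio}: FL={focal_length:.0f}mm, "
          f"exit pupil={exit_pupil:.1f}mm, light gathering x{light_gathering:.2f}")
```

Same magnification and true field either way; the 130mm scope simply collects about 2.6x more light.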
  6. It might be that the origins of the terms fast/slow/speed have something to do with imaging - but even there they are "wrong". I tend to relate them to the angle of the light beam at focus.
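A small illustration of that last point: the half-angle of the converging light cone at focus depends only on the focal ratio, roughly arctan(1 / (2 * N)). The specific focal ratios below are just examples, not values from the post.

```python
import math

for f_ratio in (4.0, 7.0, 10.0):
    half_angle_deg = math.degrees(math.atan(1.0 / (2.0 * f_ratio)))
    print(f"f/{f_ratio:g}: light cone half-angle ~ {half_angle_deg:.1f} degrees")
```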
  7. Only when comparing two scopes with the same aperture - the same light-gathering ability. It is not the same if the aperture size is different.
  8. Here we need to distinguish a couple of concepts: light intensity, visual magnitude, and the actual perception of brightness.

Light intensity is easy - it is the number of photons or the energy; in any case a physical, measurable quantity, a flux of sorts. The above is true for light intensity. People noticed long ago that doubling the intensity of light does not produce twice as bright a light in the eye of the observer. We sense on a logarithmic scale - hence the magnitude system, which is based on a logarithm. What we see as a one-step increase in brightness is in fact a multiplicative change in intensity.

This works well in the regime where there is enough light, but when we step into the threshold regime things get even weirder - our brain starts doing some funky stuff to the image we see. We never see shot noise, although at those levels of light we should; a noise filter kicks in. We form the image via "long integration" - we remember what we saw a few seconds ago, and once we have seen something it is easier to see it again, because our brain combines the signal with our expectations and memory. The absence of all light is not black: when we have no light and no reference to compare with, we see grey rather than pitch black. This effect is called eigengrau - see https://en.wikipedia.org/wiki/Eigengrau

In order to see the sky background as grey, we need a field stop and it needs to provide a baseline value of black. If there is too small a difference between the two, we will see both as eigengrau with no contrast difference (too high a magnification under too dark skies). Similarly, we need enough contrast between the target and the background sky.

There is that, but there is also a real difference in the scopes / glass used. Not all scopes are equal and not all eyepieces are equal. Not all fields are equally illuminated - there might be some vignetting, which is highest at the edge of the field, where the field stop / our reference black should be. Some eyepieces have lower transmission, and some impart a certain tint to the image - a slightly yellowish / warm tone or a slightly bluish / cold tone. A change of tone can change our brightness perception, as not all colors are perceived as equally bright (green is the brightest, red is dimmer and blue is dimmer still, but combinations can be brighter - yellow, for example, is perceived as brighter than green for the same intensity / flux).

And there you go - why two seemingly identical setups provide the viewer with a different experience.
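A minimal sketch of the intensity / magnitude relation described above: a fixed step in perceived brightness (one magnitude) corresponds to a multiplicative change in physical intensity, per the Pogson relation.

```python
import math

def magnitude_difference(flux_ratio):
    """Pogson relation: delta_m = -2.5 * log10(flux1 / flux2)."""
    return -2.5 * math.log10(flux_ratio)

# Doubling the intensity is only ~0.75 mag brighter, not "twice as bright"
print(magnitude_difference(2.0))
# A x100 intensity ratio is exactly 5 magnitudes
print(magnitude_difference(100.0))
```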
  9. Agreed! A 43mm 82-degree eyepiece will be 3" - the most a 2" barrel can offer is something like 40mm 70 degrees (or maybe closer to 38mm).
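A rough check of the barrel-size claim above. The field stop diameter is approximately eyepiece focal length times apparent FOV in radians, and a 2" barrel limits the field stop to roughly 46mm. This is only an approximation; real eyepiece designs deviate from it somewhat.

```python
import math

def approx_field_stop_mm(eyepiece_fl_mm, afov_deg):
    # Approximate field stop diameter; real designs differ by a few mm.
    return eyepiece_fl_mm * math.radians(afov_deg)

print(approx_field_stop_mm(43, 82))   # ~61.5mm -> needs a 3" barrel
print(approx_field_stop_mm(40, 70))   # ~48.9mm -> right at / just past the 2" limit
print(approx_field_stop_mm(38, 70))   # ~46.4mm -> about the 2" maximum
```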
  10. I would say that the illuminated field has something to do with it as well. Most fast scopes are built to illuminate a larger field, and for this reason they can display a wider true field of view than their slower counterparts. Faster scopes are therefore "richer field" telescopes - able to put more stars into a single field of view and also able to fit really large objects into the FOV with enough context to make them stand out better. There is another factor - faster scopes can produce the maximum exit pupil with reasonable eyepieces. Take an F/10 scope: in order to get a 7mm exit pupil you would need a 70mm eyepiece, and you'll have a hard time finding one. With an F/5 scope that is a 35mm EP - easy (but not cheap if you want good correction all the way to the edge).
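The exit-pupil arithmetic from the post above, spelled out: the eyepiece focal length needed for a given exit pupil is simply exit pupil times focal ratio.

```python
target_exit_pupil_mm = 7.0

for f_ratio in (10.0, 5.0):
    eyepiece_fl = target_exit_pupil_mm * f_ratio   # exit pupil = eyepiece FL / focal ratio
    print(f"f/{f_ratio:g} scope: ~{eyepiece_fl:.0f}mm eyepiece for a 7mm exit pupil")
```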
  11. That scope will be very good for planetary imaging. Just make sure you collimate it properly and add a barlow lens. You'll need something like a x2.5 - x3 barlow, but any decent barlow will do, since the magnification factor of a barlow depends on the barlow-to-sensor distance, so you can "dial in" the wanted magnification by altering the distance to the sensor (adding spacers).

It will work as a DSO imaging rig as well - just add a x0.5 focal reducer - but make sure you understand the limitations. First - exposure length. You will be sampling at a very high rate (big zoom) and any imperfection in tracking will show. You need to limit your exposures to probably something like 10s or even less. Second is field of view - it will be very small. For example, look at M13 with this combination: Or maybe M51: You won't be able to capture larger objects like M33, for example: In fact, here are some images that I captured with a camera like that and an 8" F/6 Newtonian + x0.5 GSO focal reducer:

This means that you can get your feet wet with DSO imaging with this camera, but I do agree, the real step forward would be: 1. DSLR 2. Coma corrector 3. Small guide scope + this camera as guider
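A hedged sketch of the sampling arithmetic behind the exposure-length and field-of-view warnings above. The 3.75um pixel size and 1200mm native focal length are assumed example values for illustration, not figures from the post.

```python
def sampling_arcsec_per_px(pixel_um, focal_length_mm):
    # Standard plate-scale formula: 206.265 * pixel size (um) / focal length (mm)
    return 206.265 * pixel_um / focal_length_mm

pixel_um = 3.75       # assumed small planetary-type sensor
native_fl = 1200.0    # assumed native focal length, mm

print(sampling_arcsec_per_px(pixel_um, native_fl * 2.5))  # with x2.5 barlow (planetary)
print(sampling_arcsec_per_px(pixel_um, native_fl * 0.5))  # with x0.5 reducer (DSO / EEVA)
```

The barlowed setup samples at a fraction of an arcsecond per pixel, which is why even small tracking errors show up and why short exposures are needed.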
  12. Ok, I get it now. No, that won't work or be beneficial. In fact, it will work in one particular case, but it won't be beneficial. Let me explain.

Consider two cases: in the first, the H-alpha 5nm band contains both Ha signal and NII signal; in the second, it contains only Ha signal. We won't consider a third case - only NII signal - as it makes no sense: you would end up with a "blank image" (NII - NII = 0).

First case: From an SNR perspective, capturing both signals at the same time is beneficial - it improves SNR as there is more signal being captured. Now, you could take another image of NII and remove the NII signal from the first image, but the result would be far worse than doing pure Ha at 3nm. First, 5nm passes more LP than 3nm, which means the 5nm Ha + NII image will have more LP and the associated noise. When you subtract the NII signal you won't remove the noise associated with it - in fact you'll add a bit more shot noise, the one from the "standalone" NII image, along with all the other noise in the NII image. In the end you will have Ha signal / (Ha shot noise + 5nm LP noise + 3nm LP noise + NII shot noise from both images + read and dark noise from both images), versus Ha signal / (Ha shot noise + 3nm LP noise + single dark and read noise) if you use 3nm Ha. Compare that to the original Ha signal + NII signal / (Ha shot noise + NII shot noise + 5nm LP noise + single dark and read noise). If NII is strong, the 5nm filter could produce better SNR than a 3nm filter that captures only Ha. A worked example of this bookkeeping follows below.

Second case: There is not much to this - the 3nm filter will not capture any signal; it will just inject read noise, LP noise and dark current noise into your 5nm image. Nothing gained, quite a bit of noise injected.
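An illustrative-only sketch of the SNR comparison above. All signal and noise values are made-up example numbers; the point is how the noise terms combine in quadrature in each case.

```python
import math

ha, nii = 100.0, 30.0          # example signals (electrons)
lp_5nm, lp_3nm = 40.0, 25.0    # example light-pollution signal per filter bandwidth
read_dark = 5.0                # combined read + dark noise per exposure (electrons RMS)

# Case: plain 3nm Ha exposure
snr_3nm = ha / math.sqrt(ha + lp_3nm + read_dark**2)

# Case: single 5nm exposure (Ha + NII captured together)
snr_5nm = (ha + nii) / math.sqrt(ha + nii + lp_5nm + read_dark**2)

# Case: 5nm exposure minus a separate NII exposure, trying to recover pure Ha
noise_subtracted = math.sqrt(ha + nii + lp_5nm + read_dark**2      # from the 5nm frame
                             + nii + lp_3nm + read_dark**2)        # from the NII frame
snr_ha_by_subtraction = ha / noise_subtracted

print(snr_3nm, snr_5nm, snr_ha_by_subtraction)
```

With these example numbers the combined 5nm exposure has the best SNR, plain 3nm Ha is next, and the subtraction approach is clearly the worst, matching the argument in the post.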
  13. Why don't you just go with a regular Plossl in the 20mm version? Something like the Vixen NPL will be cheaper and will offer almost the same specs - 15mm eye relief and a 50-degree FOV. Alternatively, if you want a larger FOV - maybe the 62-degree line? The 68-degree line will almost certainly be better but is quite a bit more expensive.
  14. Very decent camera for planetary imaging but not very suited for DSO imaging. A lot will depend on the scope that you plan to put it on. It has a rather small sensor, and if you have a short focal length scope, adding a x0.5 focal reducer will enable you to do some EEVA with it - which is very similar to imaging - and you will be able to record your live sessions to process further. One advantage of this camera is that if you progress further with DSO imaging, it will work nicely as a guide camera.
  15. Maybe you could give PEM PRO a try? There is a trial license. Here is what you should do: 1. Record a video of a star being tracked (no guiding). For this you need either a web cam or an ASCOM camera. Maybe check whether your DSLR has ASCOM drivers that can be used for this. Alternatively, see if PEM PRO can work with AVI files or another movie format (like SER). 2. Generate a PEC curve in PEM PRO. 3. PEM PRO says on their website that it can upload the PEC curve to most mounts - so I guess it can do that for the EQM-35 as well?
  16. I would say that the second is iTelescope. I base my answer on the orientation of the diffraction spikes - my guess is that a remote telescope operator is going to align the camera to one of the axes, and the scope is probably mounted similarly, which ensures the diffraction spikes are aligned with the sensor.
  17. If you already have an NII filter - why would you want to extract it from Ha? You have the means to record it directly. I suppose the original post suggested that you can get NII data by using 5nm Ha and 3nm Ha. The 5nm Ha is wide enough to capture both Ha and NII, as their wavelengths are 656.461nm and 658.527nm: 656.5 +/- 2.5 => 654nm to 659nm, which includes 658.5nm, while the 3nm Ha will not: 656.5 +/- 1.5 => 655nm to 658nm, which does not include 658.5nm. The 5nm image will be Ha signal + NII signal, while the 3nm image will be Ha signal only, and therefore 5nm image - 3nm image = Ha signal + NII signal - Ha signal = NII signal. I would not worry much about the red component of LP - it will be an offset or maybe a very slight gradient, and that can be wiped from the image.
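The bandpass arithmetic from the post above, spelled out using the same wavelengths and filter centre:

```python
ha_line, nii_line = 656.461, 658.527   # emission line wavelengths from the post, nm
filter_centre = 656.5                  # nm

def passes(line_nm, centre_nm, width_nm):
    # True if the line falls inside the filter bandpass
    half = width_nm / 2.0
    return centre_nm - half <= line_nm <= centre_nm + half

for width in (5.0, 3.0):
    lines = [name for name, wl in (("Ha", ha_line), ("NII", nii_line))
             if passes(wl, filter_centre, width)]
    print(f"{width}nm Ha filter passes: {lines}")
```

The 5nm filter passes both lines, the 3nm filter passes Ha only, which is why the difference image isolates NII.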
  18. I think that the USB connection is abstracted away too much from the software (a good thing for software developers). It just acts as a serial connection to the software, and the best you can do is send / receive over it. The controllers handle any issues with noise, transfer speed, and possible errors and correction mechanisms...
  19. I don't really see why these filters need to be used with an OSC camera. Usage with mono is allowed and, if I have anything to say about it, encouraged. We use LRGB as an imaging model - why not a Multiband + Ha/OIII/SII model?
  20. I agree. Does that mean that what is captured in the image is not part of reality, or does it show that human vision is lacking in some ways? Would you call this image Photoshopped? No human being will ever be able to see something like this - because it was recorded in UV light.
  21. I resent this term being used to describe the treatment of astronomical images, as it implies changing the contents of the image in a way that would dramatically impact the truthfulness of the subject being displayed. Many imagers go to great lengths to preserve the documentary value of astrophotographs and try not to alter the image in any way other than to show what has been captured by the camera sensor.

The distinction between the visual appearance and a photograph of an astronomical object comes from the difference in sensitivity between the human eye and the camera sensor, and also from the difference in linearity. A camera sensor is mostly linear, while human vision is logarithmic in nature (that is why we have the magnitude system - what we perceive as a linear brightness decrease corresponds to a log function). The process of digitally manipulating the captured data has the purpose of transforming the linear response of the camera sensor into the log space of human vision. It also "compresses" the dynamic range of the target into the much smaller range that can be shown by a single image, either on paper or on a computer screen (remember, you can easily observe two stars with a 5 magnitude or greater difference at the eyepiece, which is a x100 intensity difference between them - try showing that nicely on a computer screen with a total of 255 different intensity levels, let alone on a printed photograph).
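A minimal sketch of the linear-to-perceptual transform described above: compressing a large intensity range into the 255 levels of a display. The asinh stretch is one common choice, and the three star fluxes are made-up example values spaced x100 apart (roughly 5 magnitudes).

```python
import numpy as np

fluxes = np.array([1.0, 100.0, 10_000.0])   # example fluxes, x100 steps apart

# Straight linear scaling to 8 bits: the faintest star is crushed to zero
linear_8bit = np.round(255 * fluxes / fluxes.max())

# asinh stretch: all three stars remain distinguishable in 8 bits
stretched_8bit = np.round(255 * np.arcsinh(fluxes) / np.arcsinh(fluxes.max()))

print(linear_8bit)
print(stretched_8bit)
```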
  22. Since the article is from 2017 - it might be some time before it hits the market
  23. In my experience, guiding issues are again best solved "in hardware". What you are describing sounds more like an issue with periodic error than guide scope sag / guiding issues (well, it can possibly be solved with guide settings, but sometimes the periodic error is fast enough that guiding simply can't deal with it efficiently). You can check whether it is indeed periodic error by aligning your frame with RA/DEC - make the RA direction horizontal, for example, and DEC vertical. This is easily checked if you start an exposure and slew the mount in RA - you should get a horizontal line from a bright star. If it is at an angle, rotate the camera a bit until it is horizontal. Now just image as usual and later check the direction of elongation of the stars.

But you are right, for the sake of argument and for academic reasons, let's discuss what can be done in software. What you are describing here is known as a PSF - point spread function - the way a single point (or pixel) spreads its value onto adjacent pixels. You have made a very simple case where the PSF is just a single line L pixels long with a uniform distribution. In reality the PSF will be a bit distorted. Deconvolution does precisely what you have been trying to do: as long as you know the PSF - the description of how a single point spreads - you can reverse the process (to an extent, depending on how much noise there is).

I played with this idea a long time ago, when my HEQ5 mount had quite a bit of periodic error (since then I have added the belt mod and tuned my mount, so I no longer have these issues). Here is an example of what I was able to achieve and a description of how I went about it. By the way, tracking errors are a much better candidate for fixing by deconvolution than coma or field curvature, because the PSF is constant across the image - the whole image was subject to the same spread of pixel values.

Here is the base image: You can see that the stars are elongated into ellipses. I shot this image at fairly high resolution and, as a consequence, even the small PE that I was not able to guide out resulted in star elongation. If you check, you will see that the elongation is in the RA direction (compare to Stellarium, for example).

The first step is to estimate the blur kernel. I did the following: convolution is a bit like multiplication - if you convolve (blur) A with B, you get the same result as if you had convolved B with A (this is a slightly strange concept - blurring the PSF with your whole image - but it works). This helps us find the blur kernel. I took a couple of stars from the image and deconvolved those with a "perfect looking star". I did this for each channel. These are the resulting blur PSFs - 5 of them. I then took their average as the final PSF. Testing it back on a single star: we managed to remove some of the elliptical shape of the star.

Here is what the final "product" looks like after fixing the elongation issue: Of course, the process of deconvolution raised the noise floor and it was much harder to process this image (hence the black clipping and less background detail) - but the result can be seen: stars do look rounder and detail is better in the central part of the nebula.

If you are a PixInsight user, you might want to check out the dynamic PSF extraction tool (I think it is called something like that). I don't use PI so I can't be certain, but I believe it will provide you with the first step - the average star shape over the image. From that you can deconvolve against a perfect star shape to get the blur kernel and then in turn use that blur kernel to deconvolve the whole image.
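A minimal sketch of the idea in the post above, not the exact workflow used there: Richardson-Lucy deconvolution with a known PSF, where the PSF is an assumed short horizontal streak standing in for the uniform "L pixels long" RA-drift case.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    """Basic Richardson-Lucy deconvolution with a spatially constant PSF."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)   # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Assumed example PSF: a 1x5 horizontal streak with uniform weights
psf = np.zeros((5, 5))
psf[2, :] = 1.0
psf /= psf.sum()

# Stand-in data; in practice this would be the linear sub with elongated stars
image = np.random.default_rng(0).random((64, 64))
restored = richardson_lucy(image, psf)
```

In practice the PSF would be estimated from the stars themselves (as described in the post), and more iterations amplify noise, which is why the deconvolved image was harder to process.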
  24. What software did you use? Do you have an image of the blurred star for me to try to deconvolve?
  25. Deconvolution (sharpening) is a very difficult process, while geometric distortion correction is a fairly easy one. I'll explain why that is in a minute. These two are handled by different algorithms, of course, and geometric distortion correction is readily available in software. For example, in Gimp there is a lens distortion plugin: That can easily reverse geometric distortion introduced by a lens / telescope.

The difference between these two operations can be explained like this. Imagine you have a very large table and some glasses of water on the table. Geometric distortion is just moving the glasses of water around - changing their position. It is rather easy to correct for that: just move the glasses back to where they were (you need a way of figuring out how they were moved, but that is easy - you know the positions of the stars, or you can take an image of a grid like the one above to see how the lens distorts it). Blurring due to coma or field curvature does something different - it takes a bit of water from each glass and transfers it to other glasses; it mixes the water between the glasses, and now you need to "unmix" it back. That is a much more difficult problem than geometric distortion, even if you take test shots. By the way, moving the glasses is moving pixel values around, and mixing the water is changing pixel values in exactly the same way - you take some of the value of one pixel, spread it around and add it to the values of other pixels, and you do that for every pixel in the image - that is what blurring does.

Deconvolution is not that common in software, and where it is implemented it is usually implemented for a constant kernel - which is useful for regular blur, like defocus blur, seeing blur or motion blur, where all the pixels in the image are affected in the same way. Coma blur and field curvature blur add another level of complexity - they change how "the water is spread around" depending on which pixel we are talking about. In the center of the image there is almost no coma and no field curvature (in fact there is none) and the amount grows as we move away from the optical axis. It is very difficult to model and calculate how to get the image back under these changing circumstances.
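A toy illustration of the "glasses of water" analogy above, with made-up data: geometric distortion moves pixel values around (reversible by moving them back), while blurring spreads a value across neighbouring pixels (much harder to undo).

```python
import numpy as np
from scipy.ndimage import shift, uniform_filter

image = np.zeros((9, 9))
image[4, 4] = 1.0                       # one "glass of water"

moved = shift(image, (2, 1), order=0)   # distortion: the value just relocates intact
mixed = uniform_filter(image, size=3)   # blur: the value is spread over a 3x3 neighbourhood

print(moved.max(), mixed.max())         # 1.0 vs ~0.11 - same total, very different spread
```

Undoing `shift` is just another shift; undoing `uniform_filter` requires deconvolution, and with a spatially varying kernel (coma, field curvature) even that breaks down.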