Everything posted by vlaiv

  1. That is very interesting. Here is the profile of the brightest star in the Ha crop that you attached above: a simple, not very precise measurement puts the left side of the half height at about 12 and the right side at about 18 - a difference of 6px - the same as the measurement in AstroImageJ above, which gave around 5.8px as the average FWHM for this particular crop.

     OK, I understand that it is hard to see, and I can offer one of three ways of explaining. In fact I'll do two right away:

     1. I'll show you
     2. I'll provide you with an analogy
     3. If you want, we can go through the complete math behind it so you can see that it is correct (or maybe we can spot the error if there is one)

     1. Let's do the demo: This is the brightest star in that Ha crop - enlarged x10 - with nearest neighbor resampling (which is really not resampling - but rather just drawing large squares instead of sample points - this is often the reason people think pixels are little squares - the other is that camera pixels are indeed almost squares - not quite, but almost). Indeed, when we bin 4x4 - we get this: almost only 3x3 pixels (not quite, some surrounding pixels have some value as well). Look what we get when we enlarge this little star back: we get that same star - just minus the noise. Look how big it is - the left one is 14px across and the right one is 14px across - same diameter. Look what happens when I add random noise to the right image - just some Gaussian noise: yes - the star on the right was actually binned x4 and it now looks the same after we enlarged it and added back some of the noise that was removed when we binned it x4 (x4 SNR improvement).

     2. Let me try an analogy. Take a look at a regular linear function - just a straight line. You can take 20 points on it and record all 20. You don't need all 20 of them to exactly describe that straight line - you need only two. You can throw away 18 points and you won't lose any precision - you'll still be able to draw that exact straight line. You won't have all the points on it - but you'll be able to calculate and draw any point on it. A line is highly regular - it has no bumps, no details, it is just straight - and for that reason it requires only 2 points to know every point on that line. Let's go a bit more complex - let's take a quadratic function. Here you can no longer know the quadratic function with only 2 points. You can still record 50 or 100 points - but you don't need all of those points. You in fact need only 3 points to record a quadratic function perfectly. You can throw away the other points. Now we have introduced a bit of curvature - we no longer have a straight line - we can have one bump. It turns out that for any polynomial function there is an exact number of points that you need to record that function. The larger the polynomial degree - the more points you need, but also the more "intricate" (or wavy) that function is. There is a relationship between the smoothness and waviness (if there is such a word) of a function and the number of points that you need to record it.

     This happens with images as well - particularly with blurred images. Blur adds a certain level of smoothness to the image - and an image can be viewed as a 2D function where pixel intensity is the value of that function at (X, Y) - the pixel coordinates. A pixel becomes a single point in a 2D plane (rather than being a square). There is a similar relationship as with the polynomial functions above - depending on how smooth the 2D function is - you need a certain number of points to record it. The smoother it is - the fewer points you need.
     The FWHM of a star tells you how smooth the image is - it shows you the blur kernel that has been used to blur the image (more or less - see the above discussion on how well star PSF describes the blur of the image). It tells you how smooth the function is, and from that you can determine how many points you need to record the image. The last part of the analogy: if you have 2 points on a line, you can know every other point on that line. Here it goes - if you sample at a certain resolution - you can restore the image at a higher resolution - you'll know the value of the smooth function at the higher resolution - but it will not contain detail, because detail was not there in the first place when you recorded it - the blur was such that it smoothed out detail on that scale.

     3. I won't go into the complete math here unless you want me to, but just wanted to point out that it will require a bit of the Nyquist theorem, a bit of Fourier transform - basic knowledge of Gaussian distribution and such - and we get basically the same thing as with polynomials - we get a rule on how many sampling points (sampling rate) we need for a 2D function with certain smoothness (band limited signal) to completely record it. We can also discuss noise and how it behaves in all of that.
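     If you want to play with this yourself, here is a minimal sketch of the demo in Python (numpy only, with a synthetic Gaussian star standing in for the real crop):

```python
import numpy as np

def gaussian_star(size, fwhm):
    """Synthetic star: 2D Gaussian with the given FWHM in pixels."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

def bin_image(img, factor):
    """Average-bin an image by an integer factor."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale_nearest(img, factor):
    """Enlarge by drawing each sample as a big square (nearest neighbor)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def crude_width(profile):
    """Rough FWHM: number of samples in a row above half of its peak."""
    above = np.where(profile >= profile.max() / 2)[0]
    return above[-1] - above[0] + 1

star = gaussian_star(64, fwhm=5.8)     # ~ the Ha star measured above
binned = bin_image(star, 4)            # almost only 3x3 pixels remain
restored = upscale_nearest(binned, 4)  # enlarge back

# Prints ~6 px and 8 px (the second is quantised to multiples of 4 by the
# blocky enlarge) - the point is that the star did not get smaller.
print(crude_width(star[32]), crude_width(restored[32]))
```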
  2. Here is the comparison that I did with text - only this time applied to the OIII sub that you supplied: I used simple bicubic interpolation (nothing fancy, it was easier to control the exact position) to scale up. This is the stretched difference and histogram (display range is from -0.003 to 0.003 and the image was scaled to the 0-1 range): not much difference between these two images. I would bin x4 and leave it as is - upsampling will produce the same result that we discussed above - empty resolution. You can bin x2 everything and then in the end decide if you want to bin another x2 in software, or - if you have really good SNR - you can do good and careful sharpening of your data.
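     To reproduce this kind of comparison, here is a minimal sketch (scipy's cubic spline zoom stands in for whichever bicubic implementation you prefer, and a blurred random field stands in for the sub):

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

rng = np.random.default_rng(0)

# Stand-in for a sub: smooth (blurred) structure scaled to the 0-1 range.
img = gaussian_filter(rng.random((256, 256)), sigma=4)
img = (img - img.min()) / (img.max() - img.min())

binned = img.reshape(64, 4, 64, 4).mean(axis=(1, 3))  # bin x4
back = zoom(binned, 4, order=3)                       # cubic upscale back

diff = img - back
print(diff.std(), diff.min(), diff.max())  # residual is tiny if data is smooth
```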
  3. Well, here it goes:

     1. It would probably be beneficial to increase your exposure, as the background signal per pixel will drop by x4 if the barlow is x2. How much? That depends on how close you are now to the ideal sub length. If you are spot on - use x4 longer subs. This is not strictly necessary, and in principle you can even keep your current exposure length, but your final SNR will suffer a bit.

     2. Yes, you will need x4 the total exposure to compensate for the x2 barlow and get the same SNR.

     3. You probably don't want to do this barlow thing. Most people are oversampling as is (very few are under sampled or properly sampled) and adding a barlow will only create lower SNR per sub and in total (for the same imaging time - hence the recommendation above to increase both) without any real resolution benefit. What is your current sampling rate and what is your average FWHM? If going by the equipment in your signature - 200P and QHY8L - then there is some sense in adding a x2 barlow if your guiding is good. But it will require special processing and you won't get a much more close-up view. You are probably using linear debayering at the moment, and the barlow will be useful only if you use super pixel mode - which will result in the same sampling rate. I'm tempted to say - don't bother with the barlow.
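     The x4 figures above are just geometry - here is the back-of-envelope arithmetic (sub length and total time are made-up example values):

```python
# A x2 barlow doubles focal length, so each pixel sees a patch of sky with
# 1/4 the area - per-pixel signal drops by x4.
barlow = 2.0
area_ratio = barlow ** 2        # x4 less light per pixel

sub_length = 120.0              # ideal sub length without barlow (s), assumed
print(sub_length * area_ratio)  # 480 s - keeps the same background level per sub

# Shot-noise SNR scales with sqrt(total time), so to recover the same
# per-pixel SNR you also need x4 the total integration.
total_hours = 4.0
print(total_hours * area_ratio)  # 16 h
```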
  4. I suspected that PI got silly results - and indeed that seems to be the case. I'm rather reluctant to speak against PI because it will look like I'm bad mouthing their developers, but on several occasions I have found that their algorithms and explanations don't make complete sense (photometric color calibration, their explanation of drizzle, their SNR estimation, ...). In any case, here is what AstroImageJ reports for some stars in the Ha image: 5.81, 5.83, 5.72, 5.75, 5.86 - let's call that an average of 5.8px, or at a resolution of 0.52"/px that gives a FWHM of 3.016", or 3". You should in fact bin x4 rather than x2, as that will give you 2.08"/px, which is closer to the real resolution of 1.885"/px that this frame supports. The OIII sub has values of 3.83, 3.85, 3.89, 3.5, 3.82, 3.7, 3.76 - I would again say the average is somewhere around 3.75px (maybe 3.8). This time we are sampling at 1.04"/px, so the resulting FWHM is 3.9". It is to be expected that 500nm will be worse than 656nm due to the atmosphere. The ideal sampling rate here is again about 2"/px, as 3.9 / 1.6 = 2.4375"/px (actually it would be bin x5, as that is closer with 2.6"/px).
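     The rule of thumb used above in code form - a small sketch (FWHM / 1.6 is the factor used throughout this thread):

```python
def optimal_sampling_rate(fwhm_arcsec):
    """Ideal sampling rate (arcsec/px) from measured FWHM: FWHM / 1.6."""
    return fwhm_arcsec / 1.6

def best_bin_factor(current_rate, fwhm_arcsec):
    """Integer bin factor that lands closest to the ideal rate."""
    return max(1, round(optimal_sampling_rate(fwhm_arcsec) / current_rate))

# Ha example above: 5.8 px at 0.52"/px
fwhm = 5.8 * 0.52                   # 3.016"
print(optimal_sampling_rate(fwhm))  # ~1.885"/px
print(best_bin_factor(0.52, fwhm))  # 4
```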
  5. You are quite correct. In fact there are several scenarios to consider there. First, let me just say that star FWHM is a very good indicator of captured resolution in well corrected systems. This means stars are pretty much the same on axis and in the far corners, and color correction is apochromatic. Also, the system needs to be diffraction limited across a range of wavelengths (this pretty much excludes achromats unless they are crazy slow). In case the above conditions are not met - different things might happen. One images with a Ha filter and said scope - star FWHM matches the resolution of the Ha nebula. One images with a lum filter - some stars will have larger FWHM than others - it depends on star type. In general, red stars will have FWHM that is a better match to Ha resolution than blue stars. However - we assume perfect focus here. What if focus was done using FWHM as a measure on a hot blue star? Then Ha will be out of focus and resolution will suffer for it - FWHM is then again a better match to Ha resolution, although FWHM suffers from CA and Ha suffers from slight defocus.

     In fact, an interesting experiment can be done - take any scope and an OSC sub and debayer it using some simple technique like super pixel or splitting. Measure the FWHM of each channel for a single star - you will find them different. Could be that I accidentally swapped blue/red, as I was not paying attention to bayer matrix orientation (green is easy - there are two of them so you can't miss). The scope is a TS 80 F/6 APO, and the sampling rate is around 2.6"/px (1.3"/px when raw, but this is every other pixel of the bayer matrix). FWHM varies by 20% - but you can't really see that in the image. That is too small a variation between red and blue to be noticeable - it might be that sub to sub variation in FWHM is as much as that. And this is a well corrected scope - Strehl 0.95 in green and red and over 0.8 in blue.
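     A minimal sketch of that experiment using the 'splitting' debayer (a synthetic star mosaic stands in for a real raw sub, with blue drawn ~20% wider than red):

```python
import numpy as np

def star(size, fwhm):
    sigma = fwhm / 2.355
    y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

def split_bayer(raw):
    """'Splitting' debayer of an RGGB mosaic: every other pixel per plane,
    so e.g. a 1.3"/px raw frame becomes 2.6"/px per colour plane."""
    return raw[0::2, 0::2], raw[0::2, 1::2], raw[1::2, 0::2], raw[1::2, 1::2]

def crude_fwhm(img):
    """Half-max width of the row through the peak - rough but enough."""
    y, x = np.unravel_index(np.argmax(img), img.shape)
    row = img[y]
    above = np.where(row >= row.max() / 2)[0]
    return above[-1] - above[0] + 1

# Synthetic OSC star: blue blurred ~20% more than red, as in the test above
raw = np.zeros((128, 128))
raw[0::2, 0::2] = star(64, fwhm=8.0)   # R plane
raw[0::2, 1::2] = star(64, fwhm=8.8)   # G
raw[1::2, 0::2] = star(64, fwhm=8.8)   # G
raw[1::2, 1::2] = star(64, fwhm=9.6)   # B plane, ~20% wider

r, g1, g2, b = split_bayer(raw)
print(crude_fwhm(r), crude_fwhm(b))    # blue comes out wider than red
```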
  6. In principle - there is little doubt that it is. Stellar objects are far enough away to really be considered point sources in our case (even the closest stars have diameters in mas - milliarcseconds; for example Betelgeuse has 50mas and it is gigantic and very close - most others have less than 1mas, which is 1/1000 of our regular sampling rates). Blur by its definition is "how does light from a single point spread around". A star profile contains only light from that star, so it is indeed the PSF of combined optics / tracking and atmosphere (in fact everything that contributes). Most star profiles are indeed close to Gaussian. Some will say that Moffat is a better PSF approximation for astronomy, and it probably is for telescopes that have very good tracking - professional telescopes. Unfortunately, amateur telescopes don't usually have such perfect tracking, and because tracking error comes into it - the star profile is probably more Gaussian than Moffat - but both are pretty close. FWHM is a rather good indicator of the shape of a Gaussian - as it is tied to the key parameter of the curve via the simple relation sigma = FWHM / 2.355.

     We get different values of FWHM for stars in our image (single sub) because optics have different aberrations depending on how far we are from the optical axis. Different stellar classes have different amounts of various wavelengths - hot stars are more blue and produce more blue light, while red stars are obviously rich in red wavelengths. Different wavelengths behave differently in the atmosphere and also in optics - even with mirrored systems, the Airy disk is of different diameter. All of that creates FWHMs of different values across the image, but these differences are very small compared to the actual resolution. As we have seen, even a sampling rate change of x2 will not have a drastic impact on what we see in the image. Small differences in FWHM, depending on where in the image we are looking and on the wavelength distribution of the source, don't have a drastic impact on resolution. No one ever said - look, half of my frame is blurrier than the other half - well, maybe a Newtonian scope owner without a coma corrector might think that, but they won't say it out loud.

     If you are doubtful about the FWHM approach - there is a simple way to test it. Take one of your images and measure FWHM. Take one of the high resolution images of the same object by the Hubble space telescope. Scale the Hubble image so that it is the same resolution as yours and then do a gaussian blur on it such that sigma is equal to your FWHM / 2.355. Compare results visually and you should see about the same level of blur in both images (the above is not strictly correct - as you'll be doing the blur on a stretched image rather than on linear data - but it is a good enough approximation).
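     A minimal sketch of that test (file name, FWHM and sampling rates are placeholder values; it assumes a mono reference image):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from imageio.v3 import imread

# Placeholder inputs - substitute your own values and files.
my_fwhm = 3.0    # measured FWHM of your stack, arcseconds
my_rate = 1.0    # your sampling rate, arcsec/px
ref_rate = 0.05  # sampling rate of the Hubble reference, arcsec/px

reference = imread("hubble_reference.png").astype(float)

# Scale the reference down to your resolution...
matched = zoom(reference, ref_rate / my_rate)
# ...then blur it with sigma = FWHM / 2.355, expressed in your pixels.
blurred = gaussian_filter(matched, sigma=(my_fwhm / 2.355) / my_rate)
# 'blurred' should now show roughly the same level of detail as your stack.
```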
  7. Could you post a crop of both frames - just a piece so it'll be a small upload, but with enough stars, without clipping? I want to measure both with AstroImageJ. I've done a similar comparison with actual data and I'd be happy to do it again for you if you wish. The point is in the numbers rather than words - what the difference between two subs looks like when we know that resolution has been limited by blur of a certain FWHM.
  8. I always love to measure things - simple math in this case - take your stack (luminance is fine) while linear and measure FWHM. Divide that by 1.6 and you get the optimum resolution for your image. 0.9"/px is a tall order - you need something like 1.44" FWHM. Btw, purely visually - I would say that your crop above is over sampled as well. Here is what I would call, visually, a properly sampled image: stars really need to be pinpoints. Then again - some people don't mind if the image is a bit blurry, if they don't have to put too much effort into seeing details.
  9. But look - I just made it even bigger now! All it took was CTRL+ in my browser to do so ...
  10. I've done that, and the residual is usually smaller than one in a thousand or so - and usually contains noise more than anything else. Here is an example done with x2 resolution reduction - strong under-sampling case. Original text was sampled at ideal resolution (1"/px, FWHM at 1.6"), then an image was created at lower resolution by binning x2 (2"/px) and up scaled back to 1"/px using cubic b-spline resampling. A third image was made as the difference of the two. The histogram is shown on the left and it shows aliasing artifacts. The difference is in the -0.2 to 0.2 range and the actual histogram of the difference is: as you see - the standard deviation of the residual is 0.0284 - and that is for x2 lower resolution with under sampling. Here is the case for oversampling - so ideal resolution is at 2"/px and we have images at 1"/px and 2"/px. Again the histogram of the difference: now values are +/- 0.06 and StdDev is 0.0097. The histogram is not centered and symmetric because there is probably a sub pixel shift due to the resampling method used (not very sophisticated - nor did I align the subs in any way). As you see - even a x2 difference in resolution is likely to cause a difference that is buried in noise - less than a few percent if over sampling.
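     Here is a minimal sketch of the same measurement on synthetic data (the blur sigmas are made up, just to contrast the two cases):

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

rng = np.random.default_rng(1)

def residual_std(img, factor=2):
    """Std dev of: original - (bin x factor, then cubic spline upscale back)."""
    h, w = img.shape
    binned = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return (img - zoom(binned, factor, order=3)).std()

base = rng.random((512, 512))

# Detail near the pixel scale -> binning x2 under-samples and loses detail
undersampled = gaussian_filter(base, sigma=0.7)
# Blur well above the pixel scale -> the full-rate image is over-sampled
oversampled = gaussian_filter(base, sigma=3.0)

print(residual_std(undersampled))  # noticeably larger residual
print(residual_std(oversampled))   # much smaller - "empty" resolution
```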
  11. Speed of the comet combined with exposure length? Comet stacking works by rejecting certain pixels from the stack. You can either align subs on the comet - stars will trail - or on the stars - the comet will trail. Comet stacking works by doing both - taking the image of the comet from the stack aligned on the comet, and the stars from the stack aligned on the stars. The issue is to distinguish what are stars and what is comet - and it does that by examining the pixels in each stack - how much they change between subs. Sometimes, if the comet is too slow, there is not much change between star positions in the image, and the algorithm can't tell if something belongs to the comet (it stayed the same with respect to the comet) or is a star (moved enough with respect to the comet) - so you get the above mess. Not sure how you can try to make it better - are there any parameters for stacking that you can tweak?
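     The rejection step is usually some form of per-pixel kappa-sigma clipping - a minimal sketch of the idea (not necessarily the exact algorithm your stacker uses):

```python
import numpy as np

def sigma_clipped_mean(subs, kappa=2.5):
    """Per-pixel kappa-sigma rejection across a stack of aligned subs.

    In a comet-aligned stack, a star passes through a given pixel in only a
    few subs, so its values sit far from the per-pixel mean and get rejected.
    If the comet barely moves between subs, star values no longer stand out
    and the rejection fails - which is the mess seen above."""
    stack = np.asarray(subs, dtype=float)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0) + 1e-12
    keep = np.abs(stack - mean) <= kappa * std
    return (stack * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)

# subs: list of comet-aligned 2D arrays
# comet_only = sigma_clipped_mean(subs)
```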
  12. I find it rather interesting that you both actually saw a difference between 0.9" and 1.1". I think that most people would be hard pressed to see the difference between double sampling rates - for example 0.8"/px and 1.6"/px. Could you at least tell me what sort of difference you saw? Here is some text that has been blurred so that we hit the optimum sampling rate - one has been sampled at 1.1"/px and the other at 0.9"/px. Do you see any difference - apart from the left image obviously being slightly smaller? To my eye it just seems a bit sharper than the image sampled at 0.9"/px (the larger one), although the smaller letters might be harder to read without glasses.
  13. I would also like to see the difference - do you mind showing it to me?
  14. Let's be constructive. A goto dobsonian as-is will indeed be very poor for DSO imaging. It will be very good for planetary / lunar imaging - if you add a planetary camera and a barlow lens. This is a technique called lucky imaging and you'll need to do a bit of research on how it is properly done (how to record a movie/sequence of subs, how to pre process them, and how to stack them and sharpen the results after). All of that if you are interested in planetary - and why not? Getting yourself a planetary camera is generally a good step - as it can be used as a guide camera later on, if and when you decide to move on to DSO imaging.

     For your budget of £350, you can't do much really. The first option would be to purchase a star tracker or AzGTI mount. The AzGTI mount will need modifications to be placed into EQ mode - which means a firmware upgrade and some accessories, like a wedge and counterweight shaft + counterweight. It is a bit of DIY. The Star Adventurer on the other hand is pretty much a self contained package, but it is a star tracker and is meant to hold a camera + lens and only the smallest of imaging scopes. In either case - these mounts are more suited for camera + lens or a very small scope than anything serious. If you want a proper mount - well, that is sort of over your budget. If you purchase second hand you might be able to get something decent - and maybe even new, but it won't have goto. You should be able to fit an EQ5 mount + tracking motor into £350. And that is it. Use either of the two with camera+lens to get you started in AP.
  15. Here it is - simple RGB ratio + gamma 2.2 (no color calibration): Quite dull looking? It turns out - most of these galaxies are indeed just yellow / orange-ish. Btw - very nice data set.
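     Roughly, the recipe is something like this - a simplified sketch (ratios carry the color, gamma 2.2 acts as the stretch; no background wipe, no color calibration):

```python
import numpy as np

def rgb_ratio_gamma(r, g, b, gamma=2.2):
    """Colour from channel ratios: the gamma stretch goes on the luminance,
    while each channel keeps its linear ratio to the total - so hue is not
    distorted by the stretch."""
    total = r + g + b + 1e-12                    # avoid division by zero
    lum = (total / total.max()) ** (1 / gamma)   # gamma 2.2 "stretch"
    return np.clip(np.dstack([(c / total) * lum for c in (r, g, b)]), 0, 1)

# r, g, b: linear, background-subtracted channel stacks (2D arrays)
# image = rgb_ratio_gamma(r, g, b)
```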
  16. I just quoted @CloudMagnet and pointed out what can be seen in that particular processing. I downloaded image and tried to see if I can locate anything - but my best effort did not produce anything worthwhile compared to this image.
  17. This is as good as it will get. It shows both the Heart and Soul nebulae and the Double Cluster.
  18. What would be too expensive? Here is a rather good little scope that will outclass all of the above and it does not cost much: https://www.firstlightoptics.com/maksutov/skywatcher-skymax-90-ota.html It is also compact and easy to carry. Pair that with the Az-EQ Avant mount and you have a nice combination.

     If you really want a refractor telescope, then I would say - I think the 70/700 Mercury probably has decent optics (not 100% sure on that one), however mechanically it is a toy. The focuser is a poor 1.25" unit, and the OTA is probably fitted with plastic parts (like maybe the dew shield is plastic - I know Celestron models have a plastic dew shield). Here are a few decent contenders for a nice small refractor scope:

     All rounder, but most expensive: https://www.teleskop-express.de/shop/product_info.php/info/p1151_TS-Optics-70-mm-F6-ED-Travel-Refractor-with-modern-2--RAP-Focuser.html Want low power views? Sure - 2" focuser - just add a long FL eyepiece. Want to have a peek at planets and the Moon? Sure - just add a barlow and a short FL eyepiece and you'll be viewing at x140 in no time. It has a retractable dew shield, a very nice focuser, and it is solidly built.

     Next contender: http://www.opticstar.com/Run/Astronomy/Astro-Telescopes-Opticstar.asp?p=0_10_1_1_65 90mm F/8.8 achromat. I've heard good things about these scopes.

     Then there is this one: http://www.opticstar.com/Run/Astronomy/Astro-Telescopes-Opticstar.asp?p=0_10_1_1_54 But it looks like it has been out of stock for quite some time now. Maybe seek a second hand offer on that one? Maybe this one sports the same optics? https://www.teleskop-express.de/shop/product_info.php/info/p7935_TS-Optics-80-600mm-Refractor-Teleskope---optical-tube-with-rings.html However, it seems to have similar mechanics to the low cost units. The focuser is metallic but still a 1.25" unit.

     It is a real shame that the 70/700 does not come with a better body and focuser. There is also a 70/500 btw - which would be better if you are interested in wide field observing more than high power. The ST 80 F/5 is a really classic fast achromat that will also be good for low power viewing.
  19. Much of what has been said simply makes no sense.

     With regards to SNR estimation - the first thing to understand is that there is no single SNR value per image. Signal in the image is not the same everywhere (otherwise it would be an all gray image without any detail) and part of the noise depends on that signal (target shot noise) - so noise is definitively not the same either - hence SNR can't be the same. Every pixel has its own SNR that we can estimate with different methods. The standard deviation of a set of values sampled from a population of measurements of a certain value is noise. There are no "noise pixels" in the image - every pixel has a certain value associated with it. That value can be close to the real value of the signal - small noise - or far from the real value - large noise. Even if noise is large - it might not matter if signal is also large - we are interested in SNR. In any case - calling some pixels noise pixels makes no sense.

     Drizzle is in fact a sort of interpolation. That might not be immediately obvious - but it is. In fact, mathematically it is close to the worst kind of interpolation. The first thing to understand is that each pixel of the output image gets only a subset of values from the input stack. If you are stacking 40 subs and you drizzle x2 - on average each output pixel will get only 10 samples. This is because input pixels are "shrunken down" and empty space is created between them. Some of those pixels are mapped to output pixels, but empty space gets mapped to other output pixels - in "this round" some output pixels receive no value. Stacking 10 subs can't produce the same SNR as stacking 40 subs - simple as that.

     Now the second point, about interpolation - here is another simple graph that will make obvious what happens: this is a very simple example of drizzle mapping in action. A simple shift and no rotation - for simplicity. Black pixels are output pixels and the red pixel is the one being drizzled. Each of the output pixels will get a proportional piece of the red pixel. 60% of the red pixel value will be stacked onto the first output pixel and 40% onto the second - because 60% of the surface of the red pixel "falls" onto the first output pixel and 40% onto the second. So far so good - it just involves surfaces and there is no "interpolation". But let me ask you - where does the center of the red pixel fall? Let the pixel side be 1 unit long. The center of the first output pixel is 0 and the center of the second output pixel is 1 (since the pixel side is 1 unit long). If 60% of the surface of the red pixel falls onto the first output pixel - where does its center lie? It actually has a coordinate of 0.4 (all Y coordinates in this case are the same, so we don't write them down). Since the distance between output pixel centers is 1 - that means the red pixel center is 0.6 away from the second output pixel. Guess what linear interpolation does? It linearly "divides" values based on distance to the centers - we have 0.4:0.6 - one pixel gets 40% and the other gets 60% - the closer one gets the higher value - so the first output pixel gets 60% and the second output pixel gets 40%. Hold on, this is exactly the same as the drizzle integration above - but that can't be right, because "it does not use interpolation at all!".

     Finally - let's examine two more graphs. First, the Fourier transform of a triangle (that is the kernel for linear interpolation): the right graph shows the attenuation in the frequency domain - the low pass filter shape. The yellow graph here shows the same for the Lanczos3 resampling method. The first one falls off faster - making it a stronger low pass filter.
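     Here is that 60/40 example in a few lines - the point being that drizzle's area split and linear interpolation's distance split are one and the same:

```python
def drizzle_weights(center):
    """Area split of a unit input pixel (pixfrac = 1) centred at 'center'
    over unit output pixels centred at 0 and 1. The input spans
    [center - 0.5, center + 0.5]; output pixel 0 spans [-0.5, 0.5]."""
    return 1 - center, center

def linear_interp_weights(center):
    """Linear interpolation weights for a sample at 'center' between
    output pixel centres 0 and 1 - the closer centre gets more weight."""
    return 1 - center, center

# The red pixel from the graph above: centre at 0.4
print(drizzle_weights(0.4))         # (0.6, 0.4)
print(linear_interp_weights(0.4))   # (0.6, 0.4) - the identical 60/40 split
```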
  20. Not sure if this version is going to appeal to people - it was more about number crunching than about "artistic impression". I'm quite surprised that it turned out so "smoky" and "dust in space"-like (at least to me).
  21. The issue with the Mak is that you really can't speed it up significantly. Well, maybe you can, but we don't have an adequate focal reducer available. It is also baffled "tight" - the illuminated field is not very large. At 6" people might start to feel boxed in, as that is already 1800mm of focal length. The C6 on the other hand can do planets. Maybe not as well as the Mak150, but I don't think it will necessarily be much worse. It can image planets - we know that most planetary imaging is done on large cats. One can purchase this: https://www.teleskop-express.de/shop/product_info.php/info/p11425_Starizona-Night-Owl-2--0-4x-Focal-Reducer---Corrector-for-SC-Telescopes.html and get an F/4 imaging "machine" that can illuminate a 16mm field - that means ASI183 and ASI533 on 6" of aperture and 600mm FL - not bad. It is an extremely light scope at 3.7kg - I bet it can be mounted on an AzGTI. If it were reasonably priced - I think it could do well as an all around scope for beginners - but it is very expensive (it is not twice the price of the Mak150, like the 5" counterparts are - but still, quite expensive for a 6" scope).
  22. What is the rant quota per user? It's been a while since I brushed up on my CoC familiarity - two times a year? Three times a year? I feel I'm entitled to one, so here it comes ...

     What's with SCTs? What could possibly be making them that much more expensive? Celestron C5 - £569 vs SW Mak127 £265 vs Bresser MC-127 £312.

     Catadioptric - check
     Spherical primary - check
     Glass for corrector plate - check
     Similar tube size - check
     Funny mirror focusing mechanism that causes image shift - check

     I don't really see much more craftsmanship involved; if anything - less glass is used for the corrector plate (much thinner, isn't it?). All of that must mean twice the price - there is no other explanation.

     /rant over

     But seriously, this has stopped me from recommending this scope as a "do it all" budget option for novice astronomers. Not much available as a mounting option either - well, nothing sensible.
  23. I have called multiple times for people to do an experiment. I did it once and simple resizing won. In part because DSS threw some artifacts into the drizzled image, and in general because the SNR loss was obvious and there was no gain in resolution in favor of drizzle. Want a larger megapixel count in your image? Why don't you simply do one of the two:

     - take all your calibrated subs prior to stacking and just scale them up x2 or x3 or x4 or even x3.5 - whatever number you choose - and stack them like that. Use a fancy resampling algorithm that produces good results, like Lanczos3 or splines or similar.
     - do the same, but after you are done stacking.

     In fact, I think the first approach might even do as much as drizzle in terms of improving resolution (real or not), and it certainly won't lose SNR, since it provides a sample for each pixel from each sub instead of spreading pixels around and not covering all the pixels of the output with every sub.

     Back to the original idea - I'll propose an experiment and someone else can do it with their own data (partly because I have not imaged for ages and in part because I want to exclude any sort of bias on my part - too many parts in there?). It does not matter if your data is properly sampled or even over sampled - it is rather easy to create under sampled data - just bin it in software (do it after calibration). For our purposes it is just like using larger pixels. Do 3"/px or 4"/px - that way you'll be certain you are under sampling.

     Experimental protocol:

     1. Produce under sampled data by taking your calibrated subs and then binning them. PI has it as integer resample - use the average method.
     2. Take one sub as the reference frame for the other subs to be aligned on.
     3. Do drizzle stacking x2 without any fancy stuff - just simple average (no weighting of the subs, no sub normalization - just plain average).
     4. Take all subs and scale them up x2 using Lanczos3 resampling and then do regular stacking - again just simple average, no fancy stuff.
     5. Do the comparison by selecting 2 regions on both stacks. Try to select the same regions on both stacks (this is why I said take the same reference frame - the stacks should be roughly aligned this way). One region should be pure background - without anything but dark background - try not to select stars or anything. Measure the standard deviation in this region on both stacks. The second region should be a uniform part of the target (just take some nebulosity without too many features - the end of a spiral arm, a galaxy halo, or whatever) - measure the average pixel value here. Divide the respective measures and compare. This will show how SNR fares between the two approaches. Also measure the FWHM of the resulting stacks - this will show you what happens with resolution. A sketch of this step is shown below.
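     A minimal sketch of the step 5 measurement (region coordinates are hypothetical - pick your own, the same on both stacks):

```python
import numpy as np

def snr_measures(stack, bg, target):
    """Step 5: noise = std dev of a pure background region, signal = mean
    of a uniform target region; regions are (y0, y1, x0, x1) slices."""
    noise = stack[bg[0]:bg[1], bg[2]:bg[3]].std()
    signal = stack[target[0]:target[1], target[2]:target[3]].mean()
    return signal, noise, signal / noise

# 'drizzled_stack' and 'upscaled_stack' are your two stacks as 2D arrays:
# print(snr_measures(drizzled_stack, (10, 60, 10, 60), (200, 260, 200, 260)))
# print(snr_measures(upscaled_stack, (10, 60, 10, 60), (200, 260, 200, 260)))
```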