Everything posted by vlaiv

  1. Not necessarily. For every two inches there is roughly a x2.8 price increase, so a 12" TS would be about 2600 * 2.8 = 7280€.
  2. Thanks. I'm not 100% sure what you are referring to as differences seen by others. If you mean that they see, for example, 1.1"/px as containing less information than 0.9"/px (which could be true, depending on the actual blur, but is hardly visible to a human in an image), then I don't think it is due to software. I can't imagine what a piece of software could do to make it so.

On the other hand, I do have a candidate explanation for why people might think that over sampled images contain more information / detail than they do in reality. It is pretty much the same mechanism that causes us to see the man in the moon or shapes in the clouds. Our brain is good at spotting patterns - so good that it will see patterns where there are none. It will pick up patterns from the noise - something that resembles something we have seen before.

Over sampled images are noisy and "more zoomed in". The brain just tries to make sense of it - since it is more zoomed in, it expects to see more detail even if there is none. Noise is often interpreted as sharpness and detail - because our brain wants to see detail, it expects detail if the image is already zoomed in that much. A smooth image that has no noise will look artificial - the same image with added noise will look better, although no true detail has been added.

The example above is a good one - the smooth star looks very zoomed in and artificial; add a bit of noise and it looks much more like the original star. Although both noise patterns are random and nothing alike, our brain recognizes that randomness as similarity. We often mistake noise for sharpness, because noise has those high frequency components - here is an example: Which image looks more blurred? Same image - the left one just had some random noise added.
  3. Mount PE needs to be smooth to be able to guide with long exposures like 6 or 8s. I tried that with my Heq5 after the belt mod and I found that at around these exposure lengths PE starts to make my guiding worse - too much time passes between corrections and the mount deviates too much. For rough mounts like the Heq5 it's best to keep it under 4-5s.
  4. The red arrow shows the drop down that you can use to select exposure length (the slider is for brightness of the view, I believe - it won't change exposure length, but I'm not sure). The blue arrow shows the stats - RA error in pixels (and in arc seconds in parentheses), DEC error and combined (total) error. You want the total error in arc seconds to be as low as possible, but 0.5" is a realistic bottom value - it is very unlikely you'll do better than that on a Heq5 (a Mesu 200 does 0.2", for example, but costs x6-7 more). This value depends on whether you entered the pixel size and focal length of your guide scope correctly - it's worth double checking those as well.
  5. On an average night you can expect 0.6-0.7" RMS - as far as I can tell, you are already there? On a good night it will go down to 0.5" RMS total - but for that you need to smooth out the seeing, and this is why you want a 3s exposure - it will average out the seeing while still being short enough to pick up any irregularity in mount motion.
  6. What mount is this? The only recommendation I can give you - use a bit longer exposure, maybe 3s or 4s. That will smooth out the seeing. Otherwise, that guiding is OK for a mount like the HEQ5/EQ6 or similar.
  7. You have the data, and you already processed at bin x1 or bin x2 - just take those stacks and bin them until you get to 2"/px and then process them again. Software binning is available in PI as integer resample (hopefully it works ok) - just select the average method.

This does warrant additional theoretical explanation. I'll try to keep it to a minimum, but still provide a correct and understandable explanation of what is going on.

Blur is convolution of a function with another function (we will call this other function the blur kernel) - this is included for correctness, and if you want you can look up convolution as a math operation. The blur kernel in our case is well approximated by a Gaussian shape - it is the point spread function and also the star profile (all three are connected and the same in our case). Convolution in the spatial domain is the same as multiplication in the frequency domain. In other words - if you blur by some kernel (convolution), you are in fact filtering (multiplication of Fourier transforms - multiplication in the frequency domain). This is the crucial step for understanding what is going on, since convolution is not easy to visualize and analyze - but multiplication is, we are used to it. Another important bit of information is that the Fourier transform of a Gaussian profile is another Gaussian profile. This means that our blur is in fact a filter with a Gaussian shape.

We now have the means to understand what happens with the data when it gets blurred. We just need to add some noise into the mix. Here is a screen shot that will be important: I had to do it for my own benefit - I was not sure that this would happen (I thought it would), so I needed to check - we want to be right in our explanation. The first image is just pure random (Gaussian) noise. The second image is the Fourier transform of the first, and the third image is the second image plotted as a 2d function. I wanted to show that if you have random noise, it will be equally distributed over all frequencies.

Now we have this - this explains all of the above, if we just analyze it right. The curve represents our filter response in the frequency domain - a Gaussian (as the FT of the Gaussian blur kernel) - and red represents noise distributed over frequencies. Left on the X axis are low frequencies, right are high frequencies. At 0 this graph is 1, or 100% high. As you go higher in frequency, the value of this graph falls and approaches 0. Remember, we are multiplying with this:

A number multiplied by 1 gives the same number.
A number multiplied by less than 1 - say 1/2 - gives a smaller number.
A number multiplied by 0 gives 0 (regardless of the original number - we lose its value).

At some point the multiplication factor gets very low - like 1%, or 0.01 as a number - and this means that this frequency and all frequencies above it are simply very low - in fact, at some point they become lower than the noise floor on the graph. We don't need to consider those frequencies - there simply is no meaningful information there any more; it has been filtered out by blur and the noise is probably larger than the information. SNR at these frequencies is less than 1 (and progressively less as frequencies get higher).

Now the Nyquist theorem comes in, which says: you need to sample x2 per the highest frequency that you want to record. This is how blur size - FWHM - relates to sampling rate. Simple as that. We take a Gaussian profile of that FWHM, do a Fourier transform of it to get another Gaussian, and look at what frequency that other Gaussian falls below some very small value - like 1%. There is no point in trying to record higher frequencies than that.

What does this have to do with SNR and sharpening and all? Let's take another look at the above graph. We decided on our sampling rate (red vertical line - we record only frequencies less than that, those to the left of it). The ideal filter would be that red box - all frequencies up to our sampling frequency passed fully (multiplication by 1 gives back exactly that number) and after that frequency, 0 - we simply don't care about higher frequencies since we already sampled at our sample rate. However, the blur on our image does not act like that. It gradually falls from 1 down to close to 0 - the blue line. In the process it attenuates all the frequencies, by multiplying with a number less than 1. We "lose" all the information in the shaded part of the graph. We don't actually lose it - it is there, it is just attenuated - multiplied by a number less than 1 (the height of the blue graph).

In order to restore the full information, we need to multiply each of these frequencies by the inverse of the number it was originally multiplied by. This means that if a frequency was multiplied by 0.3, we need to multiply it back by 1/0.3 (or in other words, divide it by 0.3 - the inverse operation of multiplication) to get the original number back. That is what sharpening does - and the above Gaussian curve explains why we can sharpen: because our blur is not simply a frequency cut-off, it is a gradual filter that slowly reduces higher frequencies, and we sharpen by restoring those frequencies until we reach the limit set by the sampling rate.

The last bit is to understand how noise is impacted by this - just look at the previous filter image, the one that shows both the Gaussian and the noise. Each time we restore a certain frequency - push the blue line back up to 1 - we do the same with the red line; we push it up equally (as we make a frequency component larger we do the same with its associated noise - we increase the noise at that frequency as well). This is why sharpening increases noise in the image, and that is the reason you can sharpen only if you have high enough SNR. If not, you will amplify the noise.
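As an editorial aside, here is a minimal numeric sketch of the argument above (numpy only, not from the original post): take a Gaussian of a given FWHM, note that its Fourier transform is another Gaussian, find the frequency where that transform drops below a chosen cut-off, and read off the Nyquist pixel scale. The cut-off value is a judgment call of mine; a cut-off of about 10% happens to reproduce the FWHM / 1.6 rule of thumb quoted later in this thread.

```python
import numpy as np

def nyquist_pixel_scale(fwhm_arcsec, cutoff=0.1):
    """Pixel scale ("/px) needed to capture a Gaussian blur of the given FWHM."""
    sigma = fwhm_arcsec / 2.355                       # FWHM -> Gaussian sigma
    # FT of a Gaussian with std sigma has amplitude exp(-2 * pi^2 * sigma^2 * f^2);
    # solve for the frequency f_c where that amplitude equals the cut-off.
    f_c = np.sqrt(-np.log(cutoff) / (2 * np.pi**2)) / sigma   # cycles per arcsec
    return 1.0 / (2.0 * f_c)                          # Nyquist: two samples per cycle

for fwhm in (1.6, 3.0, 3.9):
    print(f'FWHM {fwhm}"  ->  sample at ~{nyquist_pixel_scale(fwhm):.2f} "/px')
```

With the 10% cut-off this prints roughly 1.0, 1.9 and 2.4 "/px, which lines up with the bin recommendations in the later posts; a stricter 1% cut-off gives correspondingly finer sampling.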
  8. I don't upsample images - I leave them at a resolution that is compatible with the level of detail that has been captured - or at least I advocate for people to do it like that. What would be the point of upsampling? In one of my posts above I showed how easy it is for anyone viewing the image to upsample it for viewing - just hit CTRL + in the browser and hey presto, you've got a larger image without additional detail. Why would you make that your processing step when anyone can do it themselves if they choose to?

If anything, I often bin data if it is over sampled - it improves SNR (removes noise, btw - I added noise above just to show you that it is the same thing; with the smooth star it is not obvious right away since the original star is noisy and looks a bit different, and sometimes it is hard to tell what is detail and what is noise - so I added noise to show you that the original image does not contain detail, it is just a smooth star profile + noise) and it looks nicer. A quick numeric check of this SNR claim is sketched below.

There is an additional bonus in all of this that we have not discussed. Even at the proper sampling rate, the data is still blurred. Because it is blurred we get to sample it at that rate, but we can use the frequency restoration technique to sharpen the image further and show all the detail there is at that level and make the image really sharp. In order to do this we need very good SNR, and one way to get this SNR is to bin in the first place - or not to over sample, as that spreads the signal and lowers SNR.

Look at this image captured by Rodd and processed by me (only luminance): here is the original thread: And here is the processing of the luminance (this is actually a crop of the central region): In my book this is the opposite of empty resolution - fully exploited resolution. This data is also binned x2. If you look at other processing examples you will find full resolution versions of this image - look at them at 1:1 and you'll see "empty resolution". This data had very high SNR (a lot of hours stacked) and that enabled me to sharpen things up to this level at this resolution.

I feel that we are again digressing into what looks nice vs what is actually needed to be recorded in the first place. The math above and the examples show that you don't need a high sampling rate if your FWHM is at a certain level. Whether you choose to use high resolution is up to you really. If you like it like that - larger and all - again, up to you, and I'm certain there is no right or wrong there. However, there is a right sampling rate for a certain blur if you want to optimize things.
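The "binning improves SNR" point is easy to verify numerically. A minimal sketch with synthetic data (my own construction, not from the original post): average-bin k x k pixels of uncorrelated noise and watch the noise standard deviation fall by a factor of k.

```python
import numpy as np

rng = np.random.default_rng(0)
img = 100.0 + rng.normal(0.0, 10.0, size=(1024, 1024))   # flat signal + Gaussian noise

def bin_average(a, k):
    """Average-bin a 2D array in k x k blocks (software binning, 'average' method)."""
    h, w = a.shape
    return a[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

for k in (2, 4):
    print(f"bin {k}x{k}: noise std {bin_average(img, k).std():.2f} vs unbinned {img.std():.2f}")
```

Expect roughly 5 and 2.5 against 10 - i.e. a x2 SNR improvement for 2x2 binning and x4 for 4x4, as claimed in the posts above.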
  9. This gives me a completely new perspective on how cheap a 4-5" Mak is - unbelievable value.
  10. That is very interesting. Here is the profile of the brightest star in the Ha crop that you attached above: A simple, not very precise measurement puts the left side of half height at about 12 and the right side at about 18 - a difference of 6px - the same as the measurement above in AstroImageJ, which gave around 5.8px as the average FWHM for this particular crop.

Ok, I understand that it is hard to see, and I can offer three ways of explaining. In fact I'll do two right away.

1. I'll show you
2. I'll provide you with an analogy
3. If you want, we can go through the complete math behind it so you can see that it is correct (or maybe we can spot the error if there is one)

1. Let's do a demo: This is the brightest star in that Ha crop - enlarged x10 - with nearest neighbor resampling (which is really not resampling, but rather just drawing large squares instead of sample points - this is often the reason people think pixels are little squares; the other is that camera pixels are indeed almost squares - not quite, but almost). Indeed, when we bin 4x4 we get this: almost only 3x3 pixels (not quite, some surrounding pixels have some value as well). Look what we get when we enlarge this little star back: we get that same star - just minus the noise. Look how big it is - the left one is 14px across and the right one is 14px across - same diameter. Look what happens when I add random noise to the right image - just some Gaussian noise: Yes - the star on the right was actually binned x4 and it now looks the same after we enlarged it back and added some noise of the kind that was removed when we binned it x4 (x4 SNR improvement). (A small numpy sketch of this bin-and-enlarge step follows after this post.)

2. Let me try an analogy. Take a regular linear function - just a straight line. You can take 20 points on it and record all 20. You don't need all 20 of them to exactly describe that straight line - you need only two. You can throw away 18 points and you won't lose any precision - you'll still be able to draw that exact straight line. You won't have all the points on it, but you'll be able to calculate and draw any point on it. A line is highly regular - it has no bumps, no details, it is just straight - and for that reason it requires only 2 points to know every point on that line.

Let's go a bit more complex - a quadratic function. Here you can no longer know the function with only 2 points. You can still record 50 or 100 points, but you don't need all of them. You in fact need only 3 points to record a quadratic function perfectly. You can throw away the other points. Now we have introduced a bit of curvature - we no longer have a straight line, we can have one bump. It turns out that for any polynomial function there is an exact number of points you need to record that function. The larger the polynomial degree, the more points you need, but also the more "intricate" (or wavy) the function is. There is a relationship between the smoothness or waviness (if there is such a word) of a function and the number of points you need to record it.

This happens with images as well - particularly with blurred images. Blur adds a certain level of smoothness to the image - and the image can be viewed as a 2d function where pixel intensity is the value of that function at (X, Y) pixel coordinates. A pixel becomes a single point in the 2d plane (rather than being a square). There is also a relationship like the one above with polynomial functions - depending on how smooth the 2d function is, you need a certain number of points to record it. The smoother it is, the fewer points you need.

The FWHM of a star tells you how smooth the image is - it shows you the blur kernel that has been used to blur the image (more or less - see the above discussion on how well the star PSF describes the blur of the image). It tells you how smooth the function is, and from that you can determine how many points you need to record the image. The last part of the analogy: if you have 2 points on a line you can know every other point on the line. Here it goes - if you sample at a certain resolution, you can restore the image at a higher resolution - you'll know the value of the smooth function at that higher resolution - but it will not contain detail, because the detail was not there in the first place when you recorded it - the blur was such that it smoothed out detail on that scale.

3. I won't go into the complete math here unless you want me to, but just wanted to point out that it will require a bit of the Nyquist theorem, a bit of the Fourier transform, basic knowledge of the Gaussian distribution and such - and we get basically the same thing as with polynomials - a rule for how many sampling points we need (the sampling rate) for a 2d function of a certain smoothness (a band limited signal) to record it completely. We can also discuss noise and how it behaves in all of that.
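Here is the small numpy sketch of the bin-and-enlarge step from point 1 above, using a synthetic star of my own rather than the actual Ha crop: build a Gaussian star, average-bin it 4x4, blow it back up with nearest neighbour, and compare the width at half maximum along the central row.

```python
import numpy as np

def gaussian_star(size=64, fwhm_px=14.0):
    sigma = fwhm_px / 2.355
    y, x = np.mgrid[:size, :size] - size / 2.0
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

def width_at_half_max(profile):
    above = np.where(profile >= profile.max() / 2.0)[0]
    return above[-1] - above[0] + 1

star = gaussian_star()
binned = star.reshape(16, 4, 16, 4).mean(axis=(1, 3))          # 4x4 average binning
restored = np.repeat(np.repeat(binned, 4, axis=0), 4, axis=1)  # nearest-neighbour blow-up

print(width_at_half_max(star[32]), width_at_half_max(restored[32]))
```

The two widths agree to within the 4 px block granularity of the blow-up: binning x4 did not make the star any smaller, it only removed fine sampling (and noise, had there been any).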
  11. Here is a comparison that I did with text - only this time applied to the OIII sub that you supplied: I used simple bicubic interpolation (nothing fancy, it was easier to control the exact position) to scale up. This is the stretched difference and its histogram (display range is from -0.003 to 0.003 and the image was scaled to the 0-1 range): Not much difference between these two images. I would bin x4 and leave it as is - upsampling will produce the same result we discussed above - empty resolution. You can bin everything x2 and then, in the end, decide whether you want to bin another x2 in software, or - if you have really good SNR - do good and careful sharpening of your data.
  12. Well, here it goes:

1. It would probably be beneficial to increase your exposure, as the background signal per pixel will drop by x4 if the barlow is x2. How much? That depends on how close you are now to the ideal sub length. If you are spot on - use x4 longer subs. This is not strictly necessary, and in principle you can even keep your current exposure length, but your final SNR will suffer a bit.

2. Yes, you will need x4 the total exposure to compensate for a x2 barlow and get the same SNR (a quick back-of-envelope check is sketched after this post).

3. You probably don't want to do this barlow thing. Most people are oversampling as is (very few are under sampled or properly sampled) and adding a barlow will only give lower SNR per sub and in total (for the same imaging time - hence the recommendation above to increase both) without any real resolution benefit. What is your current sampling rate and what is your average FWHM?

If going by the equipment in your signature - 200P and QHY8L - then there is some sense in adding a x2 barlow if your guiding is good. But it will require special processing and you won't get much more of a close up. You are probably using linear debayering at the moment, and the barlow will be useful only if you use super pixel mode - which will result in the same sampling rate. I'm tempted to say - don't bother with the barlow.
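A back-of-envelope check of points 1 and 2 above, under a simplification of my own (purely shot-noise / background-limited imaging): a x2 barlow spreads the same light over x4 the pixels, so per-pixel flux drops x4 and you need x4 the integration to recover the same per-pixel SNR.

```python
def snr(flux_per_px, exposure):
    """Shot-noise limited SNR per pixel: signal over sqrt(signal)."""
    signal = flux_per_px * exposure
    return signal / signal**0.5

flux = 100.0                      # arbitrary flux per pixel without the barlow
flux_barlow = flux / 2.0**2       # x2 barlow -> x4 less light per pixel

print(snr(flux, exposure=1.0), snr(flux_barlow, exposure=4.0))   # identical
```

Both print 10.0 in these units: quadrupling the total exposure exactly offsets the x4 flux dilution, as stated in point 2.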
  13. I suspected that PI got silly results - and indeed it seems to be the case. I'm rather reluctant to speak against PI because it will look like I'm bad mouthing their developers, but on several occasions I have found that their algorithms and explanations don't make complete sense (photometric color calibration, their explanation of drizzle, their SNR estimation, ...).

In any case, here is what AstroImageJ reports for some stars in the Ha image: 5.81, 5.83, 5.72, 5.75, 5.86 - let's call that an average of 5.8px, or at a resolution of 0.52"/px a FWHM of 3.016", so about 3". You should in fact bin x4 rather than x2, as that will give you 2.08"/px, which is closer to the real resolution of 1.885"/px that this frame supports.

The OIII sub has values of 3.83, 3.85, 3.89, 3.5, 3.82, 3.7, 3.76 - I would again say that the average is somewhere around 3.75px (maybe 3.8). This time we are sampling at 1.04"/px, so the resulting FWHM is 3.9". It is to be expected that 500nm is worse than 656nm due to the atmosphere. The ideal sampling rate here is again about 2"/px, as 3.9 / 1.6 = 2.4375 (actually bin x5 would be closer, at 2.6"/px).
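The arithmetic in this post can be wrapped into a few lines. A sketch of my own, using the FWHM / 1.6 rule and the measurements quoted above; the 0.52"/px native scale is taken from the post.

```python
def target_sampling(fwhm_px_list, pixel_scale, native_scale):
    """Average the per-star FWHM (px), convert to arcsec, apply the /1.6 rule."""
    fwhm_arcsec = sum(fwhm_px_list) / len(fwhm_px_list) * pixel_scale
    target = fwhm_arcsec / 1.6
    bin_factor = round(target / native_scale)
    return f'FWHM {fwhm_arcsec:.2f}" -> target ~{target:.2f} "/px -> bin x{bin_factor}'

print(target_sampling([5.81, 5.83, 5.72, 5.75, 5.86], 0.52, native_scale=0.52))            # Ha
print(target_sampling([3.83, 3.85, 3.89, 3.5, 3.82, 3.7, 3.76], 1.04, native_scale=0.52))  # OIII
```

This reproduces the numbers above: about 3.0" FWHM and bin x4 for the Ha frame, about 3.9" FWHM and bin x5 for the OIII frame.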
  14. You are quite correct. In fact there are several scenarios to consider there.

First, let me just say that star FWHM is a very good indicator of captured resolution in well corrected systems. This means stars are pretty much the same on axis and in the far corners, and color correction is apochromatic. Also, the system needs to be diffraction limited across the range of wavelengths (this pretty much excludes achromats unless they are crazy slow).

In case the above conditions are not met, different things can happen. One images with a Ha filter and said scope - star FWHM matches the resolution of the Ha nebula. One images with a lum filter - some stars will have larger FWHM than others, depending on star type. In general, red stars will have a FWHM that is a better match to the Ha resolution than blue stars. However, we assume perfect focus here. What if focus was done using FWHM as a measure on a hot blue star? Then Ha will be out of focus and resolution will suffer for it - FWHM is then again a better match to the Ha resolution, although the FWHM suffers from CA and the Ha suffers from slight defocus.

In fact, an interesting experiment can be done - take any scope and an OSC sub and debayer it using some simple technique like super pixel or splitting (see the sketch after this post). Measure the FWHM of each channel for a single star - you will find them different. It could be that I accidentally swapped blue/red as I was not paying attention to the bayer matrix orientation (green is easy - there are two of them so you can't miss). The scope is a TS 80 F/6 APO, and the sampling rate is around 2.6"/px (1.3"/px raw, but this is every other pixel of the bayer matrix). FWHM varies by 20% - but you can't really see that in the image. That is too small a variation between red and blue to be noticeable - it might be that sub to sub variation in FWHM is as much as that. And this is a well corrected scope, Strehl 0.95 in green and red and over 0.8 in blue.
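For the experiment suggested above, a minimal sketch of the "splitting" debayer (assuming an RGGB pattern - the actual pattern depends on the camera - and using random placeholder data): each colour plane is every other pixel of the raw mosaic, at half the resolution, which is why 1.3"/px raw becomes about 2.6"/px per channel. Per-channel FWHM can then be measured in a tool such as AstroImageJ.

```python
import numpy as np

def split_rggb(raw):
    """Split an RGGB Bayer mosaic into its four half-resolution colour planes."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, g1, g2, b

raw = np.random.default_rng(0).integers(0, 65535, size=(3520, 4656), dtype=np.uint16)
r, g1, g2, b = split_rggb(raw)
print(r.shape, g1.shape, g2.shape, b.shape)   # each plane is half-size in both axes
```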
  15. In principle, there is little doubt that it is. Stellar objects are far enough away to really be considered point sources in our case (even the closest stars have diameters in mas - milliarcseconds; for example Betelgeuse has 50mas and it is gigantic and very close - most others have less than 1mas, which is 1/1000 of our regular sampling rates). Blur, by its definition, is "how the light from a single point spreads around". A star profile contains only light from that star, so it is indeed the PSF of the combined optics / tracking and atmosphere (in fact everything that contributes).

Most star profiles are indeed close to Gaussian. Some will say that Moffat is a better PSF approximation for astronomy, and it probably is for telescopes that have very good tracking - professional telescopes. Unfortunately, amateur telescopes don't usually have such perfect tracking, and because tracking error comes into it, the star profile is probably more Gaussian than Moffat - but both are pretty close. FWHM is a rather good indicator of the shape of the Gaussian, as it is tied to the key parameter of the curve via the simple relation sigma = FWHM / ~2.355.

We get different values of FWHM for stars in our image (single sub) because optics have different aberrations depending on how far we are from the optical axis. Different stellar classes emit different amounts of the various wavelengths - hot stars are more blue and produce more blue light, while red stars are obviously rich in red wavelengths. Different wavelengths behave differently in the atmosphere and also in optics - even with mirrored systems, the Airy disk is of different diameter. All of that creates FWHMs of different values across the image, but these differences are very small compared to the actual resolution. As we have seen, even a sampling rate change of x2 does not have a drastic impact on what we see in the image. Small differences in FWHM depending on where in the image we are looking and on the wavelength distribution of the source don't have a drastic impact on resolution. No one ever said: look, half of my frame is blurrier than the other half - well, maybe a Newtonian owner without a coma corrector might think that, but they won't say it out loud.

If you are doubtful about the FWHM approach, there is a simple way to test it. Take one of your images and measure the FWHM. Take one of the high resolution images of the same object by the Hubble space telescope. Scale the Hubble image so that it is the same resolution as yours and then apply a Gaussian blur such that sigma equals your FWHM / 2.355. Compare the results visually and you should see about the same level of blur in both images (this is not strictly correct, as you'll be doing the blur on a stretched image rather than on linear data - but it is a good enough approximation).
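A sketch of the test suggested above, with placeholder file names and FWHM value of my own (and assuming scipy and astropy are available): blur the rescaled reference image with a Gaussian whose sigma is your measured FWHM / 2.355 and compare by eye.

```python
import numpy as np
from astropy.io import fits
from scipy.ndimage import gaussian_filter

reference = fits.getdata("hubble_rescaled.fits").astype(np.float64)  # placeholder file name
fwhm_px = 3.0                                   # your measured star FWHM, in pixels
sigma = fwhm_px / 2.355                         # FWHM -> Gaussian sigma
blurred = gaussian_filter(reference, sigma)
fits.writeto("hubble_blurred.fits", blurred, overwrite=True)
```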
  16. Could you post a crop of both frames - just a piece so it'll be a small upload, but with enough stars without clipping? I want to measure both with AstroImageJ. I've done a similar comparison with actual data and I'd be happy to do it again for you if you wish. The point is in the numbers rather than in the text - the difference between two subs when we know that resolution has been limited by blur of a certain FWHM.
  17. I always love to measure things - simple math in this case: take your stack (luminance is fine) while linear and measure the FWHM. Divide that by 1.6 and you get the optimum resolution for your image. 0.9"/px is a tall order - you need something like 1.44" FWHM. Btw, purely visually, I would say that your crop above is over sampled as well. Here is what I would call, visually, a properly sampled image: stars really need to be pinpoints. Then again, some people don't mind if an image is a bit blurry if they don't have to put too much effort into seeing the details.
  18. But look - I just made it even bigger now! All it took was CTRL+ in my browser to do so ...
  19. I've done that, and the residual is usually smaller than one in a thousand or so - and usually contains noise more than anything else. Here is an example done with a x2 resolution reduction - the strong under sampling case. The original text was sampled at the ideal resolution (1"/px, FWHM at 1.6"), then an image was created at lower resolution by binning x2 (2"/px) and up scaled back to 1"/px using cubic b-spline resampling. The third image was made as the difference of the two. The histogram is shown on the left and it shows aliasing artifacts. The difference is in the -0.2 to 0.2 range and the actual histogram of the difference is: As you see, the standard deviation of the residual is 0.0284 - and that is for x2 lower resolution with under sampling.

Here is the case for oversampling - so the ideal resolution is 2"/px and we have images at 1"/px and 2"/px: Again, the histogram of the difference: Now the values are +/- 0.06 and the StdDev is 0.0097. The histogram is not centered and symmetric because there is probably a sub pixel shift due to the resampling method used (not very sophisticated - nor did I align the subs in any way). As you see, even a x2 difference in resolution is likely to cause a difference that is buried in the noise - less than a few percent when over sampling.
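A sketch of the oversampling comparison described above, built on synthetic data of my own rather than the original text image: make a star field, blur it to a FWHM that makes ~2"/px the ideal sampling, render it at 1"/px (oversampled), bin x2, resample back up with a cubic spline and look at the residual.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

rng = np.random.default_rng(1)
img = np.zeros((512, 512))
ys, xs = rng.integers(0, 512, 200), rng.integers(0, 512, 200)
img[ys, xs] = rng.uniform(0.2, 1.0, 200)            # point sources
img = gaussian_filter(img, sigma=3.2 / 2.355)       # FWHM 3.2 px at 1"/px -> ideal ~2"/px

binned = img.reshape(256, 2, 256, 2).mean(axis=(1, 3))   # bin x2 down to 2"/px
restored = zoom(binned, 2, order=3)                       # cubic spline back to 1"/px
print(f"residual std dev: {(img - restored).std():.4f}")
```

As in the post, the residual is small compared with the image range; any asymmetry in its histogram comes from the sub-pixel shift introduced by the resampling.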
  20. The speed of the comet combined with the exposure length? Comet stacking works by rejecting certain pixels from the stack. You can either align the subs on the comet - stars will trail - or on the stars - the comet will trail. Comet stacking does both: it takes the image of the comet from the stack aligned on the comet and the stars from the stack aligned on the stars. The issue is distinguishing what is star and what is comet - and it does that by examining the pixels in each stack and how much they change between subs. Sometimes, if the comet is too slow, there is not much change in star positions between subs and the algorithm can't tell whether something belongs to the comet (it stayed the same with respect to the comet) or is a star (moved enough with respect to the comet) - so you get the above mess. Not sure how you can make it better - are there any stacking parameters that you can tweak?
  21. I find it rather interesting that you both actually saw a difference between 0.9" and 1.1". I think most people would be hard pressed to see a difference between doubled sampling rates - for example 0.8"/px and 1.6"/px. Could you at least tell me what sort of difference you saw? Here is some text that has been blurred so that we hit the optimum sampling rate - one version sampled at 1.1"/px and the other at 0.9"/px. Do you see any difference, apart from the left image obviously being slightly smaller? To my eye it just seems a bit sharper than the image sampled at 0.9"/px (the larger one), although the smaller letters might be harder to read without glasses.
  22. I would also like to see the difference - do you mind showing it to me?
  23. Let's be constructive. A goto dobsonian, as is, will indeed be very poor for DSO imaging. It will be very good for planetary / lunar imaging if you add a planetary camera and a barlow lens. This technique is called lucky imaging and you'll need to do a bit of research on how it is properly done (how to record the movie/sequence of subs, how to pre process them, and how to stack them and sharpen the results afterwards). All of that if you are interested in planetary - and why not? Getting yourself a planetary camera is generally a good step, as it can be used as a guide camera later on if and when you decide to move onto DSO imaging.

For your budget of £350 you can't do much, really. The first option would be to purchase a star tracker or an AzGTI mount. The AzGTI will need modifications to be used in EQ mode - which means a firmware upgrade and some accessories like a wedge and a counterweight shaft + counterweight. It is a bit of DIY. The Star Adventurer, on the other hand, is pretty much a self contained package, but it is a star tracker and it is meant to hold a camera + lens and only the smallest of imaging scopes. In either case, these mounts are better suited for a camera + lens or a very small scope than anything serious.

If you want a proper mount - well, that is somewhat over your budget. If you purchase second hand you might be able to get something decent - maybe even new, but it won't have goto. You should be able to fit an EQ5 mount + tracking motor into £350. And that is it. Use either of the two with a camera + lens to get started in AP.
  24. Here it is - simple RGB ratio + gamma 2.2 (no color calibration): Quite dull looking? It turns out - most of these galaxies are indeed just yellow / orange-ish. Btw - very nice data set.
  25. I just quoted @CloudMagnet and pointed out what can be seen in that particular processing. I downloaded the image and tried to see if I could locate anything, but my best effort did not produce anything worthwhile compared to this image.