Everything posted by vlaiv

  1. Want to go bit by bit and see where you get stuck? From a point that is very distant (for our purposes this can be a star - like, really distant), the incoming "rays" of light are parallel. Do you understand why this is?
  2. Here it is - this image explains it all. If you have a star / point at some angle alpha to the optical axis, the following will happen:
  - all rays from that point will be parallel before they reach the aperture - at the same angle
  - after the objective they start to converge and finally meet at a focal point - all light from the original star falls onto a single point on the focal plane. This is why the star is in focus on the camera sensor (provided it is focused well), and it also means that the field stop won't remove any light - it only limits the angle that can be seen, since a bigger angle means the point on the focal plane is further from the center.
  - then the rays start to diverge (they happily go on their own way, and since they came to a point they now continue to spread)
  - the eyepiece catches those rays and makes them parallel again.
  A few things to note: the angle is now different - that is magnification. All the parallel rays occupy a certain "circle" - earlier that was the aperture, now it is the exit pupil. The ratio of angles and the ratio of sizes of these pupils is the magnification (a quick numeric sketch follows below).
  - The eye works the same way as a telescope - it is a device that again focuses parallel rays. Thus the field stop can't act as an aperture stop, because all rays from the "aperture" have been squeezed into a single point on the focal plane.
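A quick numeric companion to the ratios mentioned above, with made-up telescope values (not from the post):

```python
# Minimal sketch: magnification as a ratio of focal lengths,
# and the matching exit pupil (illustrative numbers only).

aperture_mm = 100.0          # objective aperture (the "entrance pupil")
focal_length_mm = 1000.0     # objective focal length
eyepiece_fl_mm = 10.0        # eyepiece focal length

magnification = focal_length_mm / eyepiece_fl_mm
exit_pupil_mm = aperture_mm / magnification

print(f"Magnification: x{magnification:.0f}")   # x100
print(f"Exit pupil: {exit_pupil_mm:.1f} mm")    # 1.0 mm

# The same number also equals the ratio of output to input angles
# for an off-axis star, which is the point made in the post above.
```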
  3. I'll be interested in this thread too, so I'm replying in part to follow and in part to say that I'm certain a single photon does not contain full information about an object. A good start would be to define what "full information about an object" means - and whether there even is such a thing.
  4. The other graph, with my added red line, is why planetary imagers don't worry about CO, and the fact that the human eye/brain can't sharpen is why visual observers prefer a small CO. There is no information adding going on there - only information removal, and that is the point where the graph line hits 0.
The graph reads as follows: for a given frequency (X axis), an image observed through the telescope and decomposed into frequencies (Fourier transform) will have that particular frequency attenuated by a certain amount. It's as if someone, selectively per frequency, put some sort of ND filter in place and removed a percentage of light at that frequency. What you end up with is slightly less intense light at that frequency - in fact, the height of the graph shows what percentage of the light remains.
As long as there is some light left, and as long as you know this curve (or can guess it), you can do the inverse on the recorded image: take that frequency and multiply it by the inverse of the number from the graph. If a frequency was halved, multiply it by 2 and you get the same intensity as before. The only place you can't do that is where something was multiplied by 0. That is information removal, because nothing can get the original back: X * 0 = 0 => X = 0/0, and division by zero is undefined, so X could be anything - the information about the value of X is lost forever.
In the classical picture you can restore the image fully. In the quantum picture you can only restore it up to a point, since there is uncertainty involved, and that uncertainty is noise. When the signal is very low, SNR is very low, and amplifying that signal won't increase SNR - it stays the same. For that reason the red curve I drew above is not a rectangle - it is still a bit curved - you can't beat the noise.
The more you stack and the better SNR you have, the more sharpening you can perform - but there is always a limit, or rather a two-part limit: one is information loss - you can't restore information that has been lost to multiplication by 0 - and the other is that you can't restore a signal with poor SNR (attenuated so much that the noise is bigger than the signal and SNR is <1 - in fact SNR needs to be >3 in order to start recognizing stuff). Makes sense? (A small numeric sketch of the inverse-restoration idea follows below.)
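A minimal sketch of the "multiply back by the inverse" idea above, using a toy 1-D signal and a made-up attenuation curve (all values illustrative):

```python
import numpy as np

# A 1-D signal is attenuated per-frequency by a made-up MTF-like curve.
# Wherever the curve is non-zero we can divide it back out; where it is
# exactly zero the original value is gone for good.

rng = np.random.default_rng(0)
signal = rng.normal(size=256)

freq_signal = np.fft.rfft(signal)
mtf = np.linspace(1.0, 0.0, freq_signal.size)   # toy attenuation, hits 0 at the end

blurred = np.fft.irfft(freq_signal * mtf, n=signal.size)

# Attempt restoration: divide by the MTF where it is safely above zero.
restorable = mtf > 1e-6
restored_freq = np.fft.rfft(blurred)
restored_freq[restorable] /= mtf[restorable]
restored = np.fft.irfft(restored_freq, n=signal.size)

# Everything except the zeroed frequency comes back. With noise added,
# dividing by a tiny MTF value would amplify that noise instead.
```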
  5. There are two things that I would like to address here. Mr Peach is right that astrophotography can resolve more detail on planets - but not for the reasons listed.
First, let's address what resolve means. In astronomy: "separate or distinguish between (closely adjacent objects)". Resolving the Encke Division is not really resolving it at all. You are resolving two pieces of ring A, not resolving the division. Similarly, you resolve two stars. I'm not talking here about light or lack of light - that is not the point. You can still resolve two dark features - but you need to distinguish between two dark features. If there were two divisions next to each other and you recorded two divisions - then you resolved two divisions. Recording the contrast drop from the Encke division is not resolving, in the same sense that seeing a single star instead of two stars is not resolving that star. Maybe there are 50 stars in there? Maybe 10000. How can we tell? We can't, because we did not resolve it. Once you resolve a pair of stars you can tell there are at least two stars there. Maybe there are more - but we did not resolve them. In that sense, telescopes don't actually resolve the Encke division - but they do record it, in the same sense that you will see a star if you observe a double star but don't resolve it.
Now onto the other part - photography will record / resolve more than the human eye can with the same telescope. This is due to two things - the first is "frame rate" and the atmosphere. We watch movies at 30fps and can't tell it is a series of single frames in succession - this tells you that our eye/brain "exposure" time is at least 1/30s. In fact, some people can see faster than this - there is an anecdote about someone seeing the M1/Crab pulsar pulsate in a large telescope. The person was a pilot and could tell it apart from atmospheric effects. The Crab pulsar pulsates with a 33.5ms period - which means that light and dark each last for half of that. Some people can see flicker at 30fps. In any case, exposures for planetary photography are often 5-6ms. That is much faster, and it is used to freeze the seeing. In other words, the human eye sees more atmospheric motion blur than the camera does, due to "exposure" length.
The second thing is that images of planets are processed - contrast is enhanced and sharpening is performed. Detail is really about contrast. For that reason resolving power is defined with two high-contrast features - black sky and very bright stars. Sharpening can sharpen telescope optics as well. That is something the human eye/brain can't do (efficiently? I'm sure there is some sharpening involved - but not the way we think of it - the brain does all sorts of funny things to the image that we see). Here is another screen shot from Mr Peach's website, which talks about the sharpness of optics. This is the MTF of a telescope (with different levels of spherical aberration in this case). What it does not tell you is that image processing, sharpening in particular, tends to do this: the graph shows how much detail loses contrast. Once the line reaches zero, no more detail can be seen, as all contrast has been lost. If you look at the obstructed telescope diagram vs the unobstructed one, you'll see something like this: the more central obstruction you add, the bigger the "dip" in this curve and the more contrast is lost. This is why we say that clear aperture gives the best contrast to the image (a small numeric sketch of these curves follows below).
Now back to sharpening - sharpening just straightens this curve and restores more contrast and detail than the scope can deliver to the human eye, which can't sharpen. How much you can restore this curve depends on how much noise there is in the image, because as you raise the curve you raise the noise as well. Important point - once the curve reaches 0, there is no straightening it up - zero just means zero - information lost, with no way to recover it by sharpening (or in math terms - any number times 0 is 0 - you can't guess the original number you multiplied by zero if your result is zero - it could be any number).
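As a side note, curves like the obstructed vs. unobstructed MTF described above can be reproduced numerically from the pupil shape alone; a minimal sketch, with arbitrary pixel sizes and a ~33% central obstruction assumed purely for illustration:

```python
import numpy as np

# Minimal sketch: MTF of a clear vs. centrally obstructed circular
# aperture, computed numerically (pixel sizes are arbitrary).

N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)

def mtf_curve(obstruction_ratio):
    pupil = (r <= 60).astype(float)                   # clear circular pupil
    if obstruction_ratio > 0:
        pupil[r <= 60 * obstruction_ratio] = 0.0      # blank out the central obstruction
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))) ** 2
    otf = np.abs(np.fft.fft2(np.fft.ifftshift(psf)))  # FT of the PSF
    otf /= otf[0, 0]                                  # normalize so MTF(0) = 1
    return otf[0, :N // 2]                            # 1-D cut from DC to high frequency

clear = mtf_curve(0.0)
obstructed = mtf_curve(0.33)                          # ~33% CO by diameter

# Plotting both cuts shows the obstructed curve dipping below the clear
# one at mid frequencies (the contrast loss discussed above), while the
# cut-off frequency stays the same.
```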
  6. Very interesting point about resolution. I have often wondered about that in another context - the secondary mirror on folded-design scopes, like a Mak. How does it impact resolution? For the human eye, we can do the math. In fact, I've seen the math done and it matches experience well. The resolution of the human eye is about 1 arc minute. If you take the size of the photoreceptor cells and their density, and work out the Airy disk for a regular eye pupil of 5mm, you get the same number - the resolution should be 1 arc minute. This means that we have diffraction-limited eyesight.
  7. Mirrors and gentle use of delicate blower at their finest
  8. Not sure what is going on - but it looks scary
  9. I don't have said tripod, but I suspect all you need to do is press on the tray until that central piece on the spider rises enough to clear the tray so the tray can be rotated - then rotate it clockwise until the "ears" of the tray lock into the latches on the spider?
  10. Most programs will let you input the new size of the image as a percentage of the old one - so use 360% in that case. Mind you, 3.6 only applies for 600dpi and 18" if the diameter of the setting circles in pixels is 3000 - you need to measure it in pixels using a measure tool. What software are you using? You can use something as simple as IrfanView to do everything - measure the circle diameter and rescale the image.
  11. I might be wrong, but this is how I would try it. Measure how many pixels you have across the diameter of the template - whatever you want to be 18". Let's say it is 3000px and you want that to be 18". Set the printer resolution to 600dpi - that is standard and most printers will happily print it. 18" is therefore 18 * 600 = 10800 points. You need to enlarge your image by a factor of 10800/3000 = x3.6, or 360% in size, to match (the arithmetic is sketched below). If you are going to print it yourself out of smaller bits - just take a couple of crops of the enlarged image in Photoshop or similar and print those. Just make sure to tell the printer you want 600dpi. (You can do the above with another dpi setting - like 300dpi or 1200dpi - just follow the same math with the numbers changed.)
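The same arithmetic as a tiny sketch (numbers taken from the post; substitute your own measurement and dpi):

```python
# Scale factor needed to print a template at a given physical size.
measured_diameter_px = 3000      # diameter of the circle measured in the image
target_diameter_inch = 18        # desired printed diameter
printer_dpi = 600                # printer resolution

target_px = target_diameter_inch * printer_dpi          # 10800
scale_percent = 100 * target_px / measured_diameter_px  # 360.0

print(f"Resize the image to {scale_percent:.0f}% and print at {printer_dpi} dpi")
```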
  12. Very nice! I was rather surprised to find out that there is a simple mathematical solution for a diffraction-limited system. I did simulations and came up with a very similar figure. I'm talking about the planetary critical sampling rate, or this part in the post you linked to: the results of my simulation gave 4.8 samples per Airy disk diameter - or 2.4 per Airy radius. Here is a thread that I made about it some time ago: However, there is a problem with this statement: I don't really understand what "blur diameter" is supposed to be. If you follow the above thread - and I think I wrote about it elsewhere - I use the same approach to reach the FWHM/1.6 figure. The only difference is that I don't clamp with the MTF, as it is significantly "smaller" compared to the seeing blur. Most telescopes have resolving power far greater than the atmosphere allows in long exposures, so it is not necessary to take that resolution into account, as we will stop much sooner. (The 4.8 figure also drops out of a short frequency-domain argument, sketched below.)
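For reference, the 4.8 samples-per-Airy-diameter figure also follows from the standard diffraction-limited relations (cut-off frequency D/λ, Airy diameter 2.44 λ/D); this is a side derivation, not quoted from the linked thread:

```latex
\nu_c = \frac{D}{\lambda}, \qquad
\Delta\theta_{\mathrm{Nyquist}} = \frac{1}{2\nu_c} = \frac{\lambda}{2D}, \qquad
\theta_{\mathrm{Airy}} = 2 \times 1.22\,\frac{\lambda}{D} = 2.44\,\frac{\lambda}{D}
\quad\Longrightarrow\quad
\frac{\theta_{\mathrm{Airy}}}{\Delta\theta_{\mathrm{Nyquist}}}
  = \frac{2.44\,\lambda/D}{\lambda/(2D)} = 4.88
```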
  13. As far as I remember, it should switch to sidereal+PEC - it is one of the tracking modes?
  14. I think that again we got derailed from what we are trying to accomplish here, and I would like to point out a few things: 1. The proposal for the split-image processing approach was to show how much difference, if any, there is in detail - in support of the above theoretical approach, and because you mentioned that you actually see a difference - that would give us the chance to inspect that difference in detail. 2. I'm not trying to push a certain approach on you. If you feel comfortable doing it like you have done so far and are happy, just continue to do so. I would personally bin the image, process it like that, and leave it at that resolution. I would not upsample it back, as I think there is no point in doing so. 3. In my view drizzling is not going to produce anything sensible in this case (but that is just my view).
  15. I have no idea - not much difference to my eye in these two images. Maybe just a bit of processing - not an equal stretch. The bottom one has some areas darker and hence a bit more contrast. I wanted to recommend one approach that will take processing out of the equation - it's a bit more work for you, but I wonder what the results will be. Take the binned / upsampled image and align it to the original image - while both images are still linear. Select half of the binned / upsampled image and simply paste it over the original image, thus making a "split screen" of the linear images (a small sketch of this follows below). Then just process that. That way we can be certain that any stretching, denoising and sharpening is applied exactly the same to both images.
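A minimal sketch of that split-screen step, assuming both linear stacks are already aligned, the same size, and saved as FITS (astropy and the file names are assumptions here, not from the post):

```python
import numpy as np
from astropy.io import fits   # assumed available; any FITS reader would do

# Build a "split screen" from two aligned linear images:
# left half from one stack, right half from the other.
original = fits.getdata("original_stack.fits").astype(np.float64)        # placeholder name
resampled = fits.getdata("binned_then_upsampled.fits").astype(np.float64)  # placeholder name

split = original.copy()
half = split.shape[1] // 2
split[:, :half] = resampled[:, :half]

fits.writeto("split_screen.fits", split, overwrite=True)
# Stretch / denoise / sharpen this single file, so both halves receive
# exactly the same processing.
```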
  16. I'm having trouble distinguishing which one is which. Which one is binned then upsampled, and which one is the original?
  17. Not necessarily. For every two inches there is roughly a x2.8 price increase. A 12" TS would therefore be 2600 * 2.8 = 7280€.
  18. Thanks. I'm not 100% sure what you are referring to as differences seen by others. If you mean that they see, for example, 1.1"/px containing less information than 0.9"/px (which could be true, depending on the actual blur, but would hardly be seen by a human in an image), then I don't think it is due to software. I can't imagine what a piece of software could do to make it so.
On the other hand, I do have a candidate explanation for why people might think that over sampled images contain more information / detail than they do in reality. It is pretty much the same mechanism that causes us to see the man in the moon or shapes in the clouds. Our brain is good at spotting patterns - so good that it will see patterns where there are none. It will pick up patterns from the noise - something that resembles something we have seen before. Over sampled images are noisy and "more zoomed in". The brain just tries to make sense of it - since the image is more zoomed in, it expects to see more detail even if there is none. Noise is often interpreted as sharpness and as detail - because our brain wants to see detail and expects detail if the image is already zoomed in that much.
A smooth image with no noise will look artificial - the same image with added noise will look better, although no true detail has been added. The example above is a good one - the smooth star looks very zoomed in and artificial - add a bit of noise and it looks much more like the original star - although both noise patterns are random and nothing alike, our brain recognizes that randomness as similarity. We often mistake noise for sharpness, because noise has those high frequency components. Here is an example (a small sketch of how to make such a comparison follows below): Which image looks more blurred? It is the same image - the left one just had some random noise added.
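A minimal sketch of how such a comparison can be made, assuming Pillow is available and with a placeholder file name and an arbitrary noise level:

```python
import numpy as np
from PIL import Image   # assumed available

# The same image with and without added random noise, for the
# "noise reads as sharpness" side-by-side comparison described above.
img = np.asarray(Image.open("example.png").convert("L"), dtype=np.float64)  # placeholder file

rng = np.random.default_rng(1)
noisy = img + rng.normal(scale=10.0, size=img.shape)   # arbitrary noise level
noisy = np.clip(noisy, 0, 255).astype(np.uint8)

Image.fromarray(noisy).save("example_noisy.png")
# Viewed side by side, the noisy copy tends to look "sharper"
# even though no detail was added.
```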
  19. Mount PE needs to be smooth to be able to guide with long exposures like 6 or 8s. I tried that with my Heq5 after the belt mod and found that at around these exposure lengths the PE starts to make my guiding worse - too much time passes between corrections and the mount deviates too much. For rough mounts like the Heq5 it's best to keep it under 4-5s.
  20. The red arrow shows the drop-down that you can use to select exposure length (the slider is for brightness of the view, I believe - it won't change the exposure length, but I'm not sure). The blue arrow shows the stats - RA error in pixels (and in arc seconds in parentheses), DEC error and combined (total) error. You want the total error in arc seconds to be as low as possible - but 0.5" is a realistic bottom value - it is very unlikely you'll get better than that on a Heq5 (a Mesu 200 does 0.2" for example - but costs x6-7 more). This value depends on whether you entered the pixel size and focal length of your guide scope correctly - it's worth double checking those as well.
  21. On an average night you can expect 0.6-0.7" RMS - and as far as I can tell, you are already there? On a good night it will go down to 0.5" RMS total - but for that you need to smooth out the seeing, and this is why you want a 3s exposure - it will average out the seeing but is still short enough to pick up any irregularity in the mount motion.
  22. What mount is this? The only recommendation I can give you is to use a bit longer exposure - maybe 3s or 4s. That will smooth out the seeing. Otherwise, that guiding is OK for a mount like the HEQ5/EQ6 or similar.
  23. You have the data - you already processed at bin x1 or bin x2 - just take those stacks and bin them until you get 2"/px and then process them again. Software binning is available in PI as integer resample (hopefully it works ok) - just select the average method.
This does warrant additional theoretical explanation. I'll try to keep it to a minimum, but still provide a correct and understandable explanation of what is going on.
Blur is convolution of a function with another function (we will call this other function the blur kernel) - this is included for correctness, and if you want you can look up convolution as a math operation. The blur kernel in our case is well approximated by a Gaussian shape - it is the point spread function and also the star profile (all three are connected and the same in our case). Convolution in the spatial domain is the same as multiplication in the frequency domain. In other words, if you blur by some kernel (convolution), you are in fact filtering (multiplication of Fourier transforms - multiplication in the frequency domain). This is the crucial step for understanding what is going on, since convolution is not easy to visualize and analyze, but multiplication is - we are used to it. Another important bit of information is that the Fourier transform of a Gaussian profile is another Gaussian profile. This means that our blur is in fact a filter that has a Gaussian shape.
We now have the means to understand what happens to data when it gets blurred. We just need to add some noise into the mix. Here is a screen shot that will be important: I had to do it for my own benefit - I was not sure this would happen (I thought it would), so I needed to check - we want to be right in our explanation. The first image is just pure random noise (Gaussian noise). The second image is the Fourier transform of the first, and the third is the second plotted as a 2d function. I wanted to show that if you have random noise, it will be equally distributed over all frequencies (a small numeric check of this follows below).
Now we have this - and it explains all of the above, if we just analyze it right. The curve represents our filter response in the frequency domain - a Gaussian (as the FT of the Gaussian blur kernel) - and red represents noise distributed over frequencies. Left on the X axis are low frequencies, right are high frequencies. At 0 this graph is 1, or 100% high. As you go higher in frequency, the value of the graph falls and approaches 0. Remember we are multiplying by this:
A number multiplied by 1 gives the same number.
A number multiplied by less than 1 - say 1/2 - gives a smaller number.
A number multiplied by 0 gives 0 (regardless of what the original number is - we lose its value).
At some point the multiplication factor gets very low - like 1%, or 0.01 as a number - and this means that this frequency and all frequencies above it are simply very low - in fact, at some point they become lower than the noise floor on the graph. We don't need to consider those frequencies - there simply is no meaningful information there any more - it has been filtered out by the blur, and the noise is probably larger than the information. SNR at these frequencies is less than 1 (and progressively less as frequencies get higher).
Now comes the Nyquist theorem, which says you need to sample x2 per the highest frequency that you want to record. This is how blur size - FWHM - relates to sampling rate. Simple as that.
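A minimal numeric check of the two facts used above - white noise has a roughly flat spectrum, and the Fourier transform of a Gaussian is again a Gaussian (all values arbitrary):

```python
import numpy as np

# Fact 1: pure random (white) noise has a roughly flat power spectrum.
rng = np.random.default_rng(42)
noise = rng.normal(size=4096)
noise_spectrum = np.abs(np.fft.rfft(noise))
# Apart from random scatter, the mean level is about the same at low
# and high frequencies:
print(noise_spectrum[1:100].mean(), noise_spectrum[-100:].mean())

# Fact 2: the Fourier transform of a Gaussian is again a Gaussian.
x = np.arange(-512, 512)
sigma = 8.0
gauss = np.exp(-x**2 / (2 * sigma**2))
gauss_ft = np.abs(np.fft.rfft(np.fft.ifftshift(gauss)))
# gauss_ft[k] is proportional to exp(-2 * (pi * sigma * k / 1024)**2):
# another Gaussian, just with the inverse width.
```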
We take a Gaussian profile of that FWHM, do the Fourier transform of it to get another Gaussian, and look at what frequency that other Gaussian falls below some very small value - like 1% - there is no point in trying to record higher frequencies than that (the recipe is sketched below).
What does this have to do with SNR and sharpening and all? Let's take another look at the above graph. We decided on our sampling rate (red vertical line) - we record only frequencies below it, those to its left. An ideal filter would be the red box - all frequencies up to our sampling frequency passed fully (multiplication by 1 gives back exactly that number), and after that frequency, 0 - we simply don't care about higher frequencies since we already sampled at our sample rate. However, the blur on our image does not act like that. It gradually falls from 1 down to close to 0 - the blue line. In the process it attenuates all the frequencies, multiplying each by a certain number less than 1. We lose all the information in the shaded part of the graph. Well, we don't actually lose it - it is there, just attenuated - multiplied by a number less than 1 (the height of the blue graph).
In order to restore the full information, we need to multiply each of these frequencies by the inverse of the number it was originally multiplied by. This means that if a frequency was multiplied by 0.3, we need to multiply it by 1/0.3 (or in other words - divide it by 0.3, the inverse operation of multiplication) to get the original number back. That is what sharpening does - and the Gaussian curve above explains why we can sharpen: because our blur is not simply a frequency cut-off, it is a gradual filter that slowly reduces higher frequencies - we sharpen by restoring those frequencies until we reach the limit set by our sampling rate.
The last bit is to understand how noise is impacted by this - just look at the previous filter image, the one that shows both the Gaussian and the noise. Each time we restore a certain frequency - push the blue line back up towards 1 - we do the same with the red line - we push it up equally (as we make a frequency component larger, we do so with the associated noise as well - we increase the noise at that frequency). This is why sharpening increases noise in the image, and that is the reason you can only sharpen if you have high enough SNR. If not, you will just amplify the noise.
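A minimal sketch of that recipe; the cut-off threshold is a judgment call and moves the resulting factor around (with a 10% threshold this happens to reproduce roughly the FWHM/1.6 figure quoted earlier, a stricter threshold gives a finer sampling rate):

```python
import numpy as np

# Take a Gaussian blur of a given FWHM, look at its (also Gaussian)
# frequency response, find where it drops below a chosen threshold,
# and Nyquist-sample that cut-off frequency.

fwhm = 3.0                             # arcseconds, example value
sigma = fwhm / 2.355                   # Gaussian sigma from FWHM
threshold = 0.1                        # attenuation level treated as the cut-off

# Frequency response of the Gaussian blur: exp(-2 * pi^2 * sigma^2 * f^2)
f_cut = np.sqrt(-np.log(threshold) / (2 * np.pi**2 * sigma**2))

sample_spacing = 1.0 / (2.0 * f_cut)   # Nyquist: two samples per cycle
print(f"cut-off: {f_cut:.3f} cycles/arcsec, "
      f"sample every {sample_spacing:.2f} arcsec "
      f"(= FWHM / {fwhm / sample_spacing:.2f})")
```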
  24. I don't upsample images - I leave them at the resolution that is compatible with the level of detail that has been captured - or at least I advocate for people to do it like that. What would be the point of upsampling? In one of my posts above I showed how easy it is for anyone viewing the image to upsample it for viewing - just hit CTRL + in the browser and hey presto, you get a larger image without additional detail. Why would you make that your processing step when anyone can do it if they choose to?
If anything, I often bin data if it is over sampled - it improves SNR (removes noise; by the way, I added noise above just to show you that it is the same thing - with a smooth star it is not obvious right away, since the original star is noisy and looks a bit different and sometimes it is hard to tell what is detail and what is noise - so I added noise to show you that the original image does not contain detail, it is just a smooth star profile + noise) and it looks nicer (a small binning sketch follows below).
There is an additional bonus in all of this that we have not discussed. Even at the proper sampling rate, the data is still blurred. Because it is blurred we get to sample it at that rate, but we can use the frequency-restoration technique to sharpen the image further, show all the detail there is at that level and make the image really sharp. In order to do this we need very good SNR, and one way to get that SNR is to bin in the first place - or not to over sample, as that spreads the signal and lowers SNR.
Look at this image captured by Rodd and processed by me (luminance only) - here is the original thread: And here is the processing of the luminance (this is actually a crop of the central region): In my book this is the opposite of empty resolution - fully exploited resolution. This data is also binned x2. If you look at the other processing examples, you will find full-resolution versions of this image - look at them at 1:1 and you'll see "empty resolution". This data had very high SNR (a lot of hours stacked) and that enabled me to sharpen things up to this level at this resolution.
I feel that we are again digressing into what looks nice vs what actually needs to be recorded in the first place. The math above and the examples show that you don't need a high sampling rate if your FWHM is at a certain level. Whether you choose to use high resolution is up to you, really. If you like it like that - larger and all - again, up to you, and I'm certain there is no right or wrong there. However, there is a right sampling rate for a certain blur if you want to optimize things.
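A minimal sketch of 2x2 average (software) binning on a noisy flat patch, showing the SNR gain (all numbers arbitrary):

```python
import numpy as np

# 2x2 average binning: the standard deviation of a flat noisy patch
# drops roughly by half, i.e. SNR improves by about x2.

rng = np.random.default_rng(7)
patch = 100.0 + rng.normal(scale=10.0, size=(512, 512))   # signal 100, noise sigma 10

h, w = patch.shape
binned = patch.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(f"before: std {patch.std():.2f}")    # ~10
print(f"after:  std {binned.std():.2f}")   # ~5
```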
  25. This gives me a completely new perspective on how cheap a 4-5" Mak is - unbelievable value.