Everything posted by vlaiv

  1. No need to use that - a bit of wave mechanics is enough to describe what is going on, and it is fairly "easy" to understand. It is very similar in nature to the double slit experiment - light taking different paths comes together at certain places, and the phase of the light determines whether it reinforces itself or destructive interference happens. A bit of math will tell you that you need to integrate over the aperture, and very soon you'll see that what you have derived is actually the Fourier transform of the aperture. Now you have a tool to examine Airy patterns of different types of aperture (obstructed, with spider support, hexagonal, square, ...). If you go further, you'll also notice that a further Fourier transform of a particular Airy pattern gives you the MTF - this is how the Airy pattern affects the image produced by the aperture. This will explain why a telescope has a maximum resolving power and why an unobstructed aperture gives better contrast for visual. There is a field of optics called Fourier optics that deals with all of this (a small numerical sketch follows): https://en.wikipedia.org/wiki/Fourier_optics
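
A minimal numerical sketch of that chain, assuming NumPy is available (the grid size and aperture radius are arbitrary illustrative choices): the far-field amplitude is the Fourier transform of the aperture, its squared magnitude is the Airy pattern (PSF), and the magnitude of a further Fourier transform of the PSF is the MTF.

```python
import numpy as np

N = 512                                              # grid size (arbitrary)
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
aperture = (x**2 + y**2 <= 32**2).astype(float)      # circular aperture, radius 32 px

# Far-field amplitude is the Fourier transform of the aperture
amplitude = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
psf = np.abs(amplitude) ** 2                          # Airy pattern (intensity)
psf /= psf.sum()

# MTF is the magnitude of the Fourier transform of the PSF
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()
```

Change the aperture mask (add a central obstruction, spider vanes, or make it hexagonal) and rerun to see how the Airy pattern and MTF respond.
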
  2. You don't really need different magnifications, as magnification does not change the properties of the Airy disk unless taken in relation to something else. Magnification on its own is a scale factor - a unit conversion, call it what you will - and has absolutely no impact on the Airy disk by itself. We use the term magnification to signify that the scale is in relation to human vision. We use the term sampling rate to signify that it is in relation to a camera sensor. In fact, magnification and sampling rate are two slightly different concepts - the first is about angular sizes - magnification specifies the ratio of angular sizes of things - while sampling rate relates to projection - the ratio of angular size to length, or to pixel size (which represents a unit length), in linear space. A small sketch of the difference follows.
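
A minimal sketch of the two quantities (the focal lengths and pixel size are made-up illustrative numbers): magnification is a dimensionless ratio of angular scales, while sampling rate maps angle onto a physical length at the focal plane.

```python
# Magnification: ratio of angular sizes (dimensionless)
scope_fl_mm = 1000.0       # telescope focal length (illustrative)
eyepiece_fl_mm = 10.0      # eyepiece focal length (illustrative)
magnification = scope_fl_mm / eyepiece_fl_mm          # x100

# Sampling rate: angle projected onto a unit length (one pixel) at the focal plane
pixel_um = 3.8             # pixel size (illustrative)
arcsec_per_px = 206.265 * pixel_um / scope_fl_mm      # ~0.78 "/px

print(f"magnification: x{magnification:.0f}")
print(f"sampling rate: {arcsec_per_px:.2f}\"/px")
```
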
  3. I think it might be photon shot noise. The eye/brain combination is very good at filtering this out - at very low light levels, when it should be obvious, we never see it. I think that this mental noise filter is engaged in particular circumstances and fails in others - most notably with Ha in bright daylight, when the eye/brain is "not expecting" it to happen because there is plenty of light around. It happened to me once with an imaging Ha filter - not a solar one, just a 7nm wide 1.25" filter. I was in a dim room, it was really sunny outside, and I looked through the filter at the outside scene. It looked a bit like the regular scene (in deep red color) - with old untuned cathode-ray-tube TV type noise superimposed over it. Anyway, the noise filter thing is the explanation I came up with - I might be wrong and it could be something entirely different, but it is the only feasible explanation I've found so far.
  4. I have a couple of notes on this one - just random points that ought to be taken into account when talking about such phenomena:
     - All stars form an equal Airy disk (this is not strictly true, as stars have different spectra and Airy disk size depends on wavelength - but we can say they are equal for the purpose of this discussion), because it is a property of the telescope, not of the star.
     - At threshold levels of vision, human vision does not quite behave like a smooth function - we can say that it is non-linear (in fact, our sensation is always non-linear, but that is not what is meant here - it is non-linear in physical response). Hence, spreading objects of certain brightness equally will result in a change in contrast between them. This is why (together with other factors, like how well we perceive contrast depending on object size) there is a magnification that shows a particular galaxy best under given conditions - it darkens the background the most and the galaxy the least, creating the best contrast.
     - We have to look at the profile of the Airy pattern - it is not flat/uniform, it is rather pointy. While at a certain magnification we can resolve the whole disk, with faint stars the parts of the Airy profile that are bright enough to trigger a response might not be resolved and still appear point-like - the profile sketched below can probably explain it better: if we don't see the rest of the Airy disk, the region above the top of the peak will not be resolved even if the full disk is.
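
A minimal sketch of such an Airy profile, assuming SciPy and Matplotlib are available (the dashed threshold level is an arbitrary illustration, not a measured value): the intensity follows (2*J1(x)/x)^2, with the pointy central peak described above.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import j1

x = np.linspace(1e-6, 15, 1000)        # start just above 0 to avoid division by zero
intensity = (2 * j1(x) / x) ** 2       # Airy pattern radial profile

plt.plot(x, intensity)
plt.axhline(0.1, linestyle="--", label="example detection threshold")  # illustrative level
plt.xlabel("radial distance (dimensionless)")
plt.ylabel("relative intensity")
plt.legend()
plt.show()
```

Only the part of the curve above the dashed line would trigger a response for a faint star - and that part can stay narrower than the eye's resolution limit even when the full disk is wider.
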
  5. There are at least two or three different meanings of a star being a point source. I've mentioned this before - a star is a true point source for most intents and purposes. Let's take a solar-size star that is very close to Earth - say 10 ly away. That is a diameter of 1.4e+6 km at a distance of 9.461e+13 km. The angular size of this object is 2*arctan( (diameter / 2) / distance ). We can use the small angle approximation tan x = x, so the angle is simply diameter / distance, in radians. That gives about 1.48e-8 radians, or converted to arc seconds, ~3.0e-3" - 0.003" of angular size. And this is a star that is very close - we can see stars that are hundreds and thousands of times more distant. In fact, only a handful of stars have ever been resolved: https://en.wikipedia.org/wiki/List_of_stars_with_resolved_images All others remain point sources (see the small calculation below). Then there is the notion of a point-like source for visual observation - this is what most other participants in this thread assume. And finally there is the telescopic image of a point source - the Airy pattern/disk - which depends on telescope aperture (size) and on how good the telescope optics are (shape). Two different-sized scopes will show different star images (in size and shape - but shape changes are very small and often difficult to notice even at very high magnifications). Finally, there is the atmosphere, which arguably has the greatest impact on star shapes in a telescope. To recap:
     - A star is a point source due to the vast distances involved.
     - A star is an "extended" source in a telescope due to the Airy pattern and atmospheric influence - but that behaves, for the most part and to most observers, as a point source again, because of the way the eye and camera sensors work.
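
A minimal sketch of that small-angle calculation (values as in the post; 1 ly taken as 9.461e12 km):

```python
import math

diameter_km = 1.4e6            # solar diameter
distance_km = 10 * 9.461e12    # 10 light years in km

angle_rad = diameter_km / distance_km           # small-angle approximation
angle_arcsec = math.degrees(angle_rad) * 3600   # radians -> arcseconds

print(f"{angle_arcsec:.4f} arcsec")  # ~0.0031" - far below any telescope's resolving power
```
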
  6. I think that you are looking at it the wrong way: the scope might cost £170 for visual - but it costs £310 for AP (CC price included). The Evostar 72 costs £270 for visual, but £455 for AP - you need to add the price of the field flattener. There are scopes that you can use "as is" - without additional optics - and yes, you can get one cheap - but it is another level of imaging in terms of resolution and imaging process - something like this: https://www.firstlightoptics.com/ioptron-telescopes/ioptron-photron-6-ritchey-chretien-telescope.html Then there are scopes that are not as cheap and don't need additional optics, but still keep you in the "ballpark" of the 130PDS - something like this: https://explorescientificusa.com/products/152-maksutov-newtonian
  7. First, let's examine the Airy disk profile: I've marked with red markers the size of the Airy disk, and with yellow markers the points where most of the light of the Airy disk is concentrated - this is what contributes to the actual amount of light. If you want to be very specific about how much light you want to include, you can make a Gaussian approximation of the Airy disk function and then use the exact radius depending on the amount of light you want contained within it. Let's say, for the sake of argument, that half of the diameter carries the bulk of the light from the Airy pattern. That is 0.7". You don't even need the above calculation - it is enough to know that the resolving power of the human eye is 1 minute of arc. You need x85 magnification for a significant section of the Airy disk to reach 1 arc minute - I've already written this above. The focal length of the human eye is about 20mm (source: http://labman.phys.utk.edu/phys222core/modules/m8/human_eye.html ) and a rod cell is about 2-3um in diameter (sources: https://en.wikipedia.org/wiki/Rod_cell and https://www.ncbi.nlm.nih.gov/pubmed/1427131 ). This means the sampling rate of the human eye is about half a minute of arc, which in turn gives about 1 minute of arc resolution. In order for the main portion of the Airy disk to be significantly spread over the rods, it needs to be about 1.5 arc minutes in diameter - that way we can roughly say it will be spread over 9 instead of 4 cells. It will be dimmer by 9/4 in linear light = x2.25. That is what happens when you switch from x85 to x130 in an 8" scope. Let's see what magnitude difference that makes: -2.5 * log(2.25) = -0.88, so about 0.88 magnitudes of difference (a small sketch of this arithmetic follows). If one had two scopes next to each other - one with an eyepiece giving x50 and the other with an eyepiece giving x130 - and switched between them, one should be able to see the star dim by almost a magnitude due to the spreading of its light. If you instead use a single scope and switch eyepieces, the chances of spotting the difference are slim to none - the contrast of the whole view will be altered, and we remember relative brightness differences much better than absolute brightness.
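
A minimal sketch of that arithmetic (cell counts as estimated above):

```python
import math

cells_before = 4     # rod cells covered at x85 (per the estimate above)
cells_after = 9      # rod cells covered at x130
spread = cells_after / cells_before          # light per cell drops by x2.25

delta_mag = -2.5 * math.log10(1 / spread)    # magnitude change per cell
print(f"{delta_mag:.2f} mag")                # ~0.88 magnitudes fainter
```
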
  8. The eye is also a detector, like a sensor, and it behaves very similarly in some respects - one being whether a star is resolved. The analogy with a sensor given above is valid - as long as one pixel covers the star, the star will not dim, because all photons from that star are captured in the same "bucket" - the total number of photons does not change. Similarly, the eye has light-sensing cells, and as long as the star's light is hitting one (or in reality a few - you can't perfectly align to one cell in most cases), it will detect all the light and the sensation of brightness won't change. This is also a matter of physics - light comes in photons and you can't subdivide it infinitely because of that.
  9. How about this article? https://www.cloudynights.com/articles/cat/user-reviews/24-26-mm-eyepiece-comparison-r2651
  10. I prefer to bin in software - later. There is really no difference with CMOS sensors. In fact, software binning afterwards is better, as it gives you more flexibility. Sometimes you don't get the full benefit when binning on camera (not hardware binning - still the "software" type, but done in the camera firmware), since it can round things in a way you might not want. I don't know what the WBPP script does - I don't use PixInsight.
  11. I'm for binning - I always prefer better SNR to empty resolution. It is probably worth binning, and you should bin your linear subs after calibration and prior to stacking. The difference in the image when shown at screen size (fit to screen) is probably going to be rather small. The difference will be most visible when the image is shown at 100% zoom (1:1 - one screen pixel per image pixel). Over-sampled images just look blurry at that setting and stars look large. Noise looks different as well - there is less of it when binned and it is much more fine-grained. Overall I prefer images that are properly sampled. You can see what sort of sampling your image needs: take the FWHM of the stars in your image (beware - different software will report different FWHM values; I trust AstroImageJ) and divide that value by 1.6 - that gives the sampling rate in arcseconds per pixel that you should use. Beyond that, it depends on what you value in your algorithm. Binning - defined as an N x N sum or average - has a couple of advantages and one disadvantage:
     - predictable SNR improvement - it is exactly N times
     - no pixel-to-pixel correlation
     - the disadvantage is increased pixel blur
     The second point (no pixel-to-pixel correlation) is rather moot, since in amateur setups we can't control our dithers with great precision and we often need to align our frames with sub-pixel precision. This means that aligned subs are already subject to resampling. For this reason, I believe a slightly different approach should give the best results in both SNR and resolution - split-bin plus Lanczos resampling for alignment. Instead of doing a "local stack" of pixels - summing/averaging them - one should split each sub into a number of subs of smaller resolution; the respective pixels that would end up in the sum/average are put into different subs. This really shows that stacking and binning are the same thing - and since the subs need to be aligned anyway, Lanczos3 will do a good job of preserving both signal and noise statistics with respect to sampling rate. In any case, you can do some experiments with ImageJ to measure the impact on resolution and SNR of different methods of changing resolution (a simple binning sketch follows below):
     - Make a 512x512 Gaussian noise image with a single star simulated by a Gaussian blur of one pixel, with a certain FWHM (compatible with the sampling rates used).
     - Resize using different algorithms - plain bin x2, bilinear resampling, bicubic resampling, and more advanced methods like cubic spline, quintic kernels and such.
     Measure the stddev on a flat patch of each, and measure the FWHM of the resulting star profile. Lower FWHM and lower stddev mean better resolution and better SNR improvement respectively.
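
A minimal NumPy sketch of the plain x2 average bin from that experiment, demonstrating the expected x2 SNR gain on pure Gaussian noise (sizes and sigma are arbitrary):

```python
import numpy as np

def bin2x2(img):
    """Average-bin an image 2x2 (dimensions must be even)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (512, 512))   # flat patch of Gaussian noise

print(noise.std())          # ~1.0
print(bin2x2(noise).std())  # ~0.5 - noise halved, i.e. SNR doubled
```
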
  12. I think that you can get there with time and practice. Here is my M57, which I took quite a few years ago. It was taken with a 5" scope and a planetary camera. I think I also used rather short exposures, and the image is very aggressively processed - but it shows what level of detail you could expect, and probably even more than this:
  13. There still seems to be a lack of understanding of what binning does and how it behaves - but in reality it is pretty straightforward. We stack images in order to improve SNR. We average our pixels in the "stack" direction (each pixel that we average comes from a different sub). This results in a reduction of "resolution" in the stack direction - instead of a number of subs, we end up with a single image. Binning is the same process, but in a different "direction". We take a 2x2 (or N x N) group of pixels in a single sub and "stack" those together to produce a single output pixel. The result is the same - an improvement in SNR, exactly as in regular stacking, and a reduction in the resulting image size (though we don't bin down to a 1x1 image like we do in regular stacking, where we average all subs). Stack 64 subs? Improve SNR by 8 (square root of 64). Bin 2x2 (which is the same as stacking 4 pixels)? Improve SNR by 2 (again the square root of the number of pixels stacked, sqrt(4) = 2). Same thing.
     There also seems to be a lack of understanding of the difference between hardware and software binning. They are the same - except that the "average" step is performed at a different stage. It is like having a couple of buckets of apples - you can either count the apples in each bucket and then sum the resulting numbers, or you can empty the buckets into one really big bucket and then count the apples. Imagine you can make an error while counting. Counting 4 times is more likely to produce an error than counting once - so it is wiser to transfer all the apples to the single large bucket before counting. If you can't do that, simply count the apples in each bucket and sum your results. This is basically the difference between hardware and software binning - both produce the same SNR improvement except for read noise, where hardware binning produces one "dose" of read noise while software binning produces two "doses" (for a 2x2 bin). There are actually 4 read noise doses involved in software binning - but because of the way noise adds, it turns out you add x2 the read noise level to the image (see the sketch below). If you have 5e of read noise per pixel, binning in hardware will still give 5e, while binning in software will be the same as having 10e. Luckily, CCD sensors, which usually have higher read noise, tend to have a hardware bin option, while CMOS sensors, which can't bin in hardware, have low read noise anyway, so increasing their read noise is not that big a deal. Also, read noise matters mostly when choosing sub duration - so you can compensate for the slightly higher read noise of software binning by using longer subs.
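
A minimal simulation of that read-noise argument (NumPy, sum-binning 2x2 with 5e read noise): four independent 5e doses add in quadrature to sqrt(4) * 5 = 10e, i.e. x2 the per-pixel read noise, while a single hardware read stays at 5e.

```python
import numpy as np

rng = np.random.default_rng(1)
read_noise_e = 5.0
n = 1_000_000   # number of binned super-pixels to simulate

# Software bin: read noise applied per pixel, then 4 pixels summed
soft = rng.normal(0, read_noise_e, (n, 4)).sum(axis=1)

# Hardware bin: charge combined first, read noise applied once
hard = rng.normal(0, read_noise_e, n)

print(soft.std())   # ~10e  (sqrt(4) * 5e)
print(hard.std())   # ~5e
```
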
  14. That was my approach as well.
  15. Maybe you have mixed up dithering and drizzling? Dithering will sort out walking noise (in conjunction with other algorithms, like sigma clip and so on).
  16. Drizzle needs a very specific set of conditions to provide benefit. To name a few:
     - undersampling
     - precise dither offsets
     In most cases these two are not met in amateur setups, and drizzle does not produce the wanted result - resolution improvement. Often, smooth stars are cited as a reason to use drizzle - but that can be accomplished with a slightly more sophisticated resampling algorithm; you don't need to drizzle. On the other hand, when drizzling you are spreading your samples around and lowering SNR. Although it is widely available and often used, I would say that drizzle is one of the least useful tools astro imagers have at their disposal, and it is often misused.
  17. That won't work. You need to know what that option does - and in fact it is pretty clear in your example what the difference is. It is related to the quality estimator and stacking. Global will accept/reject a whole frame based on the quality threshold. Local will accept/reject the local area around an alignment point (and its related square) - thus taking only a portion of the sub for the stack, the portion that is sharp. With global you stack full frames; with local you stack the pieces of each sub that have good enough quality.
     Global:
     - uniform SNR improvement
     - all stars will be equally tight (both those that have alignment points and those that don't)
     Local:
     - varying SNR improvement, unless you take a very large number of subs and things average out (so that there is on average the same number of good patches all over the image)
     - you need to cover the whole image with alignment points for this to work properly, and there needs to be a "feature" around each alignment point for the aligning to use - empty space is a problem; this works best on the Lunar surface and the like
     - very different tightness of stars - where you have an alignment point you'll have tight stars and good SNR; where you don't, stars will be smeared
     I would say that the best approach for lucky imaging with a small aperture is to do global rejection of subs and then use a model that only includes translation and rotation for aligning the subs. AS!3 uses local geometry adjustment, and I believe that this can be a problem in your case. That sort of aligning and stacking deforms each sub to correct for the motion of the atmosphere. Such motion happens on very short time scales - 10ms and less. You are not imaging at such short scales - and that would be next to impossible with 6" of aperture. You might even be using shorter exposures than you really need for what you are doing. There is very little difference between 1/3s, 1/2s and 1s exposures here. Atmospheric movement and distortion happen on scales of 5-6ms. A 1/3s exposure will contain about 60 such intervals - things are bound to average out (1/2s will have about 100 and 1s about 200 - not much of a difference; each one will average out). For this reason you can't do what planetary imaging does - use even the deformed frames by correcting them back. The only thing you can do is select subs that were shot in periods of more stable seeing versus periods of poorer seeing. That, and avoid any issues with mount smoothness and guiding. One could in principle write an algorithm to use local geometry correction in this case as well - using alignment points, even for stars that have poor SNR in a single sub - but it would take ages to stack such an image. The algorithm would first stack every frame to get good enough SNR to detect each star in the image, and then try to guess where each star is in every individual sub to do the local geometry correction.
  18. I just jumped into this discussion again, and I'm not quite sure what is being discussed, but a star is point-like over quite a large span of magnifications. If you want to get somewhat more technical - most people resolve at about 1 minute of arc. The Airy disk size for an 8" scope is 1.28" - we can say that most of the light is concentrated within half of that, so let's say 0.7". We need about x85 magnification to get into the region of 1 arc minute. To start resolving the Airy disk, you need a couple of times that. This is why people say to do star testing at x200 or x300. Even at these magnifications, the Airy disk is still not large enough for the eye to see significant dimming. The eye does not work linearly - having half the light will not make something look half as bright to our eye - this is one of the reasons we have a magnitude system that is logarithmic in nature. It is also the reason most people don't notice vignetting of up to 50% visually. Yes, you will see a star dim when you crank up the magnification - but you need to use crazy powers to see it clearly.
  19. I have to be very careful here, as there are different answers depending on context. If you increase magnification, for anything that has a surface, the surface brightness with respect to apparent angular size will decrease. On the other hand, the surface brightness with respect to actual angular size will not change. What does this mean? Take the sky, for example - say the sky has a certain brightness, like mag21 - meaning magnitude 21 per arc second squared. Increase the magnification and it will look darker. Measure it, though, and regardless of magnification or pixel scale you will always get mag21 per arc second squared. So in absolute terms, brightness does not change. In relative terms it does change - if you spread the light over more receptors (eye cone cells or camera pixels), the same amount of light gets registered by more receptors, each receptor gets fewer photons, and in that sense each receptor registers a lower signal - but the total number of photons remains the same (a small sketch of this follows). Makes sense?
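
A minimal sketch of that bookkeeping (the photon count is a made-up illustrative number): the total is fixed by the sky and the aperture; changing the pixel scale only changes how many receptors share it.

```python
photons_per_arcsec2 = 1000.0     # fixed by sky brightness and aperture (illustrative)
patch_arcsec2 = 4.0              # a 2" x 2" patch of sky

for arcsec_per_px in (2.0, 1.0, 0.5):               # coarser to finer sampling
    px_area_arcsec2 = arcsec_per_px ** 2
    pixels = patch_arcsec2 / px_area_arcsec2
    per_pixel = photons_per_arcsec2 * px_area_arcsec2   # signal per receptor drops...
    total = per_pixel * pixels                          # ...but the total stays 4000
    print(f"{arcsec_per_px}\"/px: {per_pixel:.0f} photons/px, total {total:.0f}")
```
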
  20. It actually is - a "true point source" with respect to the resolving power of any telescope that we have. Maybe for a telescope the size of the solar system it would not be a point source. This does not mean that a star will be point-like in a telescope - telescope optics can't resolve anything past Airy disk sizes, and Airy disk size depends on aperture. It also depends on seeing - in poor seeing, a threshold star will fade out of view quicker, because the combined Airy disk of the telescope and the seeing aberrations spread the light too much.
  21. Just a couple of notes: there is a different chip size (though not significantly different), and also a different focal length. With the Esprit 100, a lot of the FOV will be wasted. One scope produces diffraction spikes and the other does not. This will cause some issues when processing and with the final appearance of the image. Maybe the best thing would be to match the FOV and have both scopes be refractors, so you don't have to worry about diffraction spikes. Two Quattro 8s could work - but besides aligning the FOVs, you would have the issue of tube rotation and aligning the diffraction spikes as well.
  22. https://iss-sim.spacex.com/ It's not that hard at all
  23. I'm not 100% sure as I've not tried this myself - try following link: https://github.com/vtorkalo/ASCOM.DSLR/raw/master/DSLR.Camera Setup.exe
  24. At first I thought you were asking about planetary imaging, where using a barlow makes sense, but I'm not sure it makes sense for DSO imaging. Your scope has a 480mm focal length (or thereabouts) - with the ASI1600 and its 3.8um pixel size, that gives you 1.63"/px (see the small calculation below). That is really about as high as you should go with an 80mm scope. Adding a barlow will simply capture no additional detail. You can check this by examining your current subs: take a sub shot with that scope and camera and measure the FWHM of the stars - if it is lower than about 2.5" FWHM, then you can go with higher resolution; otherwise there simply is no point in doing so. In case you still want to use a barlow and image at high resolution regardless, this is probably the best photographic barlow - not cheap though: https://www.firstlightoptics.com/barlows/baader-vip-modular-2x-barlow-lens-125-and-2.html The element is wide enough to illuminate a full frame sensor and there is an exact spec on focal length, so you can dial in the wanted magnification. The barlow element has a T2 connection - easy adaptation to a camera.
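
A minimal sketch of that sampling check (the x2 barlow factor is nominal; the FWHM/1.6 rule of thumb is from post 11 above):

```python
focal_length_mm = 480.0
pixel_um = 3.8

native = 206.265 * pixel_um / focal_length_mm               # ~1.63 "/px
with_barlow = 206.265 * pixel_um / (focal_length_mm * 2)    # ~0.82 "/px

# Useful sampling ~ star FWHM / 1.6, so 1.63"/px already suits ~2.6" FWHM stars -
# the barlow only pays off if your measured FWHM is below about 2.5"
print(f"native: {native:.2f}\"/px, with x2 barlow: {with_barlow:.2f}\"/px")
```
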
  25. Of course you are making things complicated - you are mixing absolute surface brightness and magnification. The eye responds to relative surface brightness - add magnification and things will become dimmer. Aperture does not change absolute surface brightness, but it changes relative surface brightness. I'm also making things complicated by using the terms absolute and relative. What I mean is surface brightness per actual sky area vs surface brightness per apparent observing area - or something like that... getting lost in terminology here, help!