Everything posted by vlaiv

  1. Depends what your priorities are. The EQ5 is the most stable of the three - and you want a stable platform for imaging. Each of these is really a compromise when doing astrophotography with a telescope like the 130PDS. The mount you should be looking at is this one: https://www.firstlightoptics.com/equatorial-astronomy-mounts/skywatcher-heq5-pro-synscan.html but that mount is often perceived as too expensive, and that is one reason why people decide on a lighter mount - one of those you listed. Another reason why you might choose a lighter mount is portability. If you often have to travel to a dark location to image - you'll compromise and take a lighter mount even though it's not as stable as an HEQ5, for example. The EQ3 is barely adequate for imaging in my view. I would use it with a very short FL scope for wide field imaging. The EQ5 would really be the minimum for a longer FL scope like the 130PDS/150PDS or refractors up to 100mm. The EQ35 was made as a sort of hybrid of the two above - the "precision" and "weight capacity" of the EQ5 combined with the portability of the EQ3. I can't really say whether that was successful or not. My guess is that the EQ35 is a bit better than the EQ3 - but not quite as good as the EQ5 for imaging.
  2. My critique was specifically about the "choice of color palette", as you put it. These images are the result of processing the same data. If the captured data has a certain ratio of R, G and B (for each pixel), regardless of whether that ratio is the correct one (whether it has been color calibrated or not) - how can it end up being a vastly different color? The interesting part is that this is not a case of one odd image showing different color from the rest - they all show different color in one part of the image or another.
  3. How did we get to this? Any resemblance? I do get that different people will produce different images - but this is all from the same data, so how come the color results are so different? You can see the original works here:
  4. Not sure why you are quoting me on that, as it is exactly what I said in the sentence you quoted (although you quoted only part of it). The whole sentence I wrote goes like this: That is exactly the same thing, except I wanted to point out the relation with the interference pattern of the double slit, as both show correlation of photon landing positions (even in the single photon interference case).

I don't follow - no one was talking about sensor size, were we? We were discussing the critical sampling frequency and the F/ratio needed for pixels of a certain size to capture everything the aperture can render.

Can you provide a reference for "spot size"? I have never heard of that term being used in the context of telescopes (it is related to lasers and Gaussian beams). I have heard of a spot diagram - which is something that optical designers use - but that is a tool for understanding the optical performance of a system using geometrical optics, which is not what we are discussing here - although there is some relation to the Airy disk.

That is complete nonsense. I looked at the reference provided, but I was not able to find the exact statement (that a "1/8th wave optic blurs a star to 4-5 airy disks"). Would you be so kind as to point me to the exact place this is quoted from?

We are not discussing the effects of long exposure imaging here, but rather lucky-type planetary imaging, where we try to capture moments of good seeing and reduce the impact of seeing to a minimum. As such, it is very important to understand the limits of the optics without the impact of seeing. We hope that in a few rare moments we will be able to utilize the full potential of the aperture - and that is what we aim for. We can also argue the case where we do long exposure imaging and have the full impact of the atmosphere - the same rules apply, but in that case we won't be looking at the Airy disk as our blur (and thus the source of the cut off frequency), but rather at the achieved star FWHM, which depends on aperture, seeing and mount performance. In that case the Nyquist theorem still applies - but we determine the cut off frequency in a different way because of all the variables involved.
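A quick Python sketch of the critical sampling relation being discussed - the cutoff frequency of a circular aperture at the focal plane is 1 / (lambda * F), and Nyquist requires two samples per cycle, which gives F_critical = 2 * pixel_size / lambda. The 500nm wavelength and 3.75µm pixel size below are just assumed example values:

```python
# Critical F-ratio for a given pixel size.
# Cutoff frequency at the focal plane: 1 / (lambda * F) cycles per micron,
# so Nyquist needs pixel <= lambda * F / 2, i.e. F_critical = 2 * pixel / lambda.
wavelength_um = 0.5    # assumed: green light, 500 nm
pixel_um = 3.75        # assumed: example pixel size in microns

critical_f_ratio = 2.0 * pixel_um / wavelength_um
print(f"Pixels of {pixel_um} um are critically sampled at about F/{critical_f_ratio:.0f}")
# -> F/15
```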
  5. But yes, they are correlated. If you want to go into the quantum realm and discuss individual photons - then look at, for example, the double slit experiment. Although individual photon landing positions are very random - they are actually governed by the wave function interference pattern. Which means that there is strong correlation between photon detection and the wave shape - where there is complete destructive interference, no photons can be detected.

Similar to this, the Airy pattern is the round-aperture equivalent of the interference pattern that forms from the double slit - it is a wave function interference pattern. It also has complete destructive interference. At certain points there is actually 0 probability of detecting a photon. You can see the above curve as a probability distribution for photon detection, or you can see it as what the sensor detects given enough photons, and indeed: in the center you can see that it is an almost continuous function of photon count, while at the periphery of the image you can spot the discreteness of photon counts because the signal is much lower.

This correlation is precisely what creates a band limited signal - if you examine this pattern by means of the Fourier transform - you'll get the low pass filter response that we know as the MTF of the optical system, and it looks like this: The point where this line (and it is represented by a line because it is a cross section of a rotationally symmetric 2D function) falls to 0 is the cutoff frequency. For circular apertures it depends only on aperture size.
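For reference, here is a small Python sketch of that curve - the standard diffraction-limited MTF of a circular aperture, with spatial frequency expressed as a fraction of the cutoff frequency D / lambda:

```python
import numpy as np

# Diffraction-limited MTF of a circular aperture - the low pass response
# described above. nu is spatial frequency as a fraction of the cutoff
# frequency nu_c = D / lambda (cycles per radian on the sky).
def mtf_circular(nu):
    nu = np.clip(nu, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu ** 2))

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"nu/nu_c = {frac:.2f} -> MTF = {mtf_circular(frac):.3f}")
# The curve falls to 0 exactly at nu/nu_c = 1 - the cutoff frequency,
# which for a circular aperture depends only on the aperture diameter D.
```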
  6. There is no difference, as photon rate is a continuous signal. Photon rate, and thus sampling of it, does not depend on light intensity or exposure duration - only the measurement noise depends on these, and noise is not signal - it does not act as signal as far as the optical configuration is concerned and it is not band limited - it has a rather uniform frequency spectrum. Imaging produces a photon rate, if one chooses to compute it - which is not a whole number but a fraction. One just needs to divide the number of photons captured by the exposure duration. In the limiting case, as exposure -> infinity, measured photon rate -> true photon rate (noise goes to 0).
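A tiny simulation of that limiting case - the measured photon rate (counts divided by exposure duration) converges to the true rate as the exposure grows, while the shot noise shrinks. The 3.7 photons/s rate is just an assumed example value:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 3.7  # photons per second (assumed example value)

for exposure_s in (1, 10, 100, 10_000):
    counts = rng.poisson(true_rate * exposure_s)     # whole number of photons
    measured_rate = counts / exposure_s              # a fraction, not a whole number
    print(f"{exposure_s:>6} s exposure -> measured rate {measured_rate:.3f} photons/s")
# the longer the exposure, the closer the measured rate gets to 3.7
```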
  7. @Peter_D I'd like to point out that using a different method and getting a different result is a bit like saying: I'm using a different method to solve a quadratic equation and I'm getting different results - but I think my way is the best. The above approach is directly related to the fact that the aperture produces an Airy pattern at the focal plane (physics of light) and that the Airy pattern acts as a low pass filter with the above mentioned cutoff frequency (the Fourier transform of the Airy pattern shows this). As such, it is a band limited signal, and according to the Nyquist theorem you need to sample it at twice the highest frequency component of that band limited signal.
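To make that concrete, here is a short Python sketch of the sampling rate the Nyquist argument gives - lambda / (2 * D) radians per pixel. The 500nm wavelength and 127mm aperture are assumed example values:

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

# Critical sampling rate: the aperture is band limited at D / lambda cycles
# per radian, so Nyquist sampling is lambda / (2 * D) radians per pixel.
def critical_sampling_arcsec(aperture_mm, wavelength_nm=500.0):
    d_m = aperture_mm / 1000.0
    lam_m = wavelength_nm * 1e-9
    return (lam_m / (2.0 * d_m)) * RAD_TO_ARCSEC

print(f'{critical_sampling_arcsec(127):.2f} "/px')   # ~0.41 "/px for a 127mm aperture
```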
  8. As far as I remember - the imaging app triggers a dither once an exposure is finished, and it does so by issuing a command to the guiding application. The guiding app is responsible for the dither move - which is basically just a long guide pulse in a random direction. PHD2 usually does it once it is instructed by the imaging app.
  9. I can't seem to find anything in the calibration files, so it looks like it is neither amp glow nor a light leak. Judging by the structure of the stack - it is most pronounced in the red channel - which leads me to believe it is related to light pollution, but it's not light pollution itself, as it's not a linear gradient (or close to linear). There is a component of this that is a linear gradient, but there is something else as well. Can you examine your subs? I think one or a few of them might have a passing high altitude cloud in that spot.
  10. Not sure if this is applicable - it's like saying that the Sun can be seen at midnight - because it can be seen by someone for whom our midnight is their noon. They can see it, but we can't. Similarly, just as the Sun is at its highest position when it is noon in our local time, the incidence of detected meteors will be highest when it is midnight (or past midnight) in our local time. At that moment it will be 6pm somewhere else - and at that place they won't see their highest incidence then, but later, at their midnight.
  11. Apparently up to the age of 20 - even 10cm is fine, but I don't seem to remember holding anything that close to my face when I was that young - well, with the exception of threading a needle, yep, that was done at such short distances
  12. Yes, I was just thinking of that - the "second part of the equation". We can control how we record and process an image, but we can't control how it is being displayed / viewed. Computer screens are matched rather well to human vision and viewing conditions. Most of us view computer screens at a reasonable distance, and 96dpi matches that. Mobile devices, on the other hand, are somewhat "problematic". The megapixel craze is apparently at work in that area as well. Most modern phones have display densities of around ~400dpi (or ppi, to be precise in display terminology), with a tendency to increase. That is simply too high to be a match for ordinary people. If ~100ppi is sensibly matched to human vision at ~70cm, then ~400ppi would require a 70/4 = 17.5cm distance. Most of us don't hold a phone screen 17.5cm from our eyes. In fact most people past the age of 40 can't even focus that close without reading glasses.
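The same arithmetic in a short Python sketch - the viewing distance at which one display pixel subtends about one arc minute (roughly 20/20 vision). The ppi values are the ones mentioned above:

```python
import math

# Distance at which one pixel of a display with the given density subtends
# one arc minute.
def matching_distance_cm(ppi):
    pixel_cm = 2.54 / ppi                        # one pixel in cm
    one_arcmin_rad = math.radians(1.0 / 60.0)
    return pixel_cm / one_arcmin_rad

for ppi in (96, 400):
    print(f"{ppi} ppi -> about {matching_distance_cm(ppi):.0f} cm viewing distance")
# ~90 cm for 96 ppi and ~22 cm for 400 ppi - the 70 cm / 17.5 cm figures above
# correspond to roughly 1.3' per pixel rather than exactly 1'.
```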
  13. Yes, it is interesting, isn't it. It depends on what type of sensor it is - whether it has micro lenses or not and whether it uses a Bayer matrix or not. Here are a few examples that a Google image search returns: This one is particularly interesting as it shows manufacturing defects as well: Some of those pixels will have lower QE. Another reason to use flats - even for planetary. Computer screen pixels are interesting as well - also not something that can be easily guessed:
  14. I was just thinking of that as an explanation. After midnight - several things come together. The position of the radiant, and hence the part of the sky that you'll be able to see, plus darkness and the level of LP - all of that comes together after midnight and lasts until the morning, when sunlight spoils everything.
  15. Yes - that is the classic case of "pixelation" - but there is no pixelation at all. When you enlarge an image beyond 100%, the software needs to "make up" missing pixels - as there are more screen pixels than pixels in the image (the image gets stretched and we need to put something in those "holes"). There are different ways to do this and it is referred to as scaling interpolation (in this case up scaling, as we are making the image larger than the original - as opposed to down scaling, which makes it smaller).

The choice of algorithm used for this is what creates pixelation - or rather, one particular algorithm called nearest neighbor resampling is guilty of pixelation and of the idea that pixels are small squares. Pixels are not small squares! Neither image / mathematical pixels (which are just points without dimension) - nor camera pixels, which are often little circles or rounded squares rather than perfect squares: This is what color pixels (and the underlying circuitry) look like under a microscope.

Here, look at this: That is the small image (the one that I rescaled and posted above) - zoomed in to 1600% - using the nearest neighbor algorithm. This is the IrfanView software for viewing images on Windows. It has an option to turn proper interpolation on/off when zooming in: Look at this, same image, same zoom level - but with this option turned on: No more "pixels"

It is due to the zoom level and the way the software handles zoom. Photoshop seems not to have any sensible interpolation when zooming in - look at this answer: https://community.adobe.com/t5/photoshop-ecosystem/visual-error-erro-visual/m-p/10837190 to quote:
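If you want to reproduce that comparison yourself, here is a small Python sketch using Pillow - the same image enlarged 8x with nearest neighbor versus a proper interpolation filter. The filename is just a placeholder:

```python
from PIL import Image

img = Image.open("planet_small.png")              # placeholder filename
target = (img.width * 8, img.height * 8)

# nearest neighbor: every source pixel becomes a visible square
img.resize(target, Image.Resampling.NEAREST).save("planet_8x_nearest.png")
# Lanczos interpolation: smooth result, no "squares" - and no extra detail either
img.resize(target, Image.Resampling.LANCZOS).save("planet_8x_lanczos.png")
```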
  16. You can do a simple trick. Make your landscape close to you, and choose it in such a way that it hides the horizon. If your landscape is close - like a few hundred meters - its light will not get a chance to be attenuated by the atmosphere. If you look at a mountain in the distance versus some nearby object in daylight - you'll notice that the mountain is hidden in haze, often desaturated in color and without much contrast, while the close object looks normal. You don't want the "mountain" to be your landscape - you want the "grass" to be your landscape. There is much less atmosphere between you and a nearby object than between you and a distant one. The next part of the trick is to make the landscape cover the part of the sky near the horizon. Shoot "up" and not "down". Even in a desert where the air is dry and clear and there is not much light pollution - you'll get murk if you shoot so that distant objects make up your landscape and the horizon is visible: But look what happens if you choose your landscape to be close and to cover the horizon (like when you stand at the foot of a hill so that the hill covers the horizon): Left half of the image - no haze, right half of the image - haze. The difference? Well, the left part of the image has a landscape that is close and hides the horizon. Hope this helps
  17. Not sure even where to start. I don't want to confuse you further, and in part I'm also confused by some of the things you said. When you say that the image is pixelated, what exactly do you mean? I often hear this - but in reality, "pixelation" of an image has nothing to do with the image itself - it has to do with the way the image is displayed, and depends on the device and software that display it. On the other hand - if you do too much wavelet sharpening and there are some stacking artifacts - either because of low bit depth or due to compression or whatnot - you'll get an effect similar to pixelation that one might confuse with "actual pixelation".

In any case, what I'm trying to say is the following: planetary images are best captured at the critical sampling rate / critical resolution. This puts a limit on the size of planets in pixels that depends on the aperture size of your telescope. If you "decide" that you want a larger image of the planet - enlarging it will not bring any additional detail. Also, the technique used for enlarging the image is what is responsible for "pixelation", not the image itself. I'm against software enlargement, and I let people examining the image decide if they want to zoom in a bit more - that depends on their display and their vision. The image should be captured and presented at a resolution that supports the level of detail in the image.

Here, look at the image that you posted at 100% versus the same image rescaled to critical sampling resolution: Not sure what these look like on the device that you are using to view them, but I'm using a computer screen with 96dpi at 70cm away - which means 1px is roughly 1.3' - or just slightly above 20/20 vision resolution, which is one arc minute. To me the bottom image is properly sampled - it shows no pixelation, and shows detail that is "compatible" with this resolution. If I start zooming in - I'll see no more detail, and if I move back a bit and look at it from 2m away - I will be limited by the sharpness of my eyesight - I'll start to lose detail.

The top image is just a blurred, rescaled version of the bottom image - it does not provide more detail, and if I zoom the bottom version in to 800% (because they mismatch by exactly x8 in resolution), I'll get the same image as yours above in terms of detail: Yes, this is simply the enlarged bottom image. But why enlarge it in software - we should let every user adjust the image to match their display device, and only make sure that we match the level of detail in the image to the resolution / sampling rate of the image. By the way, if you want to see the level of detail that is better suited to this scale, it would be this: Does any of this make sense?
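As a quick check of the "1px is roughly 1.3'" figure above, a couple of lines of Python - the angle subtended by one pixel of a 96dpi screen viewed from 70cm:

```python
import math

pixel_cm = 2.54 / 96                                        # one pixel at 96 dpi
angle_arcmin = math.degrees(math.atan(pixel_cm / 70.0)) * 60.0
print(f"one pixel subtends about {angle_arcmin:.2f}'")      # ~1.3'
```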
  18. Why is your Jupiter gigantic in that image? It is 950px in diameter, but your telescope is only capable of recording it at a resolution of 119px at the moment (48.8" diameter, 0.41"/px). This is what your image should look like: And when you present it properly - it really looks like an excellent capture, but the question is - why is it so large in your processing? What sort of barlow did you use, and did you drizzle for some inexplicable reason?
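The arithmetic behind those numbers, as a short Python sketch:

```python
# Apparent diameter divided by the critical sampling rate gives the expected
# planet size in pixels.
jupiter_arcsec = 48.8        # apparent diameter at the time
sampling_rate = 0.41         # critical sampling for this aperture, "/px

expected_px = jupiter_arcsec / sampling_rate
posted_px = 950
print(f"expected ~{expected_px:.0f} px, posted {posted_px} px "
      f"(about x{posted_px / expected_px:.0f} oversampled)")
# -> expected ~119 px, posted 950 px (about x8 oversampled)
```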
  19. Probably not. In principle, the mathematics of what is happening is "known" - the signal is weakened as you move down towards the horizon and there is an LP gradient as well - so in theory you could do it with some clever math, but the problem is the SNR part of things. If light from the target is blocked too much - you simply won't record enough signal in a single exposure to be able to "reveal" it. When you subtract the LP signal and boost the attenuated target signal back up - all you'll get is noise.
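A toy Python simulation of that SNR problem - attenuate a faint target, add a strong LP background with shot noise, then subtract the background and boost the result back. All numbers are made-up example values:

```python
import numpy as np

rng = np.random.default_rng(1)
target = 5.0          # target electrons per exposure (before attenuation)
attenuation = 0.05    # heavy extinction low above the horizon
lp = 2000.0           # light pollution electrons per exposure

recorded = rng.poisson(target * attenuation + lp, size=100_000).astype(float)
recovered = (recorded - lp) / attenuation     # the "clever math" recovery

print(f"recovered signal: {recovered.mean():.1f} +/- {recovered.std():.1f}")
# the mean comes back near 5, but the per-exposure noise is hundreds of times larger
```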
  20. That sort of looks like amp glow, but it could be a light leak as well. What does your master dark look like? Do you use the viewfinder cover when taking your subs?
  21. Just forget everything about EVs and ISO and all of that - which you use in daytime photography - when shooting astrophotography. It is counterproductive and often confusing. Noise is a complex topic, but here is the important part - you don't worry about the noise on its own - you worry about the signal to noise ratio. If you want a cleaner looking image - you need a longer exposure to gather more signal, as the signal grows faster than the noise. It is often not feasible to keep the aperture open for, say, half an hour or an hour - and that is what stacking is used for. Stacking is a method of "adding" multiple exposures to form one long exposure that is (almost) the same as taking a single long exposure of the same total duration. If you wish - we can go deeper into the topic of noise and how it works, but for the time being - the stronger the signal (which even includes transparent skies and the position of the target) and the longer the total exposure - the better the SNR of the image (so expose for longer by taking many images and then stacking them).

That is perfectly normal. The atmosphere is not transparent - it acts almost like an ND filter. The more dust and particles there are in the atmosphere - the stronger the filter. The clearest sky is at the zenith. The lower you look towards the horizon - the stronger the ND filter becomes, and it simply attenuates light from the target. You can see this effect at every sunset / sunrise. You will be able to look at the sun just as it is about to set without any filtration. It will be red and the light will not be strong at all. Contrary to that - you had better not try to look at the sun at the zenith - it will be painfully bright and it can even seriously hurt/damage your eyes if you do.

There is a second part to haze - and that is light scatter. The sky is blue because it scatters light - otherwise it would be transparent. The same thing happens at night - except there are fewer light sources. Under heavy light pollution, the sky will look orange (because of the type of light pollution) - and again it has to do with the density of the sky - when you look towards the horizon there is much more sky in that direction (a thicker filter), and dust also tends to stay low (an air density thing). This means that low towards the horizon, LP scatter is the strongest. Those two effects combined - a strong ND filter and strong LP scatter - produce that grayish looking murk when your shot includes a part of the sky close to the horizon.
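Here is a small Python simulation of the "signal grows faster than noise" point - stacking N subs multiplies the signal by N but the background shot noise only by sqrt(N), so SNR improves roughly as sqrt(N). The photon counts are assumed example values:

```python
import numpy as np

rng = np.random.default_rng(2)
signal_per_sub = 10.0   # target photons per sub (assumed)
sky_per_sub = 100.0     # sky background photons per sub (assumed)

for n_subs in (1, 4, 16, 64):
    # simulate 50,000 "pixels", each being the sum of n_subs exposures
    stack = rng.poisson(signal_per_sub + sky_per_sub, size=(n_subs, 50_000)).sum(axis=0)
    snr = (stack.mean() - n_subs * sky_per_sub) / stack.std()
    print(f"{n_subs:>3} subs -> SNR ~ {snr:.1f}")
# SNR roughly doubles every time the number of subs is quadrupled
```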
  22. Just fix the secondary and repeat the test. Tweaking the secondary is rather easy - find a bright star, place it smack in the center of the frame, defocus - and tweak the secondary until you get concentric rings.
  23. Forgot to say - if you want to try the "LRGB" approach with narrowband imaging - then get duo/tri band filters. These "mix" the signal in real time and the background is added only once - while the Ha and OIII (and even SII) signal is recorded at the same time. This does for narrowband what L does for LRGB.
  24. Depends on the target composition, but in the general case (or in the majority of cases) - it will hurt more than help. The rationale is as follows (very simplified): one of the following 4 cases can happen, and we need to observe the signal to noise ratio - here we will look at target signal versus background noise (not quite how we define SNR - but it simplifies the analysis). We will also treat signal as on/off rather than a continuous spectrum of values - again for simplicity. The method of stacking will be addition (which again simplifies the reasoning).

1. No Ha and no OIII - background noise only - the noise will be increased by stacking
2. No Ha, but OIII - the OIII signal will remain the same as in the OIII stack - no signal increase
3. No OIII, but Ha - the Ha signal will remain the same as in the Ha stack - no signal increase
4. Both Ha and OIII signal - the signal will increase as we add Ha and OIII signal

The first case happens to the background - so stacking OIII and Ha will certainly increase the background noise. The signal will improve only in places where there is both Ha and OIII signal - and in most cases the signal is not equally distributed (otherwise we would have "monochromatic" targets with no color variation) - so this is probably the least likely scenario. In all other cases the signal remains the same as in the individual stack - no signal improvement. So there you have it - a very simplistic analysis - the signal will not improve in most cases while the noise in the background will rise - so the ratio of the two goes down.
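A toy Python version of case 3 above (Ha signal present, no OIII signal) - adding the two stacks adds a second helping of background noise but no extra signal, so the target-signal-to-background-noise ratio drops. All numbers are made-up example values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
background = 50.0      # sky background per stack (electrons, assumed)
ha_signal = 20.0       # signal present in Ha only (assumed)

ha_bg = rng.poisson(background, n).astype(float)
oiii_bg = rng.poisson(background, n).astype(float)

print(f"Ha alone:  {ha_signal / ha_bg.std():.2f}")              # ~2.8
print(f"Ha + OIII: {ha_signal / (ha_bg + oiii_bg).std():.2f}")  # ~2.0
```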