Everything posted by vlaiv

  1. Yes, but I don't think that was the case - here is a per-channel comparison of the largest star: there are two green, one blue and one red image of that star extracted from the Bayer matrix of the posted sub. It looks like a similar level of bloat in green (1 and 4) and red (3 - judging by the background level). Blue seems tightest, and my guess would be that this is down to the correction of the scope and focusing.
  2. @Swoop1 Did you use a UV/IR cut filter? If not, star bloat can be caused by that. The ASI290MC does not have one built in.
  3. Make sure you have an IR/UV cut filter. Your camera is mono and has only an AR-coated window, and you will be using it with refractive optics (even if it is a well corrected one, you still want to filter out the IR and UV parts of the spectrum to remove excessive bloat/blur). Other than that - nothing special. Set up your mount in EQ mode, do a decent polar alignment and start off with shorter subs (15-30s) - see how you get on with those.
  4. The magnification of a Barlow is affected by the distance between its lens and the focal plane and depends on the focal length of the Barlow element itself. In order to calculate the exact magnification you need to know this focal length. Some companies publish this data - for example, the Baader VIP Barlow has a focal length of -65.5mm. Note that the focal length is negative - this is because a Barlow is a negative lens (it diverges rays rather than converging them like a positive lens). In any case, the magnification given by a Barlow is calculated as: magnification = 1 - distance / focal_length (you must use the negative focal length in the above equation). For the Baader VIP Barlow to work as prescribed, the distance between the Barlow element and the focal plane must satisfy: 2 = 1 - (X / -65.5) => X / -65.5 = -1 => X = 65.5mm. If you change that distance to, say, 80mm you will actually get: magnification = 1 - (80 / -65.5) = ~x2.22. The prescribed magnification is usually achieved when you place the focal plane at the "shoulder" of the eyepiece inserted into the Barlow - or when you put the focal plane at the end of the Barlow body. To answer your question - if you place the Barlow in front of the diagonal you will get quite a bit more magnification than the prescribed one.
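For anyone who wants to play with the numbers, here is a minimal Python sketch of that formula; the -65.5mm value is the Baader VIP figure quoted above, and the function name is just for illustration:

```python
# Barlow magnification from element-to-focal-plane distance.
# Formula from the post: magnification = 1 - distance / focal_length,
# with the Barlow focal length entered as a negative number.

def barlow_magnification(distance_mm, barlow_fl_mm):
    return 1 - distance_mm / barlow_fl_mm

print(barlow_magnification(65.5, -65.5))  # 2.0  - the prescribed x2
print(barlow_magnification(80.0, -65.5))  # ~2.22 - extra spacing gives more magnification
```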
  5. AstroImageJ is certainly worth looking into - it has a lot of astronomy related tools, mostly aimed at photometry and astrometry (it uses an external plate solver, I believe). However, a blink sort of thing is rather easy to do in regular ImageJ (AstroImageJ is just a bunch of tools built on top of regular ImageJ - both are open source and quite extensible with plugins and macros) - it has a feature called a Stack, which represents a sequence of images that you can work with. For example, to quickly see if there is something interesting in the data, I'd use a plugin that registers frames based on some features (it will use stars if you select local maxima and tweak some parameters) and then simply create a standard deviation projection of that aligned stack. This will show if there are significant outliers in any of the subs. If there are any - then you can start "blinking" / searching for them and so on.
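If you prefer to script it rather than use ImageJ, a rough numpy sketch of the same standard-deviation-projection idea might look like this (it assumes the subs are already registered and loaded into a 3D array; file loading and alignment are left out):

```python
import numpy as np

def std_projection(aligned_stack):
    """Per-pixel standard deviation across all frames of an aligned stack
    (shape: frames x height x width). Transients such as satellites,
    asteroids or cosmic ray hits show up as bright pixels/streaks."""
    return np.std(aligned_stack, axis=0)

# outliers = std_projection(stack)  # inspect the result with a strong stretch
```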
  6. Very good point from @The Lazy Astronomer above - what's been said is a lower bound (and not a strict one). If you feel comfortable going with longer subs - by all means; there will be fewer of them to store and manage. Sometimes, though, it is better to have more subs to work with: in case of an accident (like bumping the mount or someone shining a torch towards the telescope) there is less data to waste if you go with shorter subs (it is better to discard 1 minute than 5 minutes of your imaging time). Satellites and passing planes are handled by sigma rejection, so you should not count those as "discards".
  7. In that case - just check the ADU value of a highlight in your image (a star core). If it's around 65000 - divide the background value by 4 to get electrons; if the highlight is instead around 16384, use the number you measure directly as the average / median background (select a patch of empty sky to measure the value).
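As a hedged sketch of that check (it assumes the usual ZWO behaviour of padding 14-bit data up to 16 bits, and that e/ADU is close to 1 at the gain used - substitute the published value for your gain if it isn't):

```python
def background_electrons(background_adu, highlight_adu, e_per_adu=1.0):
    # Star cores near ~65000 ADU suggest 14-bit data padded to 16 bits,
    # so divide by 4 to get back to native ADU; cores near ~16384 mean
    # the data is already in native 14-bit ADU.
    scale = 4 if highlight_adu > 60000 else 1
    return (background_adu / scale) * e_per_adu
```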
  8. Not really sure, but all it really takes to measure it is a calibrated sub from one of your previous recordings and knowledge of the gain you used. ZWO publishes read noise vs gain for their models - so you can use that: and all you need to do now is measure the sky background in electrons - which is the slightly tricky part, because you need to properly convert the ADU units that you measure back to electrons via e/ADU, which is also published for the selected gain by ZWO. Once you have the average background signal and the read noise - then it is easy. Say your background is 200e and your read noise is 1.5e. The important thing to note is that LP noise is equal to the square root of the sky signal, so LP noise will be ~14.14e (square root of 200). That is ~x10 larger than the read noise, so you can halve the LP noise, which means 1/4 of the background signal (signal is the square of the noise) - in the above example you could use 1/4 of the exposure time that produced the sub with the 200e background and 1.5e read noise. Makes sense? Btw - there are a couple of threads on SGL that go into depth on how to measure the background / convert from ADU back to electrons (it's sometimes tricky as it depends on the bit depth of your camera and so on) - so do a search and see what you can find. There might even be software that will do the above for you. I've glanced over some discussions - maybe SharpCap or some other similar software can do it?
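A small Python sketch of that arithmetic, using the example numbers from the post (200e background, 1.5e read noise) and the x5 swamping ratio discussed in the next post - the "halve the LP noise" shortcut above gives essentially the same answer:

```python
import math

sky_background_e = 200.0   # measured background signal in electrons
read_noise_e = 1.5         # from ZWO's read noise vs gain chart
target_ratio = 5.0         # want LP noise at least 5x read noise

lp_noise = math.sqrt(sky_background_e)                  # ~14.14e
print(lp_noise / read_noise_e)                          # ~9.4x read noise at present
needed_background = (target_ratio * read_noise_e) ** 2  # signal = noise squared -> ~56e
print(needed_background / sky_background_e)             # ~0.28, i.e. roughly 1/4 of the exposure
```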
  9. I'm saying that the decision on individual exposure length does not depend on how faint your target is. Here is a simplified explanation. Say you take one 300s exposure, and say you take two 150s exposures and add them (not average, but simply add - although averaging and adding are really the same, let's keep it simple). The only difference between those two cases - 300s in one go or 300s in two goes - will be how much read noise there is. Everything else is the same: in both cases you accumulated 300s of everything - target signal, sky signal, dark signal... It all grows linearly with time and it does not matter if you split the accumulation into multiple parts. All except read noise. Read noise grows with the number of exposures, as each exposure adds one "dose" of read noise. Ok. Now onto the next step. Noise adds like linearly independent vectors - as the square root of the sum of squares. Same thing as the Pythagorean theorem, right? In the above image AC is larger than both BC and AB, as is FD compared to EF and DE - but notice one thing: as one side of the triangle becomes much shorter than the other, the hypotenuse starts being just a bit longer than the longer side. This is the key for determining sub duration - you want the hypotenuse (total noise) to be almost the same as the largest noise source (LP noise) - and this happens when the short side of the triangle (read noise) is much smaller than the long side (LP noise). This only works when stacking - it makes no sense for a single exposure. The final difference in noise in the stack (the signal will be the same regardless - there is a certain amount of signal in 300 seconds however you slice it) will be negligible when read noise is sufficiently small compared to LP noise. The ratio of the two is what matters, and I advocate a ratio of 5 - have your LP noise at x5 the read noise and the total noise will be just: total = sqrt(lp_noise^2 + (lp_noise/5)^2) = sqrt(lp_noise^2 * (26 / 25)) = lp_noise * sqrt(26/25) = lp_noise * 1.0198 = lp_noise increased by ~2%. You'll be hard pressed to see a 2% increase in noise - if read noise is 1/5 of LP noise, it is as if LP noise were 2% higher and there were no read noise. Makes sense? In any case - target brightness does not play a part in the above explanation unless the target is brighter than the sky, and then we should use target shot noise to swamp read noise rather than sky shot noise / LP noise.
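A two-line check of that quadrature sum, if you want to convince yourself of the ~2% figure:

```python
import math

lp_noise = 1.0
read_noise = lp_noise / 5.0  # read noise at 1/5 of LP noise
print(math.sqrt(lp_noise**2 + read_noise**2) / lp_noise)  # ~1.0198, i.e. ~2% increase
```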
  10. Does not depend on the target unless the target is brighter than the light pollution (which is rarely the case). Read noise has to be compared to the highest source of noise that grows with time (dark current noise, target shot noise or LP / sky shot noise - the last is usually the highest when we use cooled cameras).
  11. Depends on your setup, and the best approach is to measure it.
  12. Looks to me like some sort of cable snag. You have the same issue in both RA and DEC - which is unusual, because DEC should really "stay put" without too many corrections; however, your graph is almost the same on both axes. I'm guessing that a cable is pulling on the OTA in such a way that it moves both axes, and when you make an RA correction it actually manages to make the cable give a little, while it takes longer for DEC to recover - then the cable tightens up again, the RA correction gets a bit more slack out of the cable, and DEC slowly follows. Binding would produce similar RA behaviour, but I'm not sure it would have the same impact on DEC.
  13. It looks like it happens fairly regularly - here is an interesting pass on the 16th of February next year:
  14. Interesting question, and yes - you have the answer: shoot the Moon separately and the Pleiades separately and do a composition. But the main question for me is how often such a phenomenon occurs. Lunar declination can reach 28 degrees and the Pleiades are at 24 degrees - so it is possible - but for the Moon to be in that same spot in the sky - I guess it's not that often?
  15. @OK Apricot Here is a summary on my part: - It is ok to have small pixels on a long focal length as long as you understand that you will be over sampling and that the way to recover the SNR lost to this is to bin your data afterwards. - You should aim at about ~1.2"/px sampling rate with an 8" EdgeHD - but the actual sampling rate that you should strive for can be estimated from your conditions with the following formula: FWHM you usually achieve / 1.6 = sampling rate. If you manage 2" FWHM on average, then you should bin your data to be close to 2 / 1.6 = 1.25"/px. Don't stress too much if you are off by some margin, and I advocate slight under sampling rather than slight over sampling, as it will produce a better looking image in post processing - you will be able to keep the noise down better and apply better sharpening (see the sketch below). - Keep your guide RMS as low as possible. The rule of thumb Olly mentioned is just a rule of thumb - the reality is, lower RMS = better image, or rather tighter stars / smaller FWHM. Seeing, scope aperture and mount performance add up in a certain way (not like ordinary numbers, but as the square root of the sum of squares), so if any of the three components is small compared to the others, its contribution diminishes - so you don't need to go overboard chasing good RMS if your seeing is not very good or you are using a small aperture - like 80mm or so. For most setups, guide RMS at half the sampling rate (provided that the sampling rate is sensibly chosen) is where its impact simply starts to diminish - hence that rule of thumb.
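Here is a small sketch of that FWHM / 1.6 rule and the resulting bin factor; the 3.76µm pixel size and 2032mm focal length are only example numbers, not a recommendation for any particular camera:

```python
def target_sampling(fwhm_arcsec):
    """Sampling rate to aim for, in arcsec/px, from the FWHM you typically achieve."""
    return fwhm_arcsec / 1.6

def native_sampling(pixel_um, focal_mm):
    """Standard plate scale formula: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

def bin_factor(fwhm_arcsec, pixel_um, focal_mm):
    return max(1, round(target_sampling(fwhm_arcsec) / native_sampling(pixel_um, focal_mm)))

print(target_sampling(2.0))          # 1.25 arcsec/px for 2" FWHM
print(native_sampling(3.76, 2032))   # ~0.38 arcsec/px native
print(bin_factor(2.0, 3.76, 2032))   # bin x3 gets close to the target rate
```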
  16. Could be - it is really hard to tell, as the differences are very tiny and they arise from different sources. Sometimes the choice of algorithm can have a significant impact. The level of stretch and the amount of sharpening do too, so processing must also be accounted for. That trick with Fourier analysis is a very good one, as you can vary how much of the higher frequencies you decide to cut off, thus allowing you to "dial in" the sampling rate - but it is best done on linear data (as most measurements are).
  17. That was to emphasize that if you are properly sampled and you remove high frequencies, it will make a difference - but if it does not make a difference, well, you are not properly sampled and there is no data in the high frequencies - which means you are over sampled rather than under sampled (under sampling would leave data at all frequencies but would exhibit aliasing artifacts as well).
  18. The above comparison is not dealing with under sampling - but rather over sampling. I did that to show that an astronomical image sampled at 0.6"/px is over sampled by roughly a factor of x2.
  19. @andrew s This is probably the best way to compare the original image and the filtered one in the frequency domain: I tried to match the brightness in both this time.
  20. Of course I accept that there are differences in the leaf picture - that is why I posted it. The same math applies to both images, and the math says: if there is something at high frequencies and you remove it - it will be obvious in the image. On the other hand - if there is no signal in the high frequencies and you remove it - there will be no difference. This serves to prove that the astronomy image is indeed over sampled, as removing the high frequency components only alters the noise but not the data itself. Sure, it is hard to differentiate noise from data in low SNR areas - but the above equally affects both high and low SNR areas - no difference there, so if you can't see a difference in a high SNR area due to this, you won't in a low SNR area either. Maybe I can do another comparison - same thing, but this time I'll blink the images instead?
  21. Could you be so kind as to point them out for me? I really can't see any differences in detail. I can see one in the level of stretch - the background is darker in the bottom one, but that is just due to a different stretch after the forward / backward FT. There is also a difference in noise grain - again due to the removal of high frequencies - noise spans all frequencies equally and will change its appearance when high frequencies are removed. But I can't spot any differences in the data itself, so I would appreciate you helping me out by pointing to those differences.
  22. I've yet to see aliasing effects from an undersampled image in astro imaging. Here is a brief explanation of what aliasing really is and why it's not a major concern in astrophotography. If we start with some function that in the frequency domain looks like this: its sampled form will be that waveform repeated left and right all the way to infinity - spaced at intervals of the sampling frequency. Restoring the original signal is then done by using a box filter on the original waveform to isolate it from all the other copies. This can only be done if there is clear separation between the waveforms - which is the reason why: 1. the signal must be band limited, and 2. we need to sample at at least twice the maximum frequency (so that the copies don't overlap). When we sample at a lower frequency than the one given by the Nyquist criterion - we get this case: the resulting frequency representations overlap / add and no filter can separate them. This is where aliasing comes from. Now back to astronomical images. In the generation of astronomical images we have 3 major processes that create blur - the Airy pattern, Gaussian blur from seeing and Gaussian blur from tracking error (provided that the tracking error is random enough). Look at the frequency response of all three components: the Fourier transform of a Gaussian is a Gaussian, so we have: and the Fourier transform of the Airy pattern is this: all three combined (they multiply in the frequency domain) give a function that falls extremely sharply towards zero. To put this into perspective, I'm going to use data posted in another thread (since I have it on hand and it is still linear) and examine it in the frequency domain: the frequency response has been normalized so that the peak is at 1 - look how fast it plummets towards zero. By comparison, the telescope MTF part alone would be somewhere around 0.4 at the right edge. Wherever you put a vertical line in the above graph to represent the sampling point - the overlap that represents aliasing will simply be so small that it won't be seen in the final image. To prove the point further - I'm going to resample the above image to x10 smaller size using nearest neighbor. In normal images that should create strong aliasing effects - but here we get: there is simply no trace of any sort of aliasing effect. Undersampling is a safe operation in astronomy imaging and many wide field images are in fact under sampled without any issues (you need to undersample a wide field image - you can't get 8 degrees into a 2000px wide image without being at 14.4"/px).
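To make the "falls extremely sharply" claim concrete, here is a tiny numerical illustration using a pure Gaussian blur only (seeing-like, 2" FWHM as an example value); the real combined response with the Airy pattern falls even faster:

```python
import numpy as np

fwhm_arcsec = 2.0
sigma = fwhm_arcsec / 2.355                      # Gaussian sigma from FWHM
freqs = np.linspace(0.0, 2.0, 9)                 # spatial frequency in cycles/arcsec
mtf = np.exp(-2 * (np.pi * sigma * freqs) ** 2)  # FT of a Gaussian is a Gaussian
for f, m in zip(freqs, mtf):
    print(f"{f:4.2f} cyc/arcsec -> response {m:.2e}")
```

By 1 cycle/arcsec (the Nyquist frequency for 0.5"/px sampling) this response is already down to around 1e-6, so whatever folds back when sampling coarser than that sits far below the noise floor.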
  23. Well, UHC and CLS are not particularly well suited for planetary. They both have a similar response, but CLS is broader than UHC. They both try to capture the OIII and Ha/Hb lines. That is sort of counterproductive for planetary, as you have two almost opposite ends of the spectrum together - both short and long wavelengths - which makes it hard to optimize for either side.
  24. That is exactly my point. If you have an image that is properly sampled and you cut frequencies above a certain point - it will show in the image. Take a look at the post above that - where I did the same - can you see an equal loss of detail in that image? Can you spot any place where you can no longer resolve a feature - like the smallest fibers not being resolvable in the leaf example?
  25. No, yes, depends. Olly is right - if you want to get the stacked passband graph - just "multiply" the respective graphs. Treat the graphs as 0-100% and multiply them like numbers 0-1. If both graphs are at 100% - the resulting value is 1 x 1 = 1 = 100%. If both graphs are at 50% - the resulting value is 0.5 x 0.5 = 0.25 = 25%. If one of the graphs is at 0% - the resulting value is 0 x whatever_number = 0 = 0% - the combination blocks wherever either of the filters blocks (two or more). In that sense, a combination of filters is always more restrictive on light than either of the individual filters. Color will be distorted and there will be a loss of light. So that is the NO part (see the sketch below). However, using filters, if you don't want to capture color, can be beneficial to planetary imaging in several ways. Narrowband filters are especially good for this purpose - but you must find a balance. - The narrower the band of the filter, the less seeing disturbance there will be. Seeing acts differently at different wavelengths (see below) and when you narrow down the wavelengths - you "choose" only one way the seeing acts. On the other side - a narrower bandwidth means less light, so longer exposures are needed and seeing affects you more because of this (it is harder to freeze the seeing with longer exposures). - The wavelength of the narrowband filter is also a balance. The detail that you can capture depends on the wavelength of light: the longer the wavelength, the less detail. You can literally improve resolution x2 if you use 400nm light instead of 800nm (detail depends linearly on the wavelength used). On the other hand, the atmosphere bends shorter wavelengths more than longer ones (think rainbow and prism: blue light / shorter wavelengths are bent more than red light / longer wavelengths). For this reason, seeing is much worse at shorter wavelengths, as there is more distortion. Again - a trade off: will you go for longer wavelengths with better seeing but lose detail due to the inability to capture detail at longer wavelengths, or will you go for shorter wavelengths to be able to capture detail, but risk the atmosphere ruining it with seeing distortion? Like I said - a tradeoff.
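For completeness, the "multiply the graphs" rule as a few lines of Python - the two transmission curves below are made-up numbers purely for illustration, not real filter data:

```python
wavelengths_nm = [450, 500, 550, 600, 650]
filter_a = [0.90, 0.80, 0.10, 0.00, 0.90]   # hypothetical transmission curve (0-1)
filter_b = [0.00, 0.90, 0.90, 0.90, 0.80]   # hypothetical transmission curve (0-1)

stacked = [a * b for a, b in zip(filter_a, filter_b)]
print(stacked)  # never higher than either filter alone; zero wherever either blocks
```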