Everything posted by vlaiv

  1. Depends on the mount you are trying to guide. Longer is better, provided that your mount supports it - it minimizes seeing influence, and indeed you won't be chasing the seeing. Some mounts are capable of being guided with 10+ second exposures - but not many. You need a smooth running mount with slow periodic error (and almost no random jitter) to be able to do that.
You can figure out the best guide exposure for your mount if you record the periodic error (or the periodic error residual after applying periodic error correction) and examine it for the maximum rate of change of error - this is given in arc seconds per second, or perhaps percent of sidereal speed - in either case the unit is the same, "/s. From that you can figure out how long your guide exposure can be, by setting an upper limit on how much you want your mount to deviate from the intended position. If your mount, for example, has 0.1"/s as the max error rate and you allow 0.4" of absolute error - then you can use a maximum of 4s guide exposure (or rather guide cycle - to be very precise about it, that is exposure + latency + guide pulse duration, but in reality you don't need to be that precise because most other things related to this aren't). See the sketch below.
If this is too complex for you (you don't have PE/PEC done and don't want to do it) - another measure is sub FWHM vs guide exposure length. With the EQ6-R you should be able to do at least 3-4s guide exposures and that is what you should be using. Use shorter than that if you assess that with such exposure durations you are constantly getting higher star FWHM than with shorter ones. Try going higher as well and use the same measure - if FWHM gets better, use a longer guide exposure.
On my belt modded HEQ5 I usually use 4s guide exposure - sometimes less if seeing is particularly good. In different conditions, I don't see much difference between 4s and 6s. I went up to 8s, but that causes another problem - sync with dither, as waiting for settle time increases. That is ok for CCD style imaging where one uses 10 minute subs, so dither can be 30s or so, but I use CMOS and short exposures (1-2 minutes), so having 30s of dither and settling would waste as much as 20% of imaging time.
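A minimal sketch of that drift-rate calculation (the 0.1"/s error rate and 0.4" error budget are just the example numbers from above):

```python
# Rough guide-cycle estimate from a measured worst-case drift rate (example numbers).
max_drift_rate = 0.1   # arcsec per second - steepest slope of the PE (or PE residual) curve
error_budget = 0.4     # arcsec - how far the mount is allowed to wander between corrections

max_guide_cycle = error_budget / max_drift_rate
print(f"Maximum guide cycle: {max_guide_cycle:.1f} s")   # -> 4.0 s
```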
  2. I would choose the 16" dob. Both have advantages and disadvantages, and it's more a question of what I can live with rather than which one is better. Of course, my choice is biased by both my previous experience and my interests. I suspect that portability will give you the same sort of headache with both, but of a different kind. An 11" SCT will require a quite heavy EQ mount to be stable - at least something in the EQ6 class. That means around 35 kg of gear (mount head, counterweights and tripod; any sort of battery for field work is extra). The scope is around 12 kg, so that totals almost 50 kg. A 16" dob is probably going to weigh about the same if manual (although there are lightweight versions that total about 40 kg, but those don't have goto, for example), while some examples weigh as much as 90 kg total (like the SW 16" goto dob)!
Setup time will probably be in favor of the dob. "Compactness" (if one can use that word with such large scopes) will be a plus for the SCT (tripod legs fold, the mount head detaches, the scope is shorter - but this adds to setup time as you need to assemble more parts in the field). If you don't need goto - maybe a better (lighter) option would be a regular dob and an EQ platform. That will give you about an hour of tracking (before "rewinding"), will cost less and will probably be less bulky, but you won't have automatic finding of objects (if that is something that is important to you). Other quirks include dew on the SCT and slower cool down for larger optics (a minus for the dob, but can be handled with fans in both, and there are also dew heaters). Some don't like the observing position with a dob. I'm used to it, but my dob is 8" and therefore quite small - I do my observing seated. A 16" is going to be quite high, and you'll need to stand most of the time and even use a step ladder, depending on your height.
Optically, the 16" will have an edge, for both planets and DSO work. The edge in planetary performance you probably won't notice 99% of the time - you need very steady skies to see the difference with apertures of that size. The DSO difference will be noticeable, no question about that. Some of the above might not be applicable in your case, since you live in Israel, right? (reading from your account data). I guess humidity might not be an issue most of the time (unless you are close to the sea), and desert climate tends to provide a steady atmosphere (but also rapidly changing temperature) - so one is a plus for larger aperture, while the other is a minus, since rapidly changing temperature prevents the telescope from being thermally stable. For planetary imaging I would also choose the 16" aperture, and a dedicated planetary camera (rather than a DSLR). If DSO imaging is of any interest, then that pushes your decision in the SCT direction - not because of the scope, but rather because of the mount. You can reuse the same mount for AP and get yourself a manageable scope for AP (don't try to do it with the SCT - you will run into a host of problems, and that is not something you want to tackle if you don't have experience in AP, unless you want to use the SCT for imaging for some particular reason).
Last thing - budget. A 16" dob with EQ platform will probably cost at least 50% less than an SCT and EQ mount, but that will largely depend on the model (dob model in the first case, mount model in the second, and of course the SCT model - regular Celestron, EdgeHD, or maybe a Meade one ...). This is for the "base / bare bones" system. Accessories will probably cost more for the dob.
I know, that sounds strange, but a 16" dob is a fast scope, and you need to account for that, so if you want very good views with it, that means a coma corrector for visual and higher end eyepieces that can handle the speed of the scope. With the 11" SCT you don't have that issue - at F/10 most EPs will work fine. Hopefully this will help your decision.
  3. Don't remember that either. In all probability it was less than 10s, or at most that duration. I don't remember how many subs I took either - but it must have been a few dozen or so (20-30 probably; I doubt it was more than that).
  4. Not sure if you are going to see much LP in exposures as short as 15s - but that does not mean there is none - it just does not build up much. It's going to be easiest to spot when you do a stack of subs and look at the final result - there will almost certainly be some gradients in the background. I think that a decent LP filter is going to help a bit. Maybe the best effect will be seen from a UHC type filter - that is not a general purpose LP filter and it is mostly suited to emission nebulae. Other than that, you need to figure out the type of LP (at least the dominant type). If your sky at night looks orange - or has a similar hue - that is good in terms of LP suppression, as it can be filtered; neutral gray/bluish is worse, as it usually means there is plenty of LED lighting around, which is harder to filter out. A good general purpose LP filter (for galaxies and such) is not going to be cheap, so it depends on your budget. Given that you are using a barn door tracker, maybe it would be better to invest that kind of money into a small AP mount like the Star Adventurer or AZ-GTi (with accessories to go EQ mode), unless of course using one's own DIY tracking device is part of the fun in this hobby.
  5. Good image. Maybe put an alignment point on the moons as well - they seem a bit stretched vertically as it is. Going with a shorter exposure might also help - somewhere around half of that, 6ms or so.
  6. Hahaha, this one is a tough one. I'm now almost confident it was the mentioned satellite. I just ran an astrometry job on the image and compared the "extra" star position to the motion of the satellite - and I think it matches extraordinarily well. If you look at the "over exposed" version of the trail, you will see a brightening and dimming trail that matches the trajectory of that satellite. Very, very interesting that it flared for such a short period of time.
  7. I set my location to Dudley and at the time shown by @wxsatuser here is how fast it was moving: roughly 11'5"/s or 665"/s - now that is seriously fast. If we go by an upper bound of 5"/px, and we assume that the motion was on the order of one pixel, that would mean the "flash" lasted less than 7.5ms. We know that a 4.2 mag star saturated in the 15 second exposure. This flash lasted ~x2000 less, so it would need to be roughly x2000 brighter to saturate (this is not quite right as we don't know by how much the star saturated, but let's go with that). That is around 8.25 mags brighter, so it would need to be about mag -4 to saturate the sensor for such a short duration, and it does not look like it reaches anywhere near that bright on the graph that I gave. Also, I don't think that it could flare for so short a time without having a brightening and dimming phase before and after - and that would leave a trace.
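For anyone wanting to check the arithmetic, here is the same reasoning in a few lines (the inputs are the rough estimates from above, nothing more):

```python
import math

exposure = 15.0          # s - sub length in which the mag 4.2 star saturated
flash_duration = 0.0075  # s - upper bound if the flash stayed within ~5"/px at ~665"/s
star_mag = 4.2

ratio = exposure / flash_duration                 # ~2000x less integration time
delta_mag = 2.5 * math.log10(ratio)               # ~8.25 magnitudes
required_mag = star_mag - delta_mag               # ~ -4
print(f"x{ratio:.0f} shorter -> needs to be {delta_mag:.2f} mag brighter, i.e. mag {required_mag:.1f}")
```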
  8. I'm not convinced, unless there is a flaw in my reasoning / calculation. It moves really fast - so it should either leave a trail, if the flare was not almost instant, or it should be fainter if the flare was really short - which would make it "stellar" looking. Do you have the exact time of the frame / location? I would like to look up the position in Stellarium at that time, but also measure the speed it was moving relative to you (angular, across the sky).
  9. Hm, I need to retract the above comment. Here is a screen shot from a paper in Chinese or Japanese or some other East Asian language that uses logograms - certainly one I can't speak or read, but the graph is self explanatory I think: from the graph it looks like GLOBALSTAR M006 is capable of producing the brightness of a mag 2 star. That is about x8 brighter than 52 Cyg, so it should be able to make the same "footprint" in about 2 seconds. However - that thing just zooms around at about 3'-4'/second, and with a 300mm lens, even with large pixels, you still need it to stay inside 5" to make the above signature, and that would make the flare last at most ~30ms. That is almost 100 fold less than the 2 seconds above - which means it should look like a mag 7 star - certainly not saturating.
  10. After some thought, we need to check whether it is actually possible that said satellite could have caused such a "flare". I have certain reservations about it, so it is best to do a "sanity check". We would need to calculate the angular speed of said satellite - we have basic info on that: orbital period 124.3 minutes (semi major axis 8252 km). We would also need the resolution of the above shots in arc seconds per pixel. We can conclude that the flaring object did not move more than one pixel for the duration of the flare - this gives us the maximum total flare time. Next we need to calculate the number of solar photons reflected from a flat surface of a certain size (we can just approximate there, based on the Sun's magnitude in Earth orbit and some reflecting surface area common for satellites, or even find the exact size of the solar panels for that particular satellite). We can model the reflection as 100% without much impact on the result. Then we need to see how many photons will reach an aperture on Earth (given the distance and aperture size) and integrate over the max duration of the flare. I have a feeling it will not be enough to saturate the sensor for that brief moment - but it might be. A rough sketch of the last step is below.
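The final step (photons reaching a given aperture for a given apparent brightness and flare duration) can be sketched like this. The zero point and the example numbers are order-of-magnitude assumptions, not measurements:

```python
import math

# Very rough order-of-magnitude sketch. Assumes a V-band zero point of
# ~1e10 photons / s / m^2 for a mag 0 source (approximate value).
ZP_PHOTONS = 1.0e10

def photons_collected(mag, aperture_m, duration_s):
    flux = ZP_PHOTONS * 10 ** (-0.4 * mag)        # photons / s / m^2 at the given magnitude
    area = math.pi * (aperture_m / 2) ** 2        # collecting area of the aperture in m^2
    return flux * area * duration_s

# e.g. a mag 2 flare lasting 30 ms seen through a 60 mm camera lens (example values)
print(f"{photons_collected(2.0, 0.06, 0.03):.0f} photons")
```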
  11. Any sort of glitch in tracking would leave a trail. If the star is bright enough to saturate the sensor, it will leave a trail due either to slow motion of the scope between shifted positions, or, if the jump is very quick, to OTA/lens shake.
  12. That would be a possible explanation if there were a significant mismatch between the brightness of the two stars. I'm not sure, but it does seem that the "ghost" image is actually brighter than 52 Cyg - it features a (very faint) horizontal diffraction spike as well, not present on 52 Cyg. Even if this diffraction spike is caused by the lens and star position in the FOV rather than intensity - they do both look to be about the same in intensity (or nearly so). This would mean that about equal exposure time is needed for both - and other, less bright stars would then also have an identifiable double. A meteor down the line of sight is plausible but a very low probability event - if the star image were at least a bit out of shape - elongated in one direction - that would be a much, much more likely scenario.
  13. This is interesting; not sure if I can provide an answer to the question though. It might be some sort of transient feature - very interesting, as I started a thread yesterday with the idea of trying to find such transient features and their relation to Fast Radio Bursts. This one has the "expected signature" - a star like flash that is obviously not a cosmic ray hit, appearing only on one sub in the set. Here is the thread for anyone interested:
  14. I can see two moons in fact - nice image.
  15. You can make one, but it really comes down to how you use it. A bad pixel map is "contained" inside the master dark, or rather, based on a set of darks you can identify all the pixels that are bad and treat them accordingly (usually you either mark them as bad so that when alignment of subs is performed these values are not used and are interpolated instead, or you do it prior to alignment, in the calibration phase). It is up to the software to handle this, and a bad pixel map is basically the same thing as a dark library, except you are not interested in the dark current signal - only in pixels behaving badly - like those that are dead (always returning 0), hot (returning the clipping max value) or stuck (always returning the same value regardless of the signal captured). All of these can be inferred from a set of darks and per-pixel statistics, and it's up to the software you use to take the appropriate action - something along the lines of the sketch below.
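As an illustration only, a bad pixel map can be derived from a stack of darks with something as simple as this (the thresholds are arbitrary example values):

```python
import numpy as np

def bad_pixel_map(darks, saturation=65535, stuck_tol=0.0, sigma=5.0):
    """darks: 3D array (n_subs, height, width) of raw dark frames."""
    mean = darks.mean(axis=0)
    std = darks.std(axis=0)

    dead = mean == 0                                   # always reading zero
    hot = mean >= saturation * 0.95                    # at or near the clipping value
    stuck = std <= stuck_tol                           # same value in every dark
    # pixels whose mean is wildly off compared to the rest of the sensor
    outlier = np.abs(mean - np.median(mean)) > sigma * np.std(mean)

    return dead | hot | stuck | outlier                # boolean map: True = bad pixel

# usage: bpm = bad_pixel_map(np.stack(list_of_dark_frames))
```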
  16. Doubles the data collected? You'll have to be more precise. If you keep your sensor and your exposure length, the amount of data collected for a set total imaging time will be the same - that is, the number of exposures times the amount of data per exposure. It is not the lens that determines the amount of data. Swap the sensor for one with more pixels (even a smaller chip, but more "megapixels") and you will collect more data. Use shorter exposures, regardless of F/stop - more data collected. If you were thinking about signal per single exposure - then yes, keep the exposure length and sensor the same and change the F/stop of the lens, and the level of signal per exposure will change accordingly (you are keeping everything else the same and just changing the light collecting surface - greater surface, more light collected, and vice versa).
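To put a number on that last point: for a fixed focal length and exposure, signal per pixel scales with the aperture area, i.e. inversely with the square of the F-number. A quick sketch with made-up example numbers:

```python
focal_length = 135.0          # mm (example lens)

def relative_signal(f_number):
    aperture = focal_length / f_number       # aperture diameter in mm
    return aperture ** 2                      # signal per pixel ~ collecting area

print(relative_signal(2.8) / relative_signal(4.0))   # ~2x more signal at f/2.8 than at f/4
```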
  17. The topic of LRGB vs RGB is a seriously complex one, and even without the unknowns (spectrum of the LP and that of the target) it gets complicated really quickly. The fainter the signal, and the more pronounced a single component (be that R, G or B) is, the greater the difference will be in favor of the LRGB approach. It comes down mostly to read noise, but color plays a very important role as well (the ratio of signal strengths in the R, G and B channels).
Maybe the simplest explanation of why LRGB works better can be found in an edge case (not likely to ever dominate the image, but it does make a point). Imagine you have 3 hours of LRGB vs 4 hours of RGB (12h total imaging time). With LRGB you already have 3h worth of luminance, but in order to process RGB data - particularly to do a color independent stretch - you need to create luminance data from RGB. Let's assume for simplicity that it is the sum of the R, G and B channels. It is in fact a weighted sum, and the best match of L to visual intensity, if the color data is calibrated to the sRGB standard, is 0.2126*R + 0.7152*G + 0.0722*B = luminance (the Y component that best matches human luminosity perception). We will stick with the 1/3 each combination for simplicity, but the argument holds.
You could say that 4h of "synthetic" luminance is better than 3h of the real thing, but that might not actually be the case. Say you have a certain read noise and you take 6 minute subs in each case. The L component will carry 30 doses of read noise (3h of 6 minute subs), while the synthetic luminance will carry 120 doses (4h of 6 minute subs x 3 channels). The faintest signal will be most affected by this, especially in parts of the image where you have single channel signal - like for example Ha nebulosity, which is picked up by the red channel only. There you are creating luminance by adding empty G and B channels with no signal, yet each of them has read noise, so you are effectively only adding read noise to the synthetic luminance. See the sketch after this post.
On the other hand, when doing LRGB, you don't need to do equal times - you can do something like 6:2:2:2 and get an even better result. This is because of human perception of noise - we are far more sensitive to variations in light intensity than in hue/saturation, so you can trade color noise for smoother luminance and the image will look better. After all, that is the reason why binning 2x2 when shooting R, G and B works - it won't capture the finest detail, but that does not matter, as the brain picks up detail far more easily from luminance data (there are other areas where this fact is exploited - JPEG compression, for example, relies on it as well - it loses more information in the hue/saturation domain than in the luminance domain to achieve compression).
As for RGB providing better saturation - that is solely down to processing, as you capture the same RGB data when doing LRGB (using the same camera/scope/filters) - so the color information will be the same, and only processing can lose color data if not done properly; the capture process will preserve it.
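A small sketch of the read-noise part of the argument (the read noise value is made up; sub counts are the 3h vs 4h example from above, and this only compares accumulated read noise, not signal):

```python
import math

read_noise = 2.0        # e- per sub, per channel (example value)
sub_length_min = 6

# 12 h total: 3 h of real L (plus 3 h each of R, G, B) vs. 4 h each of R, G, B
subs_L = 3 * 60 // sub_length_min                 # 30 luminance subs
subs_per_channel_rgb = 4 * 60 // sub_length_min   # 40 subs per color channel

# read noise adds in quadrature across the subs that are summed into luminance
noise_real_L = read_noise * math.sqrt(subs_L)
noise_synth_L = read_noise * math.sqrt(subs_per_channel_rgb * 3)   # R + G + B subs

print(f"real L: {noise_real_L:.1f} e-, synthetic L: {noise_synth_L:.1f} e-")
# the synthetic luminance carries roughly 2x the accumulated read noise
```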
  18. Not sure if exact sync of subs is needed - if anything resembling a signal is detected on any two subs - each one from a different scope / location but covering the same celestial coordinates in the same time frame (the actual time frame is dictated by the intersection of the two exposures that captured it) - then we could say we have something interesting. Of course, multiple scopes / locations would only increase confidence in a genuine event. I don't think anyone should "double" their kit just for this effort alone, but many people do image the night sky and there is a chance that some of them will image the same part of the sky in roughly the same time frame.
The actual effort needed would be to first come up with a protocol for examining subs from a session and extracting possible candidates - this should be fairly easy: sigma clip stacking, where instead of taking the image result we examine rejected pixels, or groups of pixels, appearing on only one sub at a particular location (see the sketch below). Next we create a list of candidates, which can consist of just a few numbers: RA, DEC, start time, end time, and some sort of rough photometric measurement to get an idea of intensity. Such data can then be submitted to a repository which does cross reference checks and produces "confirmed" events (any submitted events that are close enough in the given coordinates). Of course this would need software support, but for a start it can be a simple two or few party collaboration - much like people do with imaging, where they agree to collaborate on a target to produce an image. The only differences here are that the end result is not an image (although it can be), but a search for fast transient features, and that it needs to cover the same time interval (not so with imaging).
Indeed, we have no idea if the phenomenon will be detectable in the visible part of the spectrum - either because it does not produce radiation in that part of the spectrum, or maybe the radiation is too weak to be detected (either because of the size of our instruments, or even in principle, due to the distances involved and attenuation in the interstellar / intergalactic medium). We could do rough calculations though - we can get the total power registered by a certain radio telescope, and as far as I've read it is quite strong - comparable to the total solar energy output in a single day. We could make the simple (and I'm certain wrong) assumption that it follows black body radiation, figure out how much energy there would be in the 400-700nm range given the power in the radio domain, scale that to a "sensible" aperture - like 100-200mm - and convert to a photon count. If we get anything "detectable" - meaning at least 5-10 photons or more per event - then it would make sense to try.
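The candidate-extraction step could be as simple as this kind of per-pixel outlier test on aligned subs - a rough sketch, not a finished protocol, with an arbitrary kappa threshold:

```python
import numpy as np

def transient_candidates(subs, kappa=5.0):
    """subs: aligned frames as a 3D array (n_subs, height, width).
    Returns (sub_index, y, x) of pixels that stick out in exactly one sub."""
    median = np.median(subs, axis=0)
    mad = np.median(np.abs(subs - median), axis=0) + 1e-9    # robust per-pixel spread
    outliers = subs > median + kappa * 1.4826 * mad          # bright outliers only

    # keep only locations flagged in a single sub (cosmic ray / transient signature)
    single_hit = outliers.sum(axis=0) == 1
    hits = np.argwhere(outliers & single_hit[None, :, :])
    return [tuple(h) for h in hits]
```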
  19. I've just been reading the latest astronomy news (as one does on a quiet Sunday afternoon) and I noticed that FRBs are quite a hot topic at the moment. That got me thinking: could the amateur community get involved in some way? Obviously direct observation is waaaay out of reach, but could there be another way? After all, we don't have a clue what might be causing them, and we don't know the true "extent" of the phenomenon. What if this phenomenon is not restricted to the radio part of the spectrum, and in fact there is emission over the whole spectrum? That would mean that there should be a signature in the visible part of the spectrum as well. Given the intensity of the phenomenon, I don't think it is far fetched to assume that there should be at least a couple of photons, even for a very brief duration, that could be picked up by an amateur telescope.
Maybe we already know about this phenomenon and attribute it to something else? I'm aiming here at the "cosmic ray hits" that we get on our images from time to time. We consider them to be just noise and use different algorithms to remove them from our data - like sigma clip stacking. What if some of the events that we classify as cosmic ray hits in fact come from FRB type events? Since the majority of FRB events are a one time thing - they would not repeat on multiple frames - we would have no reason to believe they are anything more than a "glitch".
How would we then go about trying to detect such events? I think the answer is really simple - collaboration. Comparing subs that different people took with different gear at different locations at the same time, of the same region of the sky, could be a way to detect this. If a "cosmic ray hit" occurs in more than one instance at roughly the same time (subs are unlikely to be precisely timed, and are likely of different duration - but there is certainly going to be overlap, and FRB events are very short, so the chance of spanning multiple subs is effectively zero) and at the same place in the sky - then I would say we have a likely candidate for an FRB type event.
So the first stage would be to see if such events actually exist (and are within the reach of amateur equipment). Next would be to try to match them with actual detected FRBs - as far as I can tell there is significant effort within the science community to increase both the detection rate of these events and the precision of their locations. In a couple of years I think it will be commonplace to have multiple event detections each day with their positions known - then we can compare our data to them and see if we find any matches. What ya all think of this?
  20. I think that this was exactly what @ollypenrice tried to explain with this post - "crop factor" has nothing to do with "/pixel. Resolution in terms of "/pixel is defined only by focal length and pixel size. Sensor size is not directly related to this - there are small sensors with large pixels and large sensors with small pixels (and anything in between). I initially misunderstood the post because it lacked the attached images for some reason, so I did not pay attention and thought we were talking about crop factor in terms of FOV / lens, rather than cropping the image and thinking it will somehow change "/pixel and give one more "zoom". This topic actually touches upon a subject that I've wanted to start a discussion on for some time now - the difference between "recording" resolution and "presentation" resolution. In principle you can record at higher resolution (fewer "/pixel) but still "present" the image at the proper sampling/resolution given the conditions the image was taken in (seeing and even guiding). In this case "sub optimal" guiding is not such a big deal - one only has to accept being able to produce images of a certain detail scale (what we call resolution), even if one has a scope/camera that is capable of recording a finer detail scale in principle (higher resolution). I think I've developed a very good "fractional" binning algorithm that would facilitate choosing the presentation, or working, resolution based on measurements of the image.
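For reference, the "/pixel figure mentioned above comes straight from pixel size and focal length (the 3.75 µm / 600 mm combination is just an example):

```python
def arcsec_per_pixel(pixel_size_um, focal_length_mm):
    # 206.265 converts the small-angle ratio (µm / mm) to arcseconds
    return 206.265 * pixel_size_um / focal_length_mm

print(arcsec_per_pixel(3.75, 600))   # ~1.29 "/px regardless of sensor size
```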
  21. The image seems to be stacked ok - but yes, there is walking noise that is not "walking properly". Can you inspect individual frames and see what sort of movement between frames you are getting (follow a particular star or set of stars and see how they move)? Another option that can help diagnose the issue is to stack the subs without aligning them - you should get a sort of "star trails" image depicting how your mount moved or dithered. If you are guiding (which is most likely, in order to get this sort of thing) - the problem could be poor polar alignment. It could maybe be due to orthogonality error, but I'm not sure about that. If you are not guiding - then maybe some sort of weird cable snag or mount friction is restricting motion in one direction.
  22. Could be focus issues, but collimation and cool down issues as well. I think it's worth trying though. In any case, you can use the 14" Dob to produce the same resolution image as with the C8, but using shorter exposure time and getting higher SNR (fewer frames need to be stacked - you can be picky about the quality of frames and include only the best) - just match focal lengths and you'll get the same scale. The best tip for focusing is to do it on something that is point like - if you are shooting Jupiter for example, concentrate on one of its moons and try to get the smallest "footprint" of it in the frames - much easier than trying to focus on planet features when seeing is not perfect.
  23. Most dedicated astro cameras don't have internal storage (hence they require a laptop/computer). DSLR cameras do. "Must have" is a somewhat vague term. In its most basic form, one must have something to capture images with (any sort of camera), some sort of lens, and something to put that camera on - and that is basically it. You can take astro images with a point and shoot camera on a tripod. On the other hand, here is my list of "must have" features:
- a proper mount - one that is able to carry the load, has minimal backlash / flex, and guides well
- a proper scope - one that has enough corrected field to be matched with the chosen camera, and has enough aperture
- a set point temperature astro camera - one that you can perform proper calibration on
- a guide system - unless the mount can operate unguided (even then, I like the idea of guiding).
If you are anywhere near serious about going into AP, or rather DSO AP - the mount comes first. There is simply no question about that. Guiding / no guiding will depend on intended targets and the rest of the gear. If you shoot wide field - with a short FL / large pixel setup - and you do exposures that are relatively short, then in principle you don't need to guide. Mind you, you don't need to guide even if this is not the case, but your star shapes will suffer and you'll likely discard some of the subs because they are badly distorted. Under the "must" category I would say there is one definite item: you must not waste any time under the stars - so you want a setup that works (as little fiddling as possible), and you don't want to discard any of your subs. When it comes to DSO imaging - SNR is everything, so minimize noise, maximize signal, and one of the ways to maximize signal is to get enough time under the stars. We can discuss things further if you provide us with some pointers - like budget, and maybe intended targets / the type of images you aim for.
  24. Yes indeed - there are a couple of differences between sampling an LSF in a spectrograph and planetary imaging. You pointed out one significant difference - finding line wavelength, or line width, is of concern in spectroscopy, whereas in planetary imaging we are interested in the image alone. Both wavelength and line width are sub pixel features, and pixel size (and phase) have a significant impact on the precision of measuring those features. Another difference is LSF vs Airy PSF. The Airy PSF is a band limited signal: there is a definite and hard cut off point in the frequency domain imposed by the physics of the system. As such, it is a perfect fit for the Nyquist criterion for band limited signals. The LSF, on the other hand, can vary in shape, is often dominated by seeing influence (and hence Gaussian/Moffat in nature) and possibly "clipped" by the entrance slit. Such a function is probably not going to be band limited, particularly an LSF whose FWHM is quite a bit larger than the aperture's Airy disk diameter. The authors raise this concern in the abstract, here is a quote:
Aliasing happens when you don't have a band limited signal, or when you sample below twice the maximum frequency. It is an artifact of signal restoration where high frequency components duplicate in the same way all frequency components duplicate; for a band limited signal these "harmonics" are out of band and can therefore be safely removed, because we know the signal is band limited, whereas with a band unlimited signal you don't know, and copies of the high frequency components overlap with the sampled signal, causing aliasing artifacts (see the small numerical example below). In the case of spectroscopy I certainly believe that a higher sampling rate is beneficial, as discussed by the authors of that paper, and to get the exact sampling rate needed one should consider the shape of the LSF and pixel blur in relation to the required feature detection (line CWL and widths, etc ...).
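The aliasing effect itself is easy to demonstrate numerically - once sampled, a tone above the Nyquist limit is indistinguishable from a lower-frequency one (frequencies here are arbitrary example values):

```python
import numpy as np

fs = 10.0           # sampling rate in Hz
f_signal = 9.0      # tone above the Nyquist limit of fs/2 = 5 Hz
t = np.arange(0, 2, 1 / fs)

sampled = np.sin(2 * np.pi * f_signal * t)
alias = np.sin(2 * np.pi * (fs - f_signal) * t)   # expected alias at 1 Hz

print(np.allclose(sampled, -alias))   # True: the sampled 9 Hz tone looks like a 1 Hz one
```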
  25. Let's do a comparison to see what sort of numbers we get and if there is a match between the two. My proposal is to go for 2.4 pixels per Airy disk radius. The Airy disk radius is 1.22 * lambda / d, while the Gaussian approximation of such an Airy pattern has a sigma of about 0.42 * lambda / d. So the number of pixels per Gaussian sigma will be 2.4 * 0.42 / 1.22 = ~0.826. Sigma and FWHM are related by a factor of 2.355, so pixels per FWHM should be ~1.95. This is close to my recommended value for deep sky imaging, which uses the "discard 10% of the most attenuated frequencies" approach and a Gaussian approximation of the star profile - the value derived there is 1.6 pixels per FWHM. However, it is quite different from the recommended sampling rate of 3-6 pixels per FWHM found in that paper. I'll have to look a bit deeper into that paper to see if there is a particular reason why the authors recommended those values, or if it's a case of me simply being wrong somewhere.
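The arithmetic above, spelled out:

```python
# Airy radius and Gaussian-sigma approximation, both in units of lambda/D
airy_radius = 1.22
gauss_sigma = 0.42

px_per_airy_radius = 2.4
px_per_sigma = px_per_airy_radius * gauss_sigma / airy_radius
px_per_fwhm = px_per_sigma * 2.355

print(f"{px_per_sigma:.3f} px/sigma -> {px_per_fwhm:.2f} px/FWHM")   # ~0.826 -> ~1.95
```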