Everything posted by vlaiv

  1. The fact that you have different size lanterns does not mean we can't still do the same. There is something called linear programming, or operational research in general: https://en.wikipedia.org/wiki/Linear_programming You create a volume of what is possible. Look up the smallest Chinese lantern that will actually fly. Similarly, do the same for the largest. That puts bounds on distances. We can compare that with possible speeds - again from very slow to very fast - and also with the brightness of the flame - from very faint to very bright. Some things are linearly dependent, some are quadratic in nature - like brightness versus distance. But in any case you will still be right - there is no way to prove what it really was. According to wiki: "A proof is sufficient evidence or a sufficient argument for the truth of a proposition" and I guess you can always decide that the presented proof is not sufficient.
  2. Bingo - it is an antigravitino trail that is deflected away from M31, as it is surrounded by dark matter!
  3. Well, sure, you can get cheaper - I just did not know how you feel about CMOS sensors and software binning. I have the ASI1600 and like it a lot, but if I were in the market for a similar camera today I'd probably go for the ASI294mm instead. It has better QE and it is a bit larger (23.1mm vs 21.9mm diagonal). Both will work well with 1.25" filters. The ASI294 can be "unlocked" to 2.3µm pixel size. If you bin that x3 you'll get 6.9µm pixel size, which is very close to what you have now. That will result in a 2760 x 1880 px camera. For use with the EdgeHD you can bin x6, for example - that will give you an effective 13.8µm pixel size, or 1.43"/px, and you will still have 1380 x 940 px - which is similar to the pixel count you have now. I use my ASI1600 on an 80mm F/6 apo and an RC8" scope. With the former I use it natively (with reducer / flattener) for 2"/px, while with the latter I bin either x2 or x3 for 1"/px or 1.5"/px - depending on what I'm imaging. For example: the 80mm at 2"/px at 100% zoom (M13), and the RC8" at 1"/px at 100% zoom (although that image is more like 1.5"-2"/px).
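     As a quick sketch of the arithmetic behind those numbers (Python; I'm taking the EdgeHD focal length as roughly 2000mm here - adjust to your actual scope):

         def sampling_rate(pixel_um, focal_mm, bin_factor=1):
             # arc seconds per (binned) pixel = 206.265 * pixel size [um] * bin / focal length [mm]
             return 206.265 * pixel_um * bin_factor / focal_mm

         # ASI294 "unlocked" to 2.3um, binned x6, on ~2000mm of EdgeHD focal length:
         print(sampling_rate(2.3, 2000, bin_factor=6))                     # ~1.42 "/px
         # ASI1600 (3.8um) on an 8" RC at 1600mm, binned x2 and x3:
         print(sampling_rate(3.8, 1600, 2), sampling_rate(3.8, 1600, 3))   # ~0.98 and ~1.47 "/px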
  4. However you put it, you'll need to spend about £500 or even more to get an "all-round scope". One option would be to get two scopes: the ST102 and the Mak102. The former will be very good for DSO and wide field but very poor for lunar or planetary, while the latter will be excellent on lunar, planetary and double stars. Both are short scopes, so they are easily packed / transported and will sit nicely on a small AltAz mount like the Az5. You can opt to go for a 102/1000 type scope - but that is a compromise. You'll be limited in how wide you can go and you'll still have obvious residual CA on high power planetary views. A better option is to go for an F/7 ED doublet 102mm scope - but that by itself is already at £500 or more. Btw, I would avoid the AZ3 mount if at all possible. It is not as good for astronomy. Even some other alternatives are likely to get you close to that budget. Maybe something like this: https://www.firstlightoptics.com/ts-telescopes/ts-photon-6-f5-advanced-newtonian-telescope-with-metal-tube.html + https://www.firstlightoptics.com/alt-azimuth-astronomy-mounts/skywatcher-az4-alt-az-mount.html That is 6" of color free aperture that is still relatively portable, on an AltAz mount. It will do wide field / DSO as well as planetary rather well. If 6" is too much, you can put a 5" on it as well (in the form of the SW 130PDS) or go for this version: https://www.firstlightoptics.com/reflectors/sky-watcher-explorer-130ps-az5-deluxe.html Just keep in mind that the above scope has only a 1.25" single speed focuser. On a really tight budget, and if you really want a refractor, take a look at this one: https://www.firstlightoptics.com/evostar/sky-watcher-evostar-90-660-az-pronto.html With a 70-80mm aperture mask it will be a decent planetary scope - not as good as a 4" but decent.
  5. Never claimed that we can be certain about it - just wanted to point out that fun things can be done with analysis of this image. Let's take the Chinese lantern as an example. I highly doubt that it was the cause of said trail. It needs to be far enough away that it is not resolved as an object. Here we need some info on the equipment used - like pixel size and focal length - or we can just assume some sampling rate for argument's sake. Let's say that the image is at 2"/px. The bright part of the Chinese lantern must then be smaller than 2". If we examine a common image of the suspect, we can safely say that the light emitting part is about 10cm in size. At what distance do we need to place 10cm so that it subtends less than 2" of arc? The answer is about 10.3 kilometers away. Now that we have a rough distance, we can measure the length of the trail in arc seconds, convert that to an actual physical trajectory and ask: can a Chinese lantern traveling at "normal" speeds (moderate wind speed, for example, or just regular Chinese lantern ascent speed) move that much in a 3 minute exposure? The next thing we can ask: given the time and place of the recording, we can get the direction of M31 and hence see if it is actually possible for a Chinese lantern to burn at that place. Say, for example, that we determine that M31 was near zenith. Can a Chinese lantern flame burn 10km up in the atmosphere, given the temperature and air pressure (oxygen availability)? How about brightness? We can calibrate the recording against any of the stars in the image and calculate how many photons a candle would produce at the given distance (we can estimate the brightness of the lantern in candles) and compare that to what we measure as the photon count on a segment of the trajectory. This was all I was proposing. In the end we just utilize the good old scientific method - we form a hypothesis and test it against data. The hypothesis that best matches the data will be the most likely explanation for what was captured in the image. The fun part is coming up with ways for that simple image to produce useful data.
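     For the distance figure, a minimal sketch of the small-angle arithmetic (Python):

         def distance_for_angular_size(size_m, angle_arcsec):
             # distance at which an object of the given physical size subtends the given angle
             # (small-angle approximation: angle [rad] = size / distance, 1 rad = 206265")
             return size_m * 206265.0 / angle_arcsec

         print(distance_for_angular_size(0.1, 2.0))   # ~10313 m - a 10cm source must be ~10.3km away to stay under 2"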
  6. I think that there is enough data in that single sub to put some serious constraints on the object. We have the total sub duration. We can measure the light intensity. If we know the time of the sub and the location, we can determine the possible altitude range if the object was in the atmosphere. We can also limit the distance under the same assumption. We can then calculate the G forces involved, depending on the possible speed of the object and the curvature of the trajectory. Given the size of the trail, we can assume that the object was unresolved - hence we can get a size / distance relationship. There is a lot of fun "detective" work to be done here.
  7. Ok, see, that is the thing: if you add a telecentric lens to your TS scope you won't get "more zoom" without stars also being larger. The size of stars in the image has to do with the sampling rate and how well it is matched to the detail present in the image. Detail in the image will depend on several factors combined - size and quality of aperture, seeing and mount performance. Once you reach the limit set by detail in the image (and that is measured by star FWHM), any additional "zoom" will simply make your stars look large / bloated. That limit, for most of us on most days of the year, is around 2"/px for scopes up to 4", 1.5-1.6"/px for 6" scopes and 1.4"/px for 8" scopes. On a very good night you can go down to 1-1.2"/px with an 8" scope - but that is rare. With the 100mm F/5.8 and 6.45µm pixels you have ~2.3"/px - that is a very good match, right there close to the limit of what looks good. It is no wonder you are happy with the star size at that combination. With 2000mm FL and 6.45µm pixel size you have 0.67"/px - that is at least twice as fine as is realistically needed - it is like zooming your image x2 or x2.5 past what looks good. No wonder you feel that the stars are large. 2" version - https://www.firstlightoptics.com/barlows/explore-scientific-2x-barlow-focal-extender-2.html 1.25" version - https://www.firstlightoptics.com/barlows/explore-scientific-2x-3x-5x-barlow-focal-extender-125.html However, I would not try either of those unless someone can confirm that it will work with a quadruplet scope. I can't be certain that it will work properly with such a scope. It will work fine with a regular scope without a flattener, but I simply don't know if it will work with your scope at all. There are some alternatives for you to consider that might be better choices to get what you want. 1. Replacing your camera? One of the problems you have is that it is not feasible to bin your camera x2, as it has a low pixel count. Binning will produce a ~700 x 500 px image - and that is tiny by today's standards. If you get a camera with a larger sensor, maybe it will work better for you? Say you have a camera like the Atik 16200 with 6µm pixel size and 4500 x 3600 px. You can use such a camera in bin x1 mode on the TS 100mm for about the same resolution you have now - 2.13"/px - and you can bin that camera x2 when you use it on the EdgeHD to make it work at 1.24"/px, or even x3 to get 1.86"/px. Even if you bin x3, you'll still end up with a 1500 x 1200 px image - which is larger than you have now. 2. Add a scope that is about 5-6" with ~900mm of focal length. It can be a refractor - something like this: https://www.teleskop-express.de/shop/product_info.php/info/p6679_TS-Optics-PHOTOLINE-130-mm-f-7-FPL53-Triplet-APO-Refractor.html or maybe even a 6" F/6 reflector? (not sure if you'll like that, collimation and all)
  8. I'd say that a x2 telecentric lens will work better in that regard. Barlows push the focus point away from its normal position, and the magnification they provide also depends on their distance to the sensor. Telecentric lenses are better - they have the same amplification over quite a range of distances, and I think they minimally impact focus position (but that is something to be checked). Not sure how it will work with your 100mm F/5.8, as it is a quadruplet / Petzval-like design and it needs the sensor at an exact position. I've never tried that combination so I can't tell if it will work properly. By the way, I think the 100mm F/5.8 is very well matched with the Atik 414ex and its 6.45µm pixel size. Adding a telecentric lens will certainly make it oversample at 1.15"/px. You also need increased aperture to utilize a given resolution, and after some point seeing simply dominates and even large scopes can't resolve more. Why exactly are you struggling with the EdgeHD? Is it just the size of the scope or something else (tilt, spacing, poor looking stars)?
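     To illustrate how Barlow magnification drifts with sensor distance (thin-lens approximation; the 100mm Barlow focal length below is just an illustrative value, not the spec of any particular unit):

         def barlow_amplification(barlow_focal_mm, lens_to_sensor_mm):
             # thin (negative) lens approximation: M = 1 + d / |f|
             return 1.0 + lens_to_sensor_mm / barlow_focal_mm

         for d in (100, 125, 150):
             print(d, barlow_amplification(100, d))   # 2.0x, 2.25x, 2.5x - magnification grows with distance

     A telecentric extender, by contrast, is designed to hold its amplification roughly constant over such a range - which is the difference described above.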
  9. A 2000mm FL refractor is going to be just about that long - a 2 meter long scope. Why do you need such a large scope and how do you plan on mounting it? If you are struggling with the Edge, why do you think that a 2m refractor is going to be easier to handle / use? In any case, here is a frac that is similar in specs to the Edge: https://www.apm-telescopes.net/en/apm-lzos-telescope-apo-refractor-2282050-cnc-lw-ii Price - a real bargain at 41K euro. If you just want focal length, how about using a 5-6" apo with a Barlow lens? Why do you want two meters of focal length?
  10. My only concern is the different expansion coefficients of the mirror cell and the mirror itself. There is a reason why mirrors are not glued to their cells - it can produce the same effects as pressure on the sides. What are your thoughts on the silicone glue you used?
  11. Ok, so here it is in a bit more detail. The aperture surface determines how many photons are captured. That part is clear - an aperture x4 larger by surface means x4 more photons gathered (in reality that is not quite true because there is a central obstruction and losses in the optical train - like 94% reflectivity of each mirror, the transmission of glass and reflections at air/glass surfaces - but those are nuances). The other important bit is "effective pixel surface", and when I say effective I mean how much of the sky is covered by a single pixel. This is important because all photons coming from that part of the sky land on that single pixel and are put on the same "heap" - the more photons, the larger the heap and the stronger the signal - better SNR (all of this is per unit time - and we can't "compress" time, it flows the same for both scopes, so we don't mention it). In order to see what sky surface a single pixel covers, you take the sampling rate / imaging resolution, which for 3.72µm pixels at 550mm FL and 3.72µm pixels at 900mm FL gives, as you already mentioned: 0.85"/px and 1.39"/px. Then we simply calculate how much sky surface there is in a square of 0.85" x 0.85" and one of 1.39" x 1.39", and take the ratio of those surfaces: (1.39 x 1.39) / (0.85 x 0.85) = ~2.67 (without the x0.9 reduction it was 3.3). The fact that you are using an OSC sensor - a DSLR - does complicate things a bit, but yes, you are right - you should bin x2 (or use super pixel debayer) in each case. 0.85"/px is oversampling for an 8" scope and 1.39"/px will be oversampling for a 4" scope under most circumstances. You can and should bin both by x2 (or use super pixel mode) - but that won't change the speed ratio. What I'm saying is - imagine for example that you have a x1.1 instead of a x0.9 coma corrector, so you are working at 1100mm FL with the 8" scope. Now you have 1.4"/px for the Esprit and 0.7"/px for the 8" Newtonian. You bin the Newtonian data x2 but leave the Esprit data as is, and you get the same sampling rate / same resolution (the word resolution is used in many contexts, and here I use it to mean sampling rate and not total pixel count). In this case you end up with the same sampling rate and potentially the same level of detail captured (seeing dominates the difference in aperture), but the SNR will be different - and it will be different by exactly the difference in aperture surface, because the "pixel sky surface" will be the same (same sampling rate), so that ratio will be 1 and will not "contribute". Hence the expression for speed, "aperture at resolution" - because if you fix your resolution (again, sampling rate, not pixel count or other meanings of the word resolution), it is only the difference in aperture size that counts. Of course, there will be a difference in one more aspect - the FOV captured with the Esprit will be x4 larger because the focal length is half as long. If you want to capture that large FOV, the Esprit is actually not slower at all but equal. In order to capture that larger FOV with the 8" scope you'll need to do a 2x2 panel mosaic, and you can therefore only spend 1/4 of the time on each panel - so that cancels out the x4 light gathering surface. What small scopes are good for is wide field stuff. There they are not slower than large scopes - but if you want to match sampling rate and you can fit the whole target in the FOV with the larger scope, then the large scope wins. Makes sense?
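     If it helps, here is that "aperture at resolution" comparison as a small Python calculation (same numbers as above - Esprit 100ED at 550mm vs the 200mm Newtonian at 900mm with the x0.9 coma corrector, 3.72µm pixels on both):

         def sampling(pixel_um, focal_mm):
             return 206.265 * pixel_um / focal_mm   # "/px

         def relative_speed(aperture_a, focal_a, aperture_b, focal_b, pixel_um=3.72):
             # "aperture at resolution": aperture surface ratio times pixel sky-surface ratio
             aperture_ratio = (aperture_a / aperture_b) ** 2
             pixel_sky_ratio = (sampling(pixel_um, focal_a) / sampling(pixel_um, focal_b)) ** 2
             return aperture_ratio * pixel_sky_ratio

         print(relative_speed(100, 550, 200, 900))    # ~0.67 = 0.25 aperture ratio x ~2.67 pixel sky-area ratio
         print(relative_speed(100, 550, 200, 1000))   # ~0.83 - without the x0.9 reduction, as in the post below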
  12. No. With 100mm of aperture you will need x4 the imaging time to match the SNR of 200mm only if you image at the same resolution. Imaging resolution depends on pixel size, focal length and any binning used. Speed of the system can be described as "aperture at resolution". When comparing two setups, you need to compare them by the value obtained from the aperture surface and the "resolution surface" - or how much sky surface each pixel (real or binned) covers. Multiply the two to get a number that you can use to compare things. To put things in perspective - since you'll be using the same camera (the one from your signature?) - the Esprit 100ED has four times less aperture (by surface) than the 200PDS, but each pixel covers x3.3 more sky area (1000mm / 550mm, or 1.818181... squared). In the end the Esprit 100ED will have 3.3 * 0.25 = ~82% of the speed of the 200PDS.
  13. Probably because they have a new version: https://www.firstlightoptics.com/uv-ir-filters/baader-cmos-optimised-uvir-cut-and-l-filter.html But why do you need the 2" version? 1.25" is sufficient for a 23mm sensor diagonal. Also, no need to break the bank - maybe the ZWO 1.25" will suffice: https://www.firstlightoptics.com/uv-ir-filters/zwo-1-25inch-iruv-cut-filter.html
  14. Not having an IR/UV cut filter will make colors washed out / unsaturated (even if color is otherwise properly handled). This is because QE in the IR part of the spectrum tends to be similar for all three components of the bayer matrix - R, G and B - so each pixel gets the same value added to its RGB triplet, like adding some amount of white (1,1,1), and that desaturates colors. Add on top of that a non-linear stretch, which also tends to desaturate colors, and you'll end up feeling you don't have enough color in the image.
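     A quick illustration of that "adding white" effect, with made-up pixel values (saturation computed in HSV just to show the trend):

         import colorsys

         r, g, b = 0.60, 0.30, 0.10        # a reasonably saturated orange (linear RGB, illustrative values)
         ir = 0.25                         # equal "IR leak" signal added to every channel
         print(colorsys.rgb_to_hsv(r, g, b)[1])                  # saturation ~0.83
         print(colorsys.rgb_to_hsv(r + ir, g + ir, b + ir)[1])   # saturation ~0.59 - same hue, visibly washed out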
  15. It would be helpful to determine whether you are lacking inward or outward focuser motion. You need a bright star and a low power eyepiece to do that. If you insert your eyepiece and point the scope at a bright star (it needs to be bright, as the image will be out of focus and the light spread around), it should present a "doughnut" shape, probably quite faint. Now you need to determine the direction in which the diameter grows and in which it shrinks. Move the focuser in / out and see what happens to the size of this doughnut. If it shrinks when you move the focuser inward (eyepiece gets closer to the tube) and then stops when you can't move the focuser inward any more, then you are lacking inward focuser travel. If it shrinks when you move the focuser outward (eyepiece gets away from the tube) and then stops when you can't move the focuser outward any more, then you are lacking outward focuser travel. In case you are lacking outward focuser travel, you can solve it by adding an extension tube. Too long an extension tube can create a problem as well - it can move the eyepiece past the focus point, and with a long extension tube you can suddenly be in a situation where you don't have enough inward focuser travel. In that case, use a shorter extension tube. In the case that you are lacking inward focuser travel (without using an extension tube) - well, you'll need to modify your scope a bit more. You need to shorten the telescope tube itself and move the primary mirror closer to the secondary mirror. Look at this video to see how it can be done: https://www.youtube.com/watch?v=u_ufa_RcQaE You basically drill a new set of holes at the bottom end of the tube and cut away part of the tube itself (about 20-30mm). Just be careful to remove / protect the mirrors when doing that (the video shows this as well).
  16. You can bin in firmware - on camera, prior to download - which speeds up the download (as if that were needed) and reduces file size (this can actually be beneficial), or you can bin in software after calibration, or even after stacking, while the stack is still linear. Binning can scramble color information if not done properly. The software needs to be aware that it is binning raw color data, and then it will bin it properly - but simple "mono" binning will destroy the color information. Yes, you are right - the difference between hardware and software binning (CCD vs CMOS) is just the amount of read noise that you get. For CCD it remains the same, while for CMOS it increases by the bin factor (2x read noise for bin x2, 3x read noise for bin x3, etc ...).
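     The read noise scaling comes from summing independent pixel reads in quadrature - a minimal sketch (the 1.5e figure is just an example value, roughly what a modern low read noise CMOS camera delivers):

         import math

         def software_binned_read_noise(read_noise_e, bin_factor):
             # bin_factor^2 pixels are summed; independent read noise adds in quadrature,
             # so the result is read_noise * bin_factor (hardware CCD binning reads once, so it stays put)
             return math.sqrt(bin_factor**2 * read_noise_e**2)

         print(software_binned_read_noise(1.5, 2))   # 3.0e - x2 read noise for bin x2
         print(software_binned_read_noise(1.5, 3))   # 4.5e - x3 read noise for bin x3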
  17. Hi and welcome to SGL. In all likelihood your focuser is "too short" - which is a good thing, as it can be easily fixed by just using an extension tube. You can compare the two by measuring how far the EP holder sits from the focuser base. The 2" focuser is likely shorter / low profile. It is made that way to allow a DSLR to come to focus (a DSLR has 44mm of additional light path inside the camera body). Telescopes with a low profile focuser are often used visually with a 2" extension tube that places the eyepiece at the needed distance in order to reach focus. Try not putting the eyepiece all the way into the focuser and see if this lets you focus on a distant object. You don't have to wait for night and clear sky - you can try focusing on a distant mountain top or a tree far away. Just note: the closer the object is, the more outward focus travel it will need to reach proper focus.
  18. Actually, the main advantage is the same - an increase in pixel size and hence SNR per sub. The difference is only in how the read noise is treated, and read noise is dealt with by setting an appropriate sub length.
  19. You can bin OSC CMOS sensors as well - just keep in mind that you increase the read noise. Say you bin the ASI2600 x2 - it will no longer be a 1.5e read noise camera, it will be a 3.0e read noise camera, and that only impacts your exposure duration, as you want individual exposures to swamp the read noise. Oversampling will not cause any problems - except that it is a waste of SNR. As long as you can bin your data in software afterwards and adapt it to the actual seeing conditions (and pay attention to the read noise thing), you can recover the lost SNR. I happily use the ASI1600 with its 3.8µm pixel size on 1600mm of FL (RC8) - but I never try to process the image at native resolution - I always bin at least x2, if not more.
  20. I know, but making them round in software seems a bit like cheating
  21. It's not just the seeing. It is a combination of seeing, mount performance and the aperture used. If we compare a 72mm aperture and, say, a 200mm aperture by Airy disk diameter, you'll see quite a difference there. The first has a 3.56" Airy disk diameter while the second has 1.28". The first is comparable to 1.5" seeing alone, while the second corresponds to 0.5" seeing.
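     For reference, those Airy disk diameters follow from 2 x 1.22 x lambda / D - the figures above work out if lambda is taken at roughly 510nm (green light; the exact wavelength is my assumption):

         def airy_disk_arcsec(aperture_mm, wavelength_nm=510):
             # angular diameter of the Airy disk, 2 * 1.22 * lambda / D, converted to arc seconds
             return 2 * 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 206265

         print(airy_disk_arcsec(72))    # ~3.56"
         print(airy_disk_arcsec(200))   # ~1.28"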
  22. Odds are you are not going to get more detail with a 70mm scope. What is your average guide RMS on the EQ6R? In average / good conditions with 2" FWHM seeing and 0.8" RMS guiding, you can expect about 3.2" FWHM stars - which corresponds to a 2"/px sampling rate. With both cameras that is about 380mm of FL (unless you bin). That is about the best in terms of detail that you can achieve given those conditions and a 70mm scope. 840mm FL is of course overkill. A 200mm scope, given the same conditions, will allow for 1.75"/px. The ASI1600 at bin x2 will give you close enough - 1.57"/px. I'd go with that combination. If you feel that you'll get the detail you want, don't let my post above discourage you.
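     In case anyone wants to play with the numbers - this is roughly how those figures come about, assuming the three blurs add in quadrature, the aperture contribution is approximated by a Gaussian at ~550nm, and good sampling sits at about FWHM / 1.6:

         import math

         def expected_fwhm(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=550):
             guiding = 2.355 * guide_rms                                            # guide RMS -> FWHM
             airy = 0.98 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 206265   # Gaussian approx. of aperture blur
             return math.sqrt(seeing_fwhm**2 + guiding**2 + airy**2)

         for aperture in (70, 200):
             fwhm = expected_fwhm(2.0, 0.8, aperture)
             print(aperture, round(fwhm, 2), round(fwhm / 1.6, 2))   # ~3.2" -> ~2"/px and ~2.8" -> ~1.75"/px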
  23. Out of interest - you seem to want to do a 2x2 mosaic and plan on doing that with the 72ED and a x2 barlow? How about a simpler approach? Lose the barlow and do a regular single image?
  24. What you see published is the FPS for the whole sensor - which is not unreasonable given that you need to download 3840 x 2160 pixels, or about 8.3MP. The ASI178, for example, has 30 FPS for a whole-sensor read out - again, that is sensible as it is 6.4MP. In fact, if you do the math: 6.4 / 30 = ~0.21 and 8.3 / 39 = ~0.21. Same readout rate. You can however use 8-bit readout and / or ROI to achieve high FPS. I have no trouble doing that on the ASI178 - the capture software shows the expected FPS based on the selected bit mode and ROI. I believe the same to be true for the ASI485 - although ZWO did not publish expected FPS figures like for every other model (maybe they were in a hurry to release the camera and forgot to measure / update the website?), I think it is the same - use ROI, get high FPS. QE: you want as high QE as you can get for planetary imaging (well, for all imaging, but particularly for planetary imaging, as you are limited in exposure length). The ASI462 has high QE in the IR part of the spectrum. Say that you have an absolute QE of about 80-85%. The green peak in the part of the spectrum used for normal imaging (not IR) will be about 84% * 85% = ~71%. Half of your pixels will have a QE of 71%, and blue will be worse still, peaking at about 68% * 85% = ~58%. The ASI462 is surely a fine camera, but I would use it only if I were interested in IR above 800nm. If not, I think the ASI224 will be better (it surely is a proven performer).