Everything posted by vlaiv

  1. Not sure the EQ6 Pro is the main competitor to this mount. At 5 kg it is more geared towards mobile imaging? In that case, we are looking at a lightweight mount in that price bracket that is better suited for that role? How about this? https://www.firstlightoptics.com/computerised-goto-astronomy-mounts/ioptron-gem28-ec-german-equatorial-goto-mount-with-encoders.html It has encoders, so you don't need to guide in the field?
  2. In theory you could do that - but it's not stars only - everything gets a bit blurred by bilinear interpolation. That means stars are a bit fatter, but a bit of detail in the target (galaxy, nebula) is also lost. In my view, it is best to use advanced interpolation algorithms that produce the sharpest results. Denoising can later be employed selectively to deal with noise. To be honest - I have no idea. I guess people that were once interested in high-performance software are aware of different optimization techniques, but software developers in general might not be. Not that I'm aware of, but you can see some differences in a post on a related topic I made some time ago (it deals with types of interpolation when aligning images):
  3. I'm not overly impressed. A 20 kg capacity mount with 30" P2P for ~£2000 - I think I've seen that before At least it is lightweight at 5 kg.
  4. A UHC filter is probably the best / most versatile. There are other filters, like OIII, that are better on some targets - but UHC is the most useful overall. It is good for all emission-type nebulae.
  5. That really depends on your mount, but I'd say a good rule is that you need guide precision of at least 1/3 of what your mount is capable of. To cover all cases - you need guide precision better than, say, 0.1", as it is unlikely you'll be guiding at better than 0.3" RMS unless you have a Mesu or other high-end mount. To go from that number to pixel scale - you simply multiply by 16 (it is said that a centroid is precise down to 1/16th - 1/20th of a single pixel, so we take the upper estimate of that). You need 1.6"/px or less. With 1600mm of FL - that is not going to be a problem. With 3.75µm pixels - you can bin x3 and still be below that number. With smaller pixels - you can bin even more. In fact - for that OAG, I'd recommend going with 3.75µm pixel size and not smaller (like the 2.4µm of the 178 or the 2.9µm of the 290). That is also a nice pixel size for the planetary role, as it requires F/15 as the optimum F/ratio, and that is easy to get with "regular" scopes - like a x3 barlow for F/5, a x2.5 barlow for F/6, or a x2 barlow for scopes in the F/7-F/10 range (adjusted for distance so you end up with F/15). With some Maks - you are already at F/15 (like the Mak 150 and 180 and some Bresser offerings). Luckily, the two best planetary color cameras - the ASI224 and ASI385 - have that pixel size.
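The arithmetic above can be sketched in a few lines of Python (my own illustration; the 1/16 px centroid precision and 0.1" guide-precision target are the assumptions stated in the post, and 206.265 is the standard arcsec-per-pixel constant):

```python
# Guide-resolution arithmetic from the post (function names are mine).
# Assumptions: centroiding is good to ~1/16 px, and we want ~0.1" guide
# precision (1/3 of an assumed ~0.3" RMS mount performance).

def required_guide_scale(guide_precision_arcsec=0.1, centroid_fraction=16):
    """Coarsest pixel scale ("/px) that still resolves the wanted precision."""
    return guide_precision_arcsec * centroid_fraction  # 0.1 * 16 = 1.6 "/px

def pixel_scale(pixel_um, focal_length_mm):
    """Standard pixel-scale formula: 206.265 * pixel size / focal length."""
    return 206.265 * pixel_um / focal_length_mm

fl = 1600.0                              # mm, as in the post
native = pixel_scale(3.75, fl)           # ~0.48 "/px at native resolution
limit = required_guide_scale()           # 1.6 "/px
max_bin = int(limit // native)           # largest integer bin still at or below the limit
print(native, limit, max_bin)
```

With 3.75 µm pixels at 1600 mm this lands at roughly 0.48"/px native, so x3 binning (≈1.45"/px) stays under the 1.6"/px limit, matching the post.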
  6. I think a USB 3.0 camera can work on a USB 2.0 hub - it will only use USB 2.0 speeds for data transfer. That is how I use the ASI1600 and ASI185. The first has a USB hub, and the second is a USB 3.0 camera, but it can be used on a USB 2.0 port. @Newforestgimp Why is the ASI120mm not working for you in the OAG role? I'd consider the ASI385mc, which is not on your list. It is as good a planetary imaging camera as the ASI224 (it can be thought of as the "bigger brother" of the ASI224, as it has the same spec but more pixels), and it is big enough to be used with an OAG (in fact I have vignetting on my 8" RC and OAG on the similarly sized chip of the ASI185mc). Whichever camera you choose, when using an OAG you'll need to bin it, as it will be working at 1600mm of FL - and you don't need that much guide resolution. I bin mine x2 for a resulting ~1"/px guide resolution.
  7. For APP, I think the bottleneck is memory management under Java - or rather, automatic memory management on what are in general very memory-intensive algorithms. @vineyard Differences in results will be due to the algorithms used. The choice of interpolation algorithm can have a significant impact on the result. Bilinear interpolation will reduce noise, but it will also create softer stars - it is like adding a bit of blur to the image. Sophisticated interpolation algorithms add the least blur to the image, which results in the best stars but also the least noise reduction due to said blur. Another thing is the choice of stacking algorithm and normalization method. For example - a straight sum / average does not need normalization to be performed and is best for comparison purposes. Sigma reject needs normalized frames, and depending on the type of normalization and an often-changing light pollution gradient, it can reject too much data. To be most efficient, the normalization used needs to deal with LP gradients and align them. The choice of stacking method will also have a significant impact on the performance of the software. A straight sum / average should be the fastest, while sigma reject can incur quite a high cost in memory management, as one needs the complete set of samples for each resulting pixel value. Here some clever optimizations can work miracles (like not trying to stack the whole image at once but doing it a scan line, or several of them, at a time - depending on available memory). With Java software there is an interesting optimization that is often used when programming games (which need performance) - leaving memory allocated and reusing buffers to minimize memory management overhead.
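The scan-line idea above can be sketched like this (my own illustration in Python/NumPy, not how APP actually implements it; a single rejection pass over frames that are assumed to be already normalized):

```python
# Sigma-reject stacking processed a band of rows at a time, so memory use is
# bounded by (n_frames x rows_per_chunk x width) instead of the whole stack.
import numpy as np

def sigma_clip_stack(frames, rows_per_chunk=64, sigma=3.0):
    """frames: list of 2D arrays (already normalized). Returns the stacked image."""
    h, w = frames[0].shape
    out = np.empty((h, w), dtype=np.float64)
    for y0 in range(0, h, rows_per_chunk):
        y1 = min(y0 + rows_per_chunk, h)
        band = np.stack([f[y0:y1] for f in frames])   # shape (n, rows, w)
        mean = band.mean(axis=0)
        std = band.std(axis=0)
        keep = np.abs(band - mean) <= sigma * std     # per-pixel rejection mask
        # average only the samples that survive rejection
        out[y0:y1] = np.where(keep, band, 0.0).sum(axis=0) / keep.sum(axis=0)
    return out
```

A real implementation would iterate the clipping and reuse the band buffer between chunks (the "leave memory allocated" trick the post mentions), but the memory-bounding structure is the point here.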
  8. I have the 11mm and 6.7mm, and although ER is listed as being 15.5mm give or take - it is nowhere near that - at least that is my subjective feel. That is probably the only shortcoming of the ES82 11mm - the recessed eye lens, and the fact that I often touch the eye lens with my eyelashes and leave grease that later scatters light and needs to be cleaned off. Due to its construction, a 12mm Plossl somehow feels more comfortable to view through (smaller eye lens and no chance of ramming eyelashes into it), and it has probably around 8mm of ER.
  9. Well, if you get an EQ platform, then you can choose any of these: https://www.firstlightoptics.com/vixen-eyepieces/vixen-slv-eyepieces.html - excellent sharpness and great ER (20mm). The only drawback is the AFOV, but that is not an issue with a tracking mount. If you can stretch the budget, then of course check out these: https://www.firstlightoptics.com/tele-vue-eyepieces/tele-vue-delite-62-degree-eyepieces.html - wider AFOV and, of course, excellent sharpness and eye relief.
  10. ES82 and "longest possible eye relief" should not be used in the same sentence You say widest possible FOV - have you seen this: https://www.teleskop-express.de/shop/product_info.php/info/p9306_APM---Lunt-XWA-5-mm-110--x-treme-Wide-Angle-Eyepiece--2--1-25-.html
  11. Yep - and you don't even need to do it at night. Find something far away - like a church tower or power line mast or whatever - and figure out, during the day, the configuration that gives you the wanted magnification.
  12. There you go - good practice for measuring things with the microguide Find a distant object and a feature on it, and measure the angle with the microguide - then add the barlow and extensions until you get a x4-x5 angle on that same feature. That way you don't need to measure the focal length of the barlow - you can simply find the setting where it gives you the x4 or x5 magnification you are after.
  13. That is for sure, but I do wonder if we could make such a scope throw up any sort of image - like the moon at x15 magnification, showing at least the largest craters?
  14. A 3D print has a rather rough surface, and it needs to be sanded down to figure before the final polish
  15. Why? I really like the idea - a 100% 3D printed telescope, even the lens. It won't be a very useful telescope, but I find it an interesting project.
  16. You need to know the focal length of the barlow element The formula is: magnification = 1 + distance / focal_length You can determine the focal length of a barlow by measuring the distance at which it gives x2 magnification. Say your barlow has a 50mm focal length - then it will give x2 at 50mm distance. But if you put it at 100mm distance, it will provide x3 magnification, because mag = 1 + 100/50 = 1 + 2 = x3
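The barlow formula above is a one-liner; here it is as Python reproducing the post's 50 mm example (the function name is mine):

```python
# Barlow magnification: mag = 1 + distance / focal_length.
def barlow_magnification(distance_mm, barlow_fl_mm):
    return 1 + distance_mm / barlow_fl_mm

# A barlow with a 50 mm focal length:
print(barlow_magnification(50, 50))   # x2 at 50 mm spacing
print(barlow_magnification(100, 50))  # x3 at 100 mm spacing, as in the post
```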
  17. Yes, the math is correct. I have only one suggestion - apply the barlow to the focal length of the telescope rather than to the eyepiece. The calculation would thus be 2x1500 / 15 = 3000 / 15 = x200 This is because a barlow can also be used with a camera and in some other situations, and it extends the focal length of the telescope rather than shortening the focal length of the eyepiece (for example, it changes the F/ratio of the telescope - which can be good for some eyepieces that may work well at F/10 while not being as good at F/5). Another good rule of thumb - on most nights, the atmosphere won't really allow more than x150-200 regardless of what the scope is capable of. I have a 102/1300 Mak and I purchased a 6.7mm eyepiece to be my "high power" eyepiece for lunar. It turns out that I use the 11mm most of the time, and the 6.7mm just gives me too much magnification for my eyesight (the image is too soft).
  18. I agree with the SSD being the key component - so look into that first. Try different stacking software - maybe give Siril a go. APP is written in Java, and Java will eat up some of your computer's resources; it is not a champion of speed. Take a smaller set of subs and time Siril, APP and DeepSkyStacker, and see if there is a significant difference in speed between the programs. You may be surprised.
  19. Calibration frames will depend on your setup. If you have a cooled camera (set-point temperature), you can take a single set of darks - even after you've completed your imaging. You can take a single set of flats if you keep everything the same between nights - meaning you don't take apart your scope and camera; but if you do, you'll need a new set of flats for each evening. One set of flat darks should be OK - if you don't change the exposure length (and you should not need to). Flats are best taken at the end of the session - just don't touch anything like focus. Focus needs to be as it was during imaging (focused at infinity). As far as orientation goes - you can plate solve to get the exact orientation of your framing, but I prefer to orient in either portrait or landscape (with respect to RA/DEC). Even if you don't plate solve (which will give you the angle - in the above portrait / landscape case it will be 0° or 90°) - there is a trick to easily orient your sensor in the wanted direction. Find a bright star, center it, start an exposure, slew in RA, stop the exposure. Look at the image - the bright star should make a line / star trail. Is it horizontal (or vertical for portrait)? If no - rotate the camera, center the star and repeat; if yes - you are done, RA is aligned with horizontal (or vertical). After you have gathered all subs - calibrate each with its own set of calibration frames and then throw them all into a stack for the final image.
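The calibration step at the end can be sketched as arithmetic (a minimal illustration of my own, with my own function names - not any particular program's pipeline): each light is corrected with a master dark, and the flat is itself corrected with its matching flat dark before being normalized.

```python
# Standard calibration arithmetic: (light - dark) / normalized(flat - flat_dark).
import numpy as np

def master(frames):
    """Average a set of calibration frames into a master frame."""
    return np.mean(frames, axis=0)

def calibrate(light, master_dark, master_flat, master_flat_dark):
    flat = master_flat - master_flat_dark
    flat /= flat.mean()                  # normalize the flat to unity gain
    return (light - master_dark) / flat  # dark-subtract, then flat-field
```

Dividing by the normalized flat removes vignetting and dust shadows while keeping the overall signal level of the light frame.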
  20. I honestly fail to see why you need convincing. The darkest natural sky on earth is magnitude 22. Clark, whom you call upon, measured the brightness of M51 here: https://clarkvision.com/astro/surface-brightness-profiles/introduction.html It goes below magnitude 22 - in that crappy little image. Look at any other deep M51 image - the tidal tail is easily magnitude 26-27; that is a full 5 magnitudes, hence x100 fainter than the darkest skies on earth. Yet we image M51 and many other, fainter galaxies with ease. Just because an object is fainter than the sky does not mean it can't be imaged. That is not how it works.
  21. No point in arguing "true / not true" - light is additive in nature. It does not matter if the target is fainter than the sky - in fact, almost all targets that we image are fainter than the sky. The sky has a brightness of 22 mag / arc second squared even at the darkest locations on earth. We image targets that are fainter than that regularly. Say your sky produces 100,000 photons and your target produces 1 photon - that is 100,001 photons captured by the camera. That one photon won't be lost - it will be captured. Even if there is less than one photon per exposure - like 0.1 photons per exposure - that means that every tenth exposure (on average) will capture that one photon; when you average those exposures, you'll get 1/10 (one out of ten) = 0.1 again. Given enough time, you can image however faint a target you want, even from a Bortle 8 sky. The problem is that most people don't image for 1000+ hours - in fact many don't image for more than 10 hours - but you can pick up IFN from light-polluted skies as well.
  22. What is the sky fog limit? Here is an image of an SQM 26 signal taken in SQM 18.5 skies - yes, that is a signal 7 magnitudes fainter than the sky - yet captured without issues. The rest of the faint outer part of the galaxy is about SQM 24 - so still about 5 magnitudes fainter than the sky. It works for any signal and noise - no matter how bright the sky is - it comes down to SNR.
  23. Signal can be amplified very simply - just multiply the pixel values by a constant and you'll get a stronger signal. Say you have 0.000001 signal - well, multiply that by 1,000,000,000 and you have 1000 signal now - very strong!
  24. Apparently one can 3D print a lens both with SLA and with FDM clear filament. Sanding and polishing are still needed.
  25. This is simply not correct. It really does not matter how poor your skies are or how dim the target is. Whenever you quadruple your imaging time, you double your SNR. If you take a single sub and have a certain SNR - then to reach any SNR you like, you need to double that first SNR enough times. Simple as that. What higher QE gives you is a better SNR for that first single sub - but the rest of the story is the same no matter how good the camera is: quadruple the time to double the SNR. (To be precise - for any factor by which you multiply total imaging time - x5 more, x100 more, x10000 more - the SNR improvement is the square root of that: sqrt(5), sqrt(100), sqrt(10000). That is why x4 more imaging time gives a x2 SNR improvement, as sqrt(4) = 2.)
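The square-root rule above, as a trivial helper (function name is mine):

```python
# SNR scales with the square root of the total-time multiplier:
# x4 the time -> x2 the SNR, x100 the time -> x10 the SNR, and so on.
import math

def snr_after(snr_single, time_multiplier):
    return snr_single * math.sqrt(time_multiplier)

print(snr_after(1.0, 4))      # x2
print(snr_after(1.0, 100))    # x10
print(snr_after(1.0, 10000))  # x100
```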