Everything posted by vlaiv

  1. Yes, be careful about those as they are not very suitable for EEVA / live stacking. This is the same as a regular full HD web camera with a 1.25" attachment. The issue with such cameras is that they deliver a full HD video stream at USB 2.0 speeds - which means they use heavy compression and can't record raw data. Such image/video compression causes artifacts, degrades image quality and does not let you exploit stacking properly (which needs raw data to work best). Here are examples - the first done with a modified web camera (which cost me half the price of that HD eyepiece - I just added a piece of 1.25" (or rather 32mm OD, sanded down a bit) PVC pipe as a nose piece): vs This one was done with an ASI120 equivalent (QHY5IILc). Same telescope, same capture and processing style, less than a month apart - different camera - both are the best samples achieved. Due to compression, detail is missing in the first, and when I tried more wavelet processing I could clearly see JPEG-style artifacts popping out in the image.
  2. Exactly - so you rotate it 90° and you get more space left/right - if that suits your target better. First try those orientations and only use "diagonal" if you can't fit your target in up/down or left/right orientations.
  3. Military / hunting night vision gear like this: it is used to amplify even small amounts of light. It is placed on a regular eyepiece and amplifies light that you would otherwise struggle to see...
  4. Not in the same sense - electrically powered dew shields are not EEVA either. You are still hit by the original photons from the target, regardless of the fact that the mica crystal filter needs to be kept at a certain temperature in order to operate on band.
  5. Well, at first it was just called EAA, and at that time I would have agreed with you, but it was subsequently renamed to EEVA - precisely to include night vision devices, which are electronic in nature. In the same way that you are watching a computer screen - or the small display in devices like the eVscope - with a night vision device you are looking at a phosphor screen and not the actual target. Does it require electricity to be able to observe (not for tracking - for the actual observation)? Then it is EEVA.
  6. This is true for refractors and SCTs/MCTs/RCs - but not necessarily for Newtonians and hyperbolic astrographs that mount the camera on the side. With such scopes, orientation depends both on camera orientation with respect to the OTA and on rotation of the OTA with respect to the rings - and it is much harder to figure out the orientation before you take a test image. You can only be certain that you'll align the sensor with RA/DEC if you align it to the OTA and put the OTA in one of four positions with respect to the dovetail bar - top, bottom, left or right. Even then it is really tough to figure out whether the longer side of the sensor is aligned with RA or with DEC.
  7. If you don't have an issue with framing and you can fit the whole target in the frame at many orientations - choose either landscape or portrait orientation. There are several reasons why orienting the camera sensor with respect to the RA/DEC axes is beneficial:
- If you have a slight issue with guiding / tracking and your stars come out somewhat elongated, the direction of elongation will tell you where the issue is. Also, if star elongation is horizontal or vertical, the eye does not notice it as much as when it is at an angle.
- It is much easier to repeat such an orientation than an arbitrary angle.
- When you want to align your sensor to RA or DEC, you can check the orientation with a single exposure. Start an exposure with a bright star in the frame and slew the telescope in either RA or DEC. You will get a star trail in the frame. Is it horizontal or vertical (depending on whether you chose landscape or portrait)? You are properly aligned. Is it at an angle? You need to turn your camera more.
- When doing mosaics, it is easier to calculate the RA/DEC of each panel centre (although decent mosaic planning software will handle any orientation).
  8. The ISO (gain and other settings) at which flats are taken has nothing to do with the settings of the other subs, except for flat darks. Those two need to match in everything (the flat darks are just taken with the scope covered). Duration of flat exposures will depend on several factors, the most important being to get a good exposure that is in the linear range of the sensor's response and has strong enough signal. A good rule to follow is to get the histogram peak at around 2/3 to 3/4 of the histogram. With colour cameras, make sure all three peaks are in the histogram and that the rightmost peak is hitting 3/4. The other thing to worry about is the presence of a mechanical shutter. If your camera has a mechanical shutter, you want longer flat exposures (a dimmer light panel). This is to avoid the gradient caused by the moving shutter when the exposure time is not long enough in comparison to the shutter opening/closing time. Other than that, you can use whatever exposure length suits you. I use exposures in the 2-5ms range for colour filters, as I have a very strong flat panel and a CMOS sensor with a rolling electronic shutter (not a mechanical one).
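A minimal sketch of the histogram-peak check described above, assuming the flat is already loaded as a 16-bit numpy array (the synthetic frame below just stands in for a real flat):

```python
import numpy as np

def flat_peak_fraction(flat, max_adu=65535):
    """Return the position of the flat's histogram peak as a fraction of full scale."""
    hist, edges = np.histogram(flat.ravel(), bins=256, range=(0, max_adu))
    return edges[np.argmax(hist)] / max_adu

# Stand-in for a real flat frame loaded from file
flat = np.random.normal(45000, 800, (1000, 1000)).clip(0, 65535)
frac = flat_peak_fraction(flat)
print(f"peak at {frac:.0%} of histogram")  # aim for roughly 2/3 to 3/4
```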
  9. When using such filters with an OSC camera you are really just recording blue and red colour. There is little difference between the Hb and OIII wavelengths, as one is 486nm and the other 500nm. In an ideal world here is what you would get: these are the colours in the RGB colour space that are closest to the actual red of Ha/SII (there is no difference in colour between the two - SII just seems a little darker to us), the navy blue of Hb and the teal of OIII. However, due to the raw colour space of OSC cameras and the absence of colour calibration, the colours will often be a bit skewed with respect to the above, and there might even be less distinction between Hb and OIII - just a bluish colour. There is no way with these filters to separate the actual signals: Ha and SII blend into red without any chance of separating them - and often Hb and OIII do the same, creating a sort of blue/teal tone, and that is it. If you want to achieve the look of the Veil nebula on other nebulae, you'll need to process the blue, red and green channels separately, as the OIII signal is often much fainter than the Ha signal and needs to be boosted considerably.
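To illustrate the separate-channel processing mentioned at the end, here is a rough numpy sketch that stretches each channel independently; the data is synthetic and the percentile choices are illustrative, not a recommended workflow:

```python
import numpy as np

def stretch_channels_independently(rgb):
    """Stretch R, G and B separately so a faint OIII (blue/green) signal
    is not buried under a much stronger Ha (red) signal."""
    out = np.empty_like(rgb)
    for c in range(3):
        ch = rgb[..., c]
        lo, hi = np.percentile(ch, (5, 99.9))
        out[..., c] = np.clip((ch - lo) / (hi - lo + 1e-9), 0, 1)
    return out

# Synthetic example: strong "red" (Ha) signal, faint "teal" (OIII/Hb) signal
rgb = np.zeros((64, 64, 3))
rgb[..., 0] = 0.8 * np.random.rand(64, 64)    # Ha-dominated red channel
rgb[..., 1] = 0.05 * np.random.rand(64, 64)   # faint OIII in green
rgb[..., 2] = 0.05 * np.random.rand(64, 64)   # faint OIII/Hb in blue
balanced = stretch_channels_independently(rgb)
```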
  10. Many people feel similarly, but the important thing to understand is that EEVA is not trying to remove or replace traditional observing. It is just another way of having fun under the stars, and if it's not your cup of tea - that is perfectly fine. You should not feel pressured to try it if you don't feel like it, and many people enjoy both visual and EEVA. I know I do - I enjoy both imaging and visual and am really happy to play around with and think about EEVA as well. I don't consider myself primarily visual or primarily an imager or whatever - I just enjoy the activities that interest me.
  11. I have an idea that I would like to share. It is far from usable, but with a bit of effort it could turn out to be EEVA on a budget for the masses. I have already fiddled with this idea in a different form - using the afocal method for EEVA. In that instance I was calculating the reduction factor for dedicated astronomy cameras, and I'm yet to test it out with an ASI178, a Mak102 with a 32mm eyepiece and a 12mm C/CS mount lens. In this instance we would be using a phone as the EEVA imaging device. Adapters to attach a phone to an eyepiece are readily available and cheap - they can even be 3D printed. Let's do some basic calculations to see what sort of results we might expect.
I started with this image: this is a hand held Astro Essentials F/4 32mm guide scope with a 32mm GSO Plossl and a Xiaomi A1 phone (phone also hand held). The image is of course my neighbour's brick wall - but it prompted me to think about the focal length of phone camera lenses. What sort of focal length / sensor size and pixel size are we talking about, and how does that translate into arc seconds per pixel? It turns out that things match pretty well.
Simple Plossl eyepieces have an AFOV of about 50-52°. Most phone cameras are made to match roughly a 35mm-equivalent lens - a general wider-field lens for photography - which gives about a 63° FOV. Here is the first tool - a FOV calculator: https://www.pointsinfocus.com/tools/depth-of-field-and-equivalent-lens-calculator That means the phone FOV will be a pretty good match for the Plossl AFOV - which is a good thing, as we will be able to utilize most of the field of view.
The next step is to find the camera specs for your particular phone model. My current one is a Xiaomi A1. At GSMArena - https://www.gsmarena.com/ - you can find the specs for your model. This model has two sensors: one is a 26mm lens equivalent with 1.25µm pixel size and 1/2.9" format, the other a 50mm equivalent with 1.0µm pixels (no mention of sensor format). For the time being we are only interested in the equivalent focal length. This is not the true focal length, as 26mm with a 1/2.9" sensor would give an extremely small FOV. According to the FOV calculator above, a 26mm-equivalent lens has a 79.5° FOV - larger than the 50° of the Plossl, so there will be some vignetting - and it shows in the cropped image above. Here is what the image at the eyepiece looks like in full format when the phone is carefully placed (above it is held at some distance, so it is smaller): it looks like the eyepiece FOV is about two thirds of the camera FOV (by diagonal). That matches pretty well the 79.5° : 50° ratio that the calculation gave us. I measured the size of the eyepiece FOV in the original image and the diameter is ~3070px. The camera is 4000x3000, so the diagonal is 5000px. 80° : 50° = 1.6 and 5000 : 3070 = 1.63 - an excellent match between the two.
This gives us a way of determining the sampling rate. At the moment it is 50° / 3070px = 0.01628°/px, but remember we don't have magnification yet, as the eyepiece is not placed on a telescope. I'll be using a Mak102 to test this out, and with 1300mm / 32mm Plossl it will give x40.625 magnification. So we need to divide the above sampling rate by the magnification: 0.01628 / 40.625 = 0.0004°/px = 0.024'/px = 1.44"/px - with a usable 3000px in diameter (which can be further binned x2 or x3). (A worked version of this calculation follows at the end of this post.) What actual FOV will it give? Surprisingly nice - almost 1.3 degrees of sky. Compare that to something like a 130PDS and ASI224, often used as an EEVA platform, with a ~1.2"/px sampling rate. The only thing left to do is to figure out how to use the phone for EEVA stacking.
I can't speak for the iPhone line, but Android phones have had something called the Camera2 API since 2015. It allows raw camera data to be captured - exactly what we need for EEVA. The problem is that phone manufacturers decided it is an excellent selling point and don't enable it on lower priced models - it is often reserved for flagship models - although Google's specification meant it as a generally available API. In fact, most phones are capable of it but have it purposely disabled in software. Mine is similar: it is capable of supporting the Camera2 API but it is disabled, and I need to root the phone in order to enable it. Luckily I don't have to recompile or patch the system itself, so I will get to test it. This is really annoying, as I got this model because it is part of the Android One initiative - it has stock Android and should support most Android features, and it is used by developers because of its compatibility. I think the best course of action would be to develop a small client-server system: an ASCOM (or INDI) driver that runs on the computer and accepts network connections, and a small client app on the phone that operates the camera and streams the result over the network to the server application, which would emulate a physical camera device and let ASCOM / INDI compatible software use the resulting images. Well - that is the idea.
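For reference, the sampling calculation from the post above, spelled out in a few lines (all constants are the ones quoted in the post):

```python
# Afocal sampling-rate estimate from the numbers in the post above
eyepiece_afov_deg = 50.0      # Plossl apparent field of view
eyepiece_fov_px   = 3070.0    # measured eyepiece field diameter on the phone sensor
scope_fl_mm       = 1300.0    # Mak102 focal length
eyepiece_fl_mm    = 32.0      # Plossl focal length

magnification = scope_fl_mm / eyepiece_fl_mm               # ~40.6x
deg_per_px_no_scope = eyepiece_afov_deg / eyepiece_fov_px  # ~0.0163 deg/px
arcsec_per_px = deg_per_px_no_scope / magnification * 3600

fov_deg = arcsec_per_px * eyepiece_fov_px / 3600           # usable circular field

print(f"magnification: {magnification:.1f}x")
print(f"sampling rate: {arcsec_per_px:.2f} arcsec/px")     # ~1.44"/px
print(f"usable field:  {fov_deg:.2f} deg")                 # ~1.2-1.3 deg
```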
  12. Sharpcap is one example of good EEVA software for DSO. In fact, I would say it would be my go-to software for EEVA as it allows for both:
- live stacking, which is basically EEVA for deep sky objects
- live view from the camera, which is a very basic version of planetary stacking.
In both cases there are features I would like to see implemented. For deep sky observing I would like to see:
- Plate solving integrated - after you start a live stack, I would love the image to be plate solved in the background so I can click an object and get at least basic info on it: coordinates, type, catalogue name - the sort of thing you get in Stellarium when you select an object.
- Automatic processing of the image with advanced algorithms - there are plenty of advanced algorithms that people use when processing images, so why not have them applied automatically to the live view (with a few sensible controls): colour calibration, smart denoising / sharpening, good automatic stretch based on noise statistics - so you always have the best image without too much noise (or a couple of presets - nice, some noise, show it all).
- Sampling rate adjustments / binning with zoom and pan controls. Imagine you have a camera with 5496 x 3672 pixels (well, you don't have to imagine, right?) and you can choose to view the image at 1374 x 918 - most zoomed out / least magnification, but best SNR, as the software automatically bins the image to that size; or at 2748 x 1836 with a 1300x900 viewport you can pan around - medium magnification, so you can move around and examine things in more detail; or at the full 5496 x 3672 - highest magnification, again with a 1300x900 window to pan around even more. Add fractional binning and you can have multiple zoom levels with different levels of noise. You start zoomed out and, as the image builds up and improves, you can magnify it to see smaller and smaller detail (a minimal binning sketch follows after this post).
- Multiple outputs: so you can work at the laptop and have the live view displayed on a projector or streamed over the internet to be shared with friends / family.
- Multiple inputs: maybe you have two telescopes side by side and want to use one mono and one colour camera at the same time.
- Dithering - some people have good mounts and would like to dither randomly between exposures. This is used in imaging to reduce noise, as fixed pattern noise is spread around and shuffled. On less capable mounts this happens naturally because tracking is not as good and the image slowly drifts (the same goes for Az mounts, where there is field rotation) - but it is better to control the process and make it truly random.
For planetary I have already outlined what should be incorporated - basically the lucky imaging workflow while still retaining a live feed and the ability to set the output fps (real time at 20-30fps, near real time every 10-20s, slow-mo every minute or two).
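The binning sketch referenced above - a minimal numpy version of integer software binning (fractional binning would need resampling and is left out); the frame size matches the sensor mentioned in the post:

```python
import numpy as np

def bin_image(img, factor):
    """Average-bin a 2D image by an integer factor, cropping any remainder."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

frame = np.random.rand(3672, 5496)      # stand-in for a full-resolution frame
overview = bin_image(frame, 4)          # 1374 x 918  - most zoomed out, best SNR
medium   = bin_image(frame, 2)          # 2748 x 1836 - pan with a 1300x900 viewport
```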
  13. Out of interest - what was wrong with attaching the phone to the telescope? After participating in this discussion I took some interest in the topic of using mobile phones as a platform for EEVA, and while reviewing a little finder/guider scope that had just arrived, I realized that a quite decent image can be had with a mobile phone attached to the eyepiece properly. Here is an image that I snapped hand held - both the finder scope and my mobile phone. It looks decent (that is the brick wall of my neighbour's house).
  14. Yes, ideally you would want to do it with an application that has the ability to enhance the view for you. As an example - the human eye "records" at around 30fps, which is equivalent to a 33ms exposure. Any seeing influence is averaged over this period and we see a blurred image (as we often do when observing visually). Ideally, a planetary EEVA application would still capture at lucky imaging speeds - more like 5-6ms rather than 33ms - and would let you choose:
- Enhanced real time view: the app takes 5-6 successive frames, combines them (align + stack), does a bit of sharpening (a small amount, or noise would amplify significantly) and a bit of denoising, and displays a live feed at the speeds we are used to visually - 20-30fps. This would somewhat lessen the atmospheric impact but would still be considered real time. (A rough sketch of this rolling stack follows after this post.)
- Extra enhanced view: the app takes many more successive frames and does the same processing - stacking, sharpening, denoising - but "deeper", to create a more stable and better looking image, updating the view every few seconds. This would let you examine details more but would still allow you to follow changes in "real time" - like Jovian moon/shadow transits and similar events that happen over dozens of minutes.
Of course there would be a "rec" button that would allow you to save the raw data and process it even better afterwards. At the moment, the primary issue I see with EEVA in this department is the lack of dedicated software with these capabilities - but as long as people are interested, software support will get developed.
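The rolling stack referenced above, as a rough Python sketch. It assumes frames arrive as a stream of 2D numpy arrays from whatever capture API is in use, and skips frame alignment for brevity (real frames would need registration before stacking):

```python
import numpy as np
from collections import deque

def enhanced_live_view(frames, stack_depth=6, sharpen_amount=0.5):
    """Yield a lightly sharpened running average of the last few frames."""
    buffer = deque(maxlen=stack_depth)
    for frame in frames:
        buffer.append(frame.astype(np.float32))
        stacked = np.mean(np.stack(buffer), axis=0)
        # simple unsharp mask: subtract a 3x3 box blur built from shifts (wraps at edges)
        blurred = sum(np.roll(np.roll(stacked, dy, 0), dx, 1)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
        yield np.clip(stacked + sharpen_amount * (stacked - blurred), 0, None)

# Synthetic noisy frames standing in for a 5-6ms capture stream
fake_stream = (np.random.poisson(50, (480, 640)).astype(np.float32) for _ in range(30))
for view in enhanced_live_view(fake_stream):
    pass  # display `view` at 20-30 fps in the capture application
```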
  15. I don't think it is clearly defined. It is more an "observing style" / "astronomy practicing style" than anything else. Imagine you have had two approaches to astronomy so far:
- Without computers. This style has been used for centuries - it consists of observing, sketching and various measurements with analog devices (clocks, angle measures, etc.). Many aspects of this style are still practiced in what we call visual astronomy - either just observing for joy, or doing more serious work like timing double star or variable star periods - all with analog means.
- With computers. This has grown into mainstream professional astronomy, where most of the time astronomy is practiced without being near a telescope at all. It is all robotic - images are taken or other data gathered via surveys and then analyzed "offline", months after being recorded. Most imagers end up with a similar style of work - robotic sheds / observatories where you program a sequence for image acquisition and later process your data.
EEVA is a sort of mix of these two styles - it provides the means for the "analog" approach, but with the use of electronics. It is in real time / connected to the sky and the equipment, but you still use electronic devices to enhance whatever you are doing. It can be image intensifiers / night vision that allow you to see deeper visually, or it can be a computer and camera that do the same except you are not at the eyepiece - but it is still enhanced and real time. A nice side of EEVA is that it can be practiced in groups. Sure, you can observe in a group, but it is still a somewhat isolated experience - everyone has their own experience at the eyepiece even if all are observing the same target. With EEVA you can put up a screen / projector and the image captured in real time can be viewed by multiple people at the same time. It becomes a bit more of a shared experience. Does this help?
  16. That one is actually a rather interesting story - and not yet settled. It is Mizar of the two that is the visual double (in fact there are more stars involved - Mizar is a double of two spectroscopic doubles, and Alcor is itself a spectroscopic double). Alcor and Mizar indeed form a naked eye pair (though I believe they are an optical pair) - but Mizar is also a double star visually (a real double). Splitting Alcor and Mizar by naked eye is not much of a feat, but splitting Mizar itself with the naked eye certainly is. I've heard the story that it was used in Arab armies as a test for exceptional vision. Given that the components are separated by about 1/4 of an arc minute, it is really on the threshold of vision for someone with exceptionally sharp eyesight. I once read a discussion that emphasized the desert as the setting for these tests - a desert can have exceptional seeing and transparency, and it is just possible to split them visually if one really has good vision.
  17. You named the main two - and you are not too bothered by either. Focal reducers do just that - reduce the focal length. Field flatteners flatten the field and remove aberrations further from the optical axis, and sometimes they also act as focal reducers. The FF/FR for the ED80 does both - it reduces the focal length, and hence increases the FOV, and also fixes optical issues that you would otherwise crop away. There is one more thing that is important if you are only introducing a FF/FR into the optical train and keeping both the scope and camera as they are: resolution decreases and "speed" - the signal to noise ratio achieved in a given time - increases. A different way of thinking about it: you can get an equally good image in less time, but at a lower sampling rate.
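A small worked example of that trade-off, assuming the ED80's 600mm focal length with a 0.85x reducer and a hypothetical 3.75µm-pixel camera (the reducer factor and pixel size are illustrative, not from the post):

```python
# Effect of a 0.85x focal reducer on sampling rate and "speed"
focal_length_mm = 600.0        # ED80 native focal length
reducer_factor  = 0.85
pixel_um        = 3.75         # hypothetical camera pixel size

def sampling_arcsec_per_px(fl_mm, pix_um):
    return 206.265 * pix_um / fl_mm

native  = sampling_arcsec_per_px(focal_length_mm, pixel_um)
reduced = sampling_arcsec_per_px(focal_length_mm * reducer_factor, pixel_um)
speed_gain = 1.0 / reducer_factor**2   # photons per pixel scale with (1/reduction)^2

print(f"native:  {native:.2f} arcsec/px")    # ~1.29"/px
print(f"reduced: {reduced:.2f} arcsec/px")   # ~1.52"/px
print(f"~{speed_gain:.2f}x faster to reach the same per-pixel SNR")
```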
  18. A rather simple thing, really - you look at a binary pair of stars and if you see them as two separate stars, then you have managed to split them. In fact there are nuances in how well separated they appear - to achieve a clean split you need at least a dark line between them. This is usually done at high power (as for planetary and lunar observing). Here is a good diagram. There are some other terms as well, but for me it is the point when you are confident that there are at least two stars there. Depending on your aperture, you'll be able to split stars down to a certain separation. Good guidelines are the Rayleigh and Dawes criteria: one uses 138 (mm) or 5.42 (inch), the other is a bit tighter with 116 (mm) and 4.56 (inch). Say you want to know whether you should be able to split a certain pair of stars. Divide the number above by your aperture in mm or inches. A 90mm scope should be able to resolve, according to Rayleigh, 138 / 90 = 1.533" of separation (that is arc seconds). You then take Stellarium, find some double stars and note their separation - if it is larger than the above value, you can try to split the pair (you can also try those at or slightly below this value to see what happens). Just be aware that you need very good seeing for this, similarly to planetary and lunar observation. It is best if the pair is high in the sky, to minimize seeing effects.
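A quick helper for the Rayleigh and Dawes figures quoted above (aperture in mm):

```python
def rayleigh_limit_arcsec(aperture_mm):
    return 138.0 / aperture_mm

def dawes_limit_arcsec(aperture_mm):
    return 116.0 / aperture_mm

# 90mm scope, as in the example above
print(f"Rayleigh: {rayleigh_limit_arcsec(90):.2f} arcsec")  # ~1.53"
print(f"Dawes:    {dawes_limit_arcsec(90):.2f} arcsec")     # ~1.29"
```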
  19. For planets - that is an excellent option. You won't be able to fit the Moon, with or without the reducer, on the ASI120 - it will be just too zoomed in. You won't need a barlow on the planets - so that is a plus. Here is what you can expect to see at one time - green without the reducer, blue with the x0.5 reducer. This scope will be a much worse option for deep sky object EEVA / live stacking, even with a DSLR, due to too much "zoom in".
  20. I would recommend a second hand DSLR as a means to get into EEVA on deep sky objects. It is a much more fiddly proposition than getting a dedicated astro camera, as you need specialist drivers to make the DSLR (connected to your computer with a USB cable) act like a dedicated astro camera. It is also much cheaper, as you can get a second hand Canon 450D for less than £100, while suitable astronomy cameras start from £500 and upward (those that are less expensive will be the same as the ASI120 for DSO use - a very small field of view - so you might as well use the ASI120 for that as well). You also need to install quite a bit of support software - ASCOM and such. In any case, here is an ASCOM driver for Canon DSLR cameras: https://github.com/vtorkalo/ASCOM.DSLR/tree/master (there is a pre-built exe in the file list, so you don't have to build it yourself - there is also a link to a description of the install procedure).
  21. If planetary / real time is what you are after - then the ASI120 will be sufficient for everything except putting the whole Moon into the frame. Planets are really small and will all fit in the field of view of the ASI120 or a similar camera with room to spare. Unless you plan on expanding further into EEVA and trying to capture deep sky objects, the ASI120 will suffice. Maybe think of adding a 1.25" x0.5 reducer for when you are "looking" at the Moon. In fact, the two accessories I would recommend with the ASI120 for watching a live stream of the planets are: 1. a x2 barlow when observing anything but the Moon 2. a x0.5 focal reducer when observing the Moon https://www.firstlightoptics.com/reducersflatteners/astro-essentials-05x-1-25-focal-reducer.html With it you should be able to see the whole Moon in the frame:
  22. Have you ever seen a 12" telescope in person? Telescopes tend to be much larger in real life than in photos. Maybe the best option would be something like a 10" dob with goto? Large enough, but still small enough to be transported to darker skies for observing: https://www.teleskop-express.de/shop/product_info.php/info/p4338_Skywatcher-Skyliner-250P-Synscan---10-inch-f-4-7-GoTo-Dobsonian-reflector.html Or if you really want the 12" option - it is now in stock: https://www.teleskop-express.de/shop/product_info.php/info/p4337_Skywatcher-Skyliner-300P-Synscan---12-inch-f-4--GoTo-Dobsonian-reflector.html However, do understand that such a scope, assembled, weighs as much as a grown person - this particular one is about 60kg when assembled. Take a look at this video to get an idea of the size and bulk of this scope: https://www.youtube.com/watch?v=asel5OgDljM (this is the solid tube version - the collapsible version is easier to store and transport).
  23. Your initial premise is certainly valid and has been done before - yes, you can have the Virtuoso mount controlled via either the SkyWatcher dongle or a DIY Arduino (or similar board) that does the same for much less money. The choice of telescope is also good - I would go with the 102mm version instead of the 90mm, if it exists on the Virtuoso mount, as it has a larger aperture with almost no increase in focal length (1300mm vs 1250mm). For visual it will be good. I doubt you'll be able to do anything photographic with it apart from the Moon with a mobile phone and adapter. The Virtuoso mount is probably only good enough for visual tracking and probably does not have enough precision for photographic tracking. Even mounts costing more, like the AzGTI, don't have enough precision compared to mounts usually used for astrophotography. For example, the AzGTI has a stepper resolution of 0.625". The EQ5 has a stepper resolution of 0.287642" and the HEQ5 has twice that precision with 0.143617" per step (same as the EQ6). In all likelihood the Virtuoso will not have resolution at the AzGTI level. Indeed, I managed to find the specs for it on the FLO website, and this is what it says: DC Servo Motors, Driving Resolution 2.42 arc sec. The sidereal rate is ~15"/s, which means the mount will "tick" about 6.2 times per second, making 2.42" jumps instead of what should be smooth motion. The fact that it is an Az mount means it will have variable speed on both axes (unlike an EQ, which should stay put in DEC and only track at a constant rate in RA), and tracking will be very jumpy. This will create a lot of blur in longer exposure photography. If the AzGTI is capable of providing an imaging platform at about 3"/px, maybe the Virtuoso can be used at resolutions of about 10"/px or so - which is 50mm lens territory and certainly not 1300mm FL territory (or 1250mm with the 90mm model). In the end, I would advise you to consider this setup instead: https://www.firstlightoptics.com/sky-watcher-az-gti-wifi/sky-watcher-skymax-102-az-gti.html It works over WiFi out of the box. It offers a 4" telescope - decent aperture that will show wonderful views of the Moon and planets - and has a lot of potential for different kinds of imaging. You can get a second hand DSLR and lens and use the AzGTI mount in EQ mode. I've done that - you just need a wedge (a simple pan/tilt head can be used for this - I used a ball head before I got the regular Skywatcher wedge) and a counterweight (an M12 threaded rod and a simple weight is all you need). You can use it in EQ mode for EEVA with a DSLR as well - also called live stacking. This will even work in Az mode. You could try that with a phone as well. Think about that possibility.
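The tracking arithmetic above, spelled out; the 2.42" driving resolution is the FLO figure quoted in the post and the HEQ5 value is the one mentioned earlier for comparison:

```python
sidereal_rate_arcsec_s = 15.04      # apparent sky motion
virtuoso_step_arcsec   = 2.42       # driving resolution from the FLO spec
heq5_step_arcsec       = 0.143617   # for comparison

ticks_per_second = sidereal_rate_arcsec_s / virtuoso_step_arcsec   # ~6.2 Hz
print(f"Virtuoso: {ticks_per_second:.1f} steps/s of {virtuoso_step_arcsec} arcsec each")
print(f"HEQ5:     {sidereal_rate_arcsec_s / heq5_step_arcsec:.0f} steps/s "
      f"of {heq5_step_arcsec} arcsec each")
```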
  24. I'm not sure what you are asking here, but here is a breakdown of what happens. If there is read noise in the camera - the only noise source that is added per exposure and does not depend on exposure length - then a stack of a few longer exposures will have better SNR than a stack of many shorter exposures, both summing to the same total exposure time. This is because all other noise sources depend on time, and stacking them is no different than exposing for longer. The only one that does not behave like that is read noise: it has the same intensity whether you expose for 0.1s or 100s. Just how much difference there is between a few longer subs and many shorter subs totaling the same overall exposure time depends on a few factors, the biggest being the ratio of read noise to the other noise sources combined (or, in the simpler case, to the largest of them). Other factors include the stacking algorithms used - sometimes sigma clip produces better results than a regular average, and it works better when there are plenty of samples to establish the statistics. There are other things that play in favour of short subs - one of them being how well your mount guides. If your mount guides well for 5 minutes, that does not mean it can go all night long, as there are factors that act on longer timescales. What if you have differential flexure that only becomes obvious on exposures longer than 5 minutes? So guiding/tracking can and will set an upper limit. The second thing is wasted subs. What if something interrupts your imaging session - a bird landing on your scope, a sudden gust of wind, a neighbour turning on their light, an earthquake (yes, that does happen)? You can either lose 20-30 minutes of imaging or lose 4-5 minutes. This comes down to how often you expect the odd thing to happen. The choice of exposure length is ultimately a balance, and yes, you should go with as long an exposure as makes sense given all the above criteria.
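A back-of-the-envelope comparison of the effect described above; the signal, sky and dark rates are made-up illustrative numbers, only the structure of the formula matters:

```python
import math

def stack_snr(total_time_s, sub_length_s, target_e_per_s, sky_e_per_s,
              dark_e_per_s, read_noise_e):
    """SNR of a stack of (total_time / sub_length) equal-length subs."""
    n_subs = total_time_s / sub_length_s
    signal = target_e_per_s * total_time_s
    noise = math.sqrt(signal                         # target shot noise
                      + sky_e_per_s * total_time_s   # sky shot noise
                      + dark_e_per_s * total_time_s  # dark current noise
                      + n_subs * read_noise_e**2)    # read noise, once per sub
    return signal / noise

# One hour total, illustrative faint target under modest light pollution
for sub in (30, 120, 300, 600):
    snr = stack_snr(3600, sub, target_e_per_s=0.2, sky_e_per_s=1.0,
                    dark_e_per_s=0.02, read_noise_e=2.0)
    print(f"{sub:>4}s subs: SNR = {snr:.1f}")
```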
  25. Indeed it does: dynamic range per exposure is completely irrelevant. Olly already mentioned one way to deal with saturated stars - another is to take a very small set of very short exposures and use those for the saturated parts of the main image. These short exposures will capture the bright parts of the image with very good SNR (as those parts are bright and have plenty of signal). We increase the overall dynamic range of the image by stacking, so single-exposure dynamic range is really not important for the final result. That is why 12-bit cameras work every bit as well as 14-bit or 16-bit ones.
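Roughly how stacking grows dynamic range, in numbers; the full well and read noise values below are generic illustrative figures, not tied to any particular camera:

```python
import math

full_well_e  = 20000.0   # illustrative full well capacity
read_noise_e = 2.0       # illustrative read noise

def stack_dynamic_range_db(n_subs):
    """Ratio of the brightest unsaturated signal to the noise floor of the stack."""
    max_signal = n_subs * full_well_e                # saturation grows with N
    noise_floor = math.sqrt(n_subs) * read_noise_e   # read noise floor grows with sqrt(N)
    return 20 * math.log10(max_signal / noise_floor)

for n in (1, 16, 64):
    print(f"{n:>3} subs: {stack_dynamic_range_db(n):.1f} dB")
```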