Everything posted by vlaiv

  1. It could be that ZWO is no longer accepting customer orders because they have stopped production, and the above batch was ordered by TS some time ago. Anyway, if you want that camera, I guess it's worth checking again after 9 days whether it's in stock, or, as suggested above, looking at other vendors' offerings.
  2. According to TS, it looks like they will be getting a batch in a few days: https://www.teleskop-express.de/shop/product_info.php/info/p8515_ASI178MMC-Cooled-Mono-CMOS-Camera---Chip-D-8-82-mm.html
  3. Well, there is potential use even in LP. These devices have quite a broad spectral response, up to 1000nm wavelengths. They "convert" all wavelengths into one particular output - a green display, or, for "broadband" types, white light. This means that you can observe Ha/Hb/OIII/SII emission nebulae without restriction. You can use the new multi-pass NB filters - like duoband or triband - without worrying about human eye sensitivity in a particular range, since the device "converts" all those wavelengths into the part of the spectrum where our eyes are most sensitive. For stellar/galaxy type targets, you can use the NIR part of the spectrum with NIR filters, otherwise only used for imaging, since the eye can't detect wavelengths above 700nm on its own. The targets that will probably see the least benefit in LP are reflection-type nebulae. This agrees with Stu's recent report on using NV gear. But those prices you found are seriously steep. I managed to find a couple of sources of image intensifier tubes via Alibaba - a few Russian manufacturers - but again no prices were given. These are earlier-generation devices - Gen 2+ or Gen 3 - but they have good specs (~60lp/mm, SNR of about 20) and larger diameters, like 37.5mm or even larger models - which is good for astronomy use compared to the standard 16 and 18mm - but again no prices are listed and they are aimed at military usage.
  4. One of the reasons for this thread - could we find affordable gear to bring NV to the masses?
  5. Just had my first NV moment! Researching further, it's safe to say that CMOS + AMOLED technology is not there yet. The display side of things is lacking - it turns out that the smart watch market is an interesting one for small round displays, and many are available today, but unfortunately the resolutions offered are too low. Most displays can be classified as 300-400dpi. Even the highest-resolution displays nowadays are still below 1000dpi, let alone the 3600dpi needed. To test what the image would be like with current displays, I took a 32mm eyepiece, unscrewed the bottom and put it against a smartphone display. I even loaded an M51 mono image I took some time ago to get a proper "feel" for it. It does indeed work - the image can be seen in sharp focus, but pixels are obvious. I remember someone using this technique to judge EP quality - actual pixels can be observed when the EP is used against the display. It does, however, feel quite "real" in terms of observing. Given proper resolution, I would be really happy with such an NV eyepiece - consisting of a CMOS sensor and a proper AMOLED or similar display. Back on the original topic, @alanjgreen brought to my attention that this has been covered before, and indeed, this post is rather informative: It seems that image intensifier tubes can be used as a starting point for constructing a true astronomical NV eyepiece / filter. These are the "hearts" of NV devices currently in use. I wonder if such items could be purchased at retail, and what the price would be. The manufacturer mentioned for the EU market does indeed have these listed on their website, but currently only military applications are envisioned and these are not sold at retail. For phosphor screens, adequate resolution is listed at 64lp/mm, which translates to ~3200dpi, so the above calculation is quite good - we need at least 3000dpi displays to be able to project a natural-looking image.
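The 64lp/mm figure converts to display dpi like this - a minimal sketch, assuming the usual Nyquist convention of two pixels (one light, one dark) per line pair:

```python
# Convert a phosphor screen's resolution in line pairs per mm to the
# display pixel density (dpi) needed to match it.
MM_PER_INCH = 25.4

def lp_mm_to_dpi(lp_per_mm, px_per_line_pair=2.0):
    """One line pair needs at least two pixels (one light, one dark)."""
    return lp_per_mm * px_per_line_pair * MM_PER_INCH

print(round(lp_mm_to_dpi(64)))  # -> 3251, i.e. the "~3200dpi" above
```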
  6. I think that you are quite right about the growing EAA/NV potential. I'm not sure that the AP practitioner base will shrink - people do get hooked on it - but given the cost and involvement, over time many people wanting to start "taking pictures" of the night sky will turn to EAA (live stacking) rather than full-fledged AP. Both NV and live stacking need simplification / cost reduction and user friendliness to move forward at speed. At the moment I think NV is hampered by high price and also by availability and legislation. Some countries prohibit the use of these devices, because they are often associated with firearms - in either a military or a hunting role. Would a dedicated astro NV device be exempt from such laws?
  7. I see a couple of "problems" that need to be addressed first with the above approach. 1) Should it be an eyepiece, or a more flexible device like a "filter"? Now that @Paul73 mentioned Daystar - much like a Quark / Quark combo. The problem I'm seeing with this is that the phosphor screen is going to be a very "spherical" light source, so I'm not sure regular eyepieces will be able to cope with that - we all know that EPs have trouble handling fast optics. On the other hand, having integrated EP optics at the EP end would reduce the flexibility of such a device. 2) This is in essence an analogue device, and I'm wondering what the QE of such a device would be - given the photoelectric element and of course the electron multiplier. If we look at the above diagram for one type of electron multiplier with holes - they seem to be rather sparse and the overall "QE" can't be that high. I also wonder about the "read noise" of such a device, or rather the electron multiplier's thermal noise. Is it comparable to modern CMOS sensors? If we replace the photoelectric element with a CMOS sensor, the phosphor screen with some other type of monochromatic screen that gives off light in the green part of the spectrum, and the electron multiplier with an electronic processing unit that scans the CMOS sensor and produces output on the screen - would it still be the same device? In principle it would. Maybe even more sensitive than NV. Probably bulkier? I wonder if there are screens with pixels small enough to produce a natural image after "EP amplification" - I guess we can calculate this from human angular resolution and apparent field of view. Let's say we have something like 60 degrees AFOV, and human angular sensitivity is about 1'. That means that we need at least 3600 pixels across the diagonal for the image to look natural. Putting that into a one inch field gives a 3600ppi display - not sure such things exist at the moment.
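The back-of-envelope display calculation above can be sketched as follows, using the same assumptions (60 degree AFOV, ~1' eye resolution, one inch field):

```python
def required_pixels(afov_deg, eye_res_arcmin=1.0):
    # resolvable elements across the field: AFOV in arcminutes / eye resolution
    return afov_deg * 60.0 / eye_res_arcmin

def required_ppi(afov_deg, field_diameter_inch=1.0):
    # squeeze those elements into the given physical field size
    return required_pixels(afov_deg) / field_diameter_inch

print(required_pixels(60))  # 3600.0 elements across the field
print(required_ppi(60))     # 3600.0 ppi for a one inch field
```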
  8. Due to recent participation in discussions on the status of NV, I had the opportunity to get to know a bit better what principle of operation NV devices utilize. The current state of affairs is to use a ready-made NV device together with an eyepiece. It is a form of the EP projection approach, because NV devices are made to be used stand-alone - much like a consumer point-and-shoot camera, it has an integrated lens. I deliberately used the above comparison, because I needed a mental image of how NV devices work in terms of optics, and it is analogous to using the EP projection technique with a point-and-shoot camera (or smartphone camera). The diagram on the wiki explains this very well: the device needs a collimated beam at its "input" because it has an integrated focusing lens. The next stage is a photoelectric device turning photons into electrons. These electrons hit an electron multiplication device that sends a stream of electrons onto a phosphor screen, which emits light. Light is then sent through a number of lenses that represent another "scope - EP" combination. What I'm wondering is whether the above device could be turned into a simple eyepiece. Or maybe an EP created using the same principle as above. This could potentially bring down the cost of NV equipment and, with the use of astro standards (like 1.25" / 2" barrels), improve ease of use as well. The focusing lens at the front is not needed, because the light beam from the scope is already focused. This would also imply that an eyepiece is not needed in front of an NV eyepiece of such a design. Does anyone have an idea what the size of the photoelectric device used at the front is? It would need to match the fully illuminated field of the scope for best utilization - it's a bit like the field stop size of a regular EP. There is also the matter of the "resolution" of the image, which needs to be matched to the eye's angular resolution. The electron multiplication device actually works as if it has "pixels"; here is a diagram from the wiki: this type of electron multiplier is used in Gen II and Gen III devices. The holes are 8um in size, but spaced at 10um. This effectively defines the "pixel QE" and sampling resolution of the NV device when coupled with a certain scope. Any ideas on this topic?
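From the 8um holes on a 10um pitch one can estimate the open-area fraction of such a microchannel plate. A rough sketch - the grid geometry here is my assumption (real MCPs typically pack channels hexagonally, which gives a somewhat higher fraction than a square grid):

```python
import math

def open_area_square(hole_d_um, pitch_um):
    # circular holes on a square grid: hole area / cell area
    return math.pi * (hole_d_um / 2) ** 2 / pitch_um ** 2

def open_area_hex(hole_d_um, pitch_um):
    # circular holes on a hexagonal grid; cell area = (sqrt(3)/2) * pitch^2
    return math.pi * (hole_d_um / 2) ** 2 / ((math.sqrt(3) / 2) * pitch_um ** 2)

print(f"square grid: {open_area_square(8, 10):.2f}")  # ~0.50
print(f"hex grid:    {open_area_hex(8, 10):.2f}")     # ~0.58
```

Either way, roughly 40-50% of incoming electrons land between channels, which supports the point above that the overall "QE" of the multiplier stage can't be that high.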
  9. I must confess that I don't really know the distinction between an analogue camera and a digital one, sensor-wise. I do understand that analogue cameras transmit their signal in analogue form, while digital cameras transfer their signal in digital form (both use voltage levels but with a different "protocol"), but is there any real difference in the sensor being used? If not, then we can't really say that there is a distinction between the two in astronomy usage, since it's only a different transport protocol being used - much like the difference between USB and FireWire, or maybe Ethernet, used to gather the signal from the camera for further processing.
  10. Hm, that is interesting. It does, however, make me wonder how it's actually working then. Since it's amplifying light by conversion to electrons, if the light beam is coming from an eyepiece, then it's collimated and at an angle (an amplified angle). The NV device must preserve this angle in the amplified light - this means that the light-emitting surface needs to know the direction in which it needs to emit light. It can't just be a little screen with pixels - no human can focus at such a short distance. Does using an NV device change the focus position for a given eyepiece? (vs the same EP without the NV device)
  11. I think people have some misconceptions about EAA and "processing". NV is like having an analogue sensor and an old-type cathode ray tube screen, small enough that an eyepiece is needed to view the image on it. EAA is the same thing in digital form - the sensor is digital, and the display is large and TFT/LCD type instead of CRT. EAA offers better control. Processing is nothing more than enhancing what is already there - a bit like using filters in traditional observing. NV signal amplification is a type of processing - linear processing. It even contains a form of integration, as the phosphor display shines a bit longer - it accumulates multiple electron hits into a single light output. There is nothing wrong with processing - after all, there is processing done in traditional observing - it's done by our brain. One never sees photon noise while observing. This is because our brain filters out this noise. Why do we see things better the longer we observe them? Our brain also uses "stacking" - or integration of the signal.
  12. No, not magic - physics! I've been doing a bit of research on the whole NV topic, as well as trying to figure out the root of all the sore feelings about NV being moved together with EAA into EEVA. I understand the sentiment that NV reports should be together with other observation reports, and I agree about the exposure part - many more people will get in touch with this idea and technology that way. On the other hand, I do understand NV to be a part of EEVA, and in my view EEVA only gains from such diversity. After all, I feel that anyone interested in EEVA accepts it not as an alternative to "traditional" observation (if there is such a thing), but rather as another tool in the observation box - or in this case, a bunch of tools, all relying on electronics to enhance the viewing/observational experience. Just to touch on the NV side of things, and in particular the comparison to live stacking / video astronomy: the same amount of light enters the objective of the scope, and no miracle is going to make it stronger - it's just about efficiently detecting that light and presenting it to the eye of the observer. Whether the amplifier is analogue or digital, I really see no difference. There is, however, a difference in how one perceives the observational experience, and in some sense NV is closer to traditional observing in this regard.
  13. I don't think this is necessarily so. There is a common theme to all branches of EAA, or EEVA, whatever the majority decides to call it. It is, after all, Electronically Assisted Astronomy. We can share experiences of being able to see deeper than would be possible without the aid of electronic devices. It is still a live activity, requiring the observer to interact with equipment in order to see objects (either at the eyepiece or at a computer screen), and it is visual, although photographic evidence can be taken with all of the diverse activities that we call EAA - be that a picture with a smartphone, an image processed out of a live stack session, or maybe a screenshot from a video feed. @PeterW Live stacking does not need to be "delayed" - it can be live as well - stacking video at 30fps would certainly qualify as a live feed, and it does not even need to be stacked - video astronomy just uses video cameras and a feed to a monitor for a live view.
  14. I'm sorry to hear that some people feel that NV is somehow being brushed aside. Maybe it even is - I have no idea if that is the case and personally never had that impression, but then again I did not participate in much of the NV discussion - I just read a couple of reviews and saw numerous images taken with smartphones, which I found really interesting indeed. On the other hand, I would like to point out that EAA and NV are in essence the same thing, unless I'm again misunderstanding how NV devices work - you still need a battery to get the image out of it, right? It does not do any sort of magical amplification of light; it has to be some sort of sensor, some sort of signal amplification and some sort of projection screen.
  15. I'm somewhat confused by all of this. Does "traditional EAA", either in the form of video astronomy (old video cams at the eyepiece) or the live stacking type, fall under the EEVA definition? Also, @GavStar and @alanjgreen, this is a genuine question, not trying to provoke an argument or mock the subject or whatever - why do you feel that NV is more "visual astronomy" than, for example, live stacking? If I understand things correctly, both NV and live stacking use a light sensing device, a form of signal amplification and a light emitting device to enable the observer to look at the object with their own eyes. Granted, an NV device is a self-contained / compact one, while camera / cable / laptop with screen is not as compact, but both systems in essence provide the same thing. The experience will differ, but does the principle of operation outlined above differ as well?
  16. No, sensitivity does not play a part in that. Here we are talking about splitting one very large exposure into smaller ones. You can look at it that way - instead of doing 1 grand exposure lasting multiple hours, you break it into shorter ones and add those up. All things that are important to the final result add up with time - signal adds up: the more you image, the more of it there is. Light pollution behaves the same - the longer you expose, the more light pollution you accumulate. Thermal noise is also the same - the more you image, the more it builds up. Everything except the read noise - that is the same regardless of exposure length. A 10 minute sub will have the same read noise as a 1 minute sub. And that is the difference - the more subs you take, the more times you add read noise. The level of read noise determines how much it will impact the final result - the lower the read noise, the less impact it will have compared to other sources. Most often read noise is compared to LP noise - look at the triangle explanation - but other noise sources also participate. With narrowband imaging you eliminate most of the LP - this is why you need to go longer in NB: read noise becomes an important factor. The ASI1600 has very low read noise compared to other sensors. Most CCDs and DSLRs have it in the 7-9e range; some low read noise models have it at 4-5e, but that is still about 3x that of the ASI1600 (and other CMOS models). With DSLRs it's often recommended to go for ISO800 or similar instead of ISO100 - this is because CMOS sensors tend to have lower read noise at higher gain settings.
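The "read noise added once per sub" point can be sketched in a few lines - independent noise contributions add in quadrature, so the total read noise in a stack grows with the square root of the sub count. The e- figures below are illustrative only:

```python
import math

def stacked_read_noise(read_noise_e, n_subs):
    # read noise is injected once per sub; independent noises add in quadrature
    return read_noise_e * math.sqrt(n_subs)

# Illustrative comparison: a ~1.7e low-read-noise CMOS with many short subs
# vs an ~8e CCD with fewer, longer subs over the same total time.
print(round(stacked_read_noise(1.7, 60), 1))  # ~13.2 e total
print(round(stacked_read_noise(8.0, 10), 1))  # ~25.3 e total
```

So even with six times as many subs, the low-read-noise sensor injects less total read noise into the stack.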
  17. There is no definite value that one should use here. It comes down to when you can consider the read noise contribution too small to matter. The best way to "visualize" this is with a right angle triangle: the total noise is the hypotenuse, and the read noise and LP shot noise are the two sides, since independent noises add in quadrature - c = sqrt(a^2 + b^2). When the two sides are equal (a = b), the hypotenuse is clearly longer than either side, so both contribute. But when the LP shot noise side becomes much larger than the read noise side (a << b), the total noise (hypotenuse) comes very close to the LP shot noise side alone (c ≈ b) - the read noise then has almost no impact on the total; it's dominated by LP noise.
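The triangle picture in numbers - a minimal sketch, with made-up noise values in e-:

```python
import math

def total_noise(read_noise_e, lp_noise_e):
    # hypotenuse of the right angle triangle: quadrature sum of the two sides
    return math.hypot(read_noise_e, lp_noise_e)

# equal sides: the hypotenuse is clearly longer than either side
print(round(total_noise(3, 3), 2))   # 4.24
# LP noise 5x the read noise: total is within 2% of LP noise alone
print(round(total_noise(3, 15), 2))  # 15.3
```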
  18. If it's 21.3, that would be considered a fairly dark sky in terms of imaging. You would benefit from longer exposures then.
  19. Or when one is using a very long focal length / high resolution (and maybe binning afterwards in software). Even in a red zone bordering on white, I benefit from subs of a couple of minutes at 0.5"/px (this gets binned afterwards, as it is oversampling). Skyglow also gets "diluted" over a large number of pixels, so per-pixel LP noise is smaller in a given time when imaging at high resolution.
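The "dilution" effect sketched: sky background per pixel scales with the pixel's area on sky, i.e. with the square of the pixel scale in "/px, so finer sampling means less skyglow (and less of its shot noise) per pixel in the same exposure. The flux number below is invented for illustration:

```python
def sky_per_pixel(sky_flux_per_arcsec2, arcsec_per_px):
    # a pixel covers (arcsec_per_px)^2 of sky, so background scales with its square
    return sky_flux_per_arcsec2 * arcsec_per_px ** 2

print(sky_per_pixel(100, 1.0))  # 100.0 e per pixel
print(sky_per_pixel(100, 0.5))  # 25.0 e per pixel - 4x less skyglow per pixel
```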
  20. What is your light pollution like? Do you know the average SQM reading for your site (or maybe data from lightpollution.info)? I think the numbers you mention are good for average to strong LP (120s). Only if you have very dark skies does it make sense to go longer.
  21. That really depends on various factors. For any given setup and conditions, longer subs will always produce better results, but the relationship is not straightforward. What happens is: going from 1m to 2m will have a significant impact, going from 2m to 3m will have a noticeable impact, going from 3m to 4m will be barely noticeable, and going from 4m to 5m is not going to produce any perceivable difference. The above numbers are arbitrary and serve just to show that there is no linear dependence between sub length and improvement - at some point the improvement starts to rapidly fall off until it reaches undetectable levels (meaning you can't tell the difference in SNR by eye alone and it has virtually zero impact on image quality as perceived by the human eye). The thing is, the above numbers depend on many factors - focal length of the scope, aperture, light pollution levels, etc. With higher sampling resolution (smaller "/px) - increase sub length. With darker skies - increase sub length. With larger aperture - increase sub length. When you have a combination of those factors it's not easy to say without calculations (like using a lower sampling rate while moving to darker skies, or other combinations). As for lum vs RGB, that one is even harder to tell. I usually do it equally for each filter - meaning the same time for L, R, G and B. Some people use equal time for L as for R, G and B combined, and split R, G and B equally. In theory, given a limited time budget there is an optimum split - but it is very hard to calculate, and one would need much more information than is available prior to imaging (like target brightness in each band - you don't know that until you image the target). One thing that can be useful: don't be afraid of long exposures if your mount/scope system supports them (guiding and tracking are up to the task). You can always take a few short exposures at the end to blend in whatever you saturated in the long ones.
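The diminishing returns described above can be sketched with a toy SNR model: fixed total integration time, only the sub length changes, and read noise is paid once per sub. All e-/s rates here are invented; only the shape of the curve matters:

```python
import math

def stack_snr(sub_len_s, total_s=3600, signal_rate=0.2, sky_rate=2.0, read_noise=1.7):
    # fixed total time split into total_s / sub_len_s subs;
    # signal, sky and their shot noise scale with total time,
    # read noise is added once per sub
    n_subs = total_s / sub_len_s
    signal = signal_rate * total_s
    noise = math.sqrt(signal + sky_rate * total_s + n_subs * read_noise ** 2)
    return signal / noise

for t in (60, 120, 240, 480):
    # each doubling of sub length gains less than the previous one
    print(t, round(stack_snr(t), 3))
```

Longer subs always win, but each doubling buys less, which is exactly the 1m/2m/3m/4m progression described above.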
  22. Check that the ND3 filter is properly installed in the wedge before you use it. The easiest way to do this is to look down the wedge without it being attached to the scope - point it at bright daylight and it should be very, very dark. A CPL or linear polarizer should be placed at the eyepiece. Light from the wedge is polarized, and the addition of another polarizing filter lets you adjust the amount of light reaching your eye by rotating the eyepiece. So if you screw a CPL filter into the eyepiece that you are going to use to observe the Sun and put that eyepiece in the wedge - by unclamping and rotating it you can adjust the brightness of your view for best contrast, then tighten the EP clamp again and continue to observe.
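The rotating-polarizer adjustment follows Malus's law: the transmitted intensity goes as cos^2 of the angle between the wedge's polarization axis and the filter's. A minimal sketch with illustrative angles:

```python
import math

def transmitted_fraction(angle_deg):
    # Malus's law: I = I0 * cos^2(theta)
    return math.cos(math.radians(angle_deg)) ** 2

for a in (0, 30, 60, 90):
    print(a, round(transmitted_fraction(a), 3))  # 1.0, 0.75, 0.25, ~0
```

So a quarter turn of the eyepiece sweeps the view from full brightness to (nearly) fully blocked.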
  23. I think that can easily be solved with a submerged pump - there is no way for air to get in once you flush all the air from the tubing.
  24. I obviously did not read your post carefully enough - you mentioned water cooling indeed. Most camera coolers use very smooth-running electronics cooling fans, like the ones for computers. This is because of noise and vibration, as they are mounted on the cameras. There are industrial grade fans designed to be resilient - for example the ones used in kitchen extractor fans - that can operate under steam and at higher temperatures (and I'm sure you can find ones that work at lower temperatures as well). These usually have higher levels of vibration and noise due to slack - a design decision to prevent them from seizing at higher temps. Maybe you can use one of those, but it would involve quite a bit of DIY. There is also the option of heat pipes - you can source them from aftermarket CPU coolers - which can be used for both air and water cooling based solutions.