Everything posted by vlaiv

  1. Yes, the exact workflow was: - Take the original image and blur it with the Airy pattern of a certain aperture (and I included a 25% central obstruction) - Resample each copy of the original blurred image to a certain sampling rate - Apply wavelets to each copy - Resize images to the same size for easier comparison
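The blurring step of the workflow above can be sketched as follows. This is a minimal illustration, not the original code: the aperture, wavelength, sampling, and input image are placeholder values chosen purely for demonstration, and the PSF uses the standard annular-aperture (obstructed) Airy formula.

```python
import numpy as np
from scipy.special import j1
from scipy.signal import fftconvolve

def airy_psf(size_px, px_scale_rad, aperture_m, wavelength_m, obstruction=0.25):
    """Intensity PSF of a circular aperture with a central obstruction."""
    eps = obstruction
    y, x = np.indices((size_px, size_px)) - size_px // 2
    theta = np.hypot(x, y) * px_scale_rad           # off-axis angle in radians
    u = np.pi * aperture_m / wavelength_m * theta   # dimensionless radius
    u[u == 0] = 1e-12                               # avoid division by zero at center
    # amplitude of annular aperture: full disc minus central obstruction
    amp = (2 * j1(u) / u - eps**2 * 2 * j1(eps * u) / (eps * u)) / (1 - eps**2)
    psf = amp**2
    return psf / psf.sum()                          # normalize to unit total energy

# e.g. 200 mm aperture, 550 nm light, 0.25"/px sampling (assumed values)
psf = airy_psf(64, np.deg2rad(0.25 / 3600), 0.2, 550e-9)

image = np.random.rand(128, 128)                    # placeholder for the original image
blurred = fftconvolve(image, psf, mode="same")      # the "blur with Airy pattern" step
```

From here each copy would be resampled to a different rate and sharpened with wavelets, per the steps listed above.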
  2. Using a threaded attachment certainly won't hurt; it can only improve stability. Do be careful about it - you will need a rotator as well, unless you feel comfortable rotating the scope in the rings when you want to frame an object (some scopes have a permanently attached dovetail bar, so they require a rotator to allow for camera orientation).
  3. Distortions to one side of the image could mean sensor tilt. It could be due to the camera, but also because of the way the camera is attached to the scope. The best way to ascertain this is to take a test shot - note how much distortion there is (which corner / side) and rotate the camera. If the same side / corner is affected - it is sensor tilt. If the aberration moves accordingly (you rotate the camera by 90 degrees and the aberration does the same - usually in the opposite direction) - it can be because of the attachment. If it's due to the attachment - you need to check whether the attachment is firm but tilted, or loose so that the camera sags under gravity. For this - take a test shot and then don't rotate the camera but rotate the scope - point to another part of the sky so that the camera is oriented differently with respect to the ground (west and east are usually good directions to test this). If you determine that you have a connection problem and it is loose - for example because you are using standard clamping by either screws or compression rings - think about upgrading to a threaded connection if your focuser supports that, or maybe upgrading the focuser itself. If you are using any sort of extension - check if it is causing sagging under gravity (too loose an attachment). If you have firm tilt - try to figure out which component is causing it - usually an extension ring / tube - and try replacing it. If you have sensor tilt - you need a tilt adjuster, since you are using a DSLR and it does not have integrated tilt adjustment. Hope this helps.
  4. I think that these are very interesting filters, but I also think that most people don't use them "properly". The best way to utilize them, in my view, is in conjunction with regular narrow band filters to form a sort of "LRGB" set for narrow band imaging (meaning mono camera). Such a tri-band or quad-band filter would act as luminance - having higher total SNR as it combines signal from all sources of interest in a single frame (reducing the impact of read noise, which is significant in NB imaging). Single narrow band filters can then be used for much shorter exposures, as they provide only "color" information, or rather separation of signal types. Here one can exploit the fact that targets have dominant emission(s) - usually Ha and one other (either SII or OIII) - this means that you can shoot only those stronger lines and calculate the remaining one (each strong component will have better SNR, and since subtracting stacks is in fact additional stacking, odds are that the third component will end up with better SNR than if shot directly with that filter).
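The subtraction idea above can be sketched as follows. The arrays here are made-up placeholders standing in for calibrated, aligned stacks expressed in the same flux units; real data would of course come from your stacking software.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)

# placeholder stacks (mean = signal level, std = per-pixel noise)
tri_band = rng.normal(5.0, 0.1, shape)   # Ha + OIII + SII combined "luminance"
ha       = rng.normal(3.0, 0.1, shape)   # strong line, shot directly
oiii     = rng.normal(1.5, 0.1, shape)   # second line, shot directly

# the remaining line falls out by subtraction; noise adds in quadrature,
# but each input stack has good SNR, so the derived channel can still beat
# a short direct exposure through the SII filter
sii_synth = tri_band - ha - oiii
print(sii_synth.mean())                  # ~0.5 for these placeholder levels
```

The key assumption is that the stacks are on a common flux scale; in practice that means matching exposure and calibration, or scaling the stacks before subtracting.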
  5. No, it's not about resolution / resolving power (yes, a 16 inch will have a small theoretical advantage even in poor seeing, but that is beside the point). Your post that I first answered stated that you can't see any advantage in going with a larger scope. I offered a distinct advantage of a larger scope - it will be faster for a given target sampling rate. If one is willing to sacrifice FOV (has a reasonably large sensor for the intended targets) and will bin the data anyway, then why not go for the biggest aperture one can handle?
  6. I was just pointing out that larger scope means more aperture, and as such has benefits for target resolution - it will collect more photons and will be faster.
  7. Same difference. Although the "speed" of the scope stays the same, "speed" is not really the speed of gathering photons. Best to think of it as "aperture" at a certain pixel scale. As you pointed out, both scopes will oversample on most cameras (pixel sizes) and there will definitely be some binning involved to get to the proper scale. If we consider that, then both scopes can be matched with a camera and bin factor to produce the same target sampling rate. 16" vs 12" at the same target sampling rate - more photons in the first case. Another important factor to consider with this type of reasoning is available FOV - longer FL will reduce FOV, so that should be taken into account when considering a scope.
  8. Indeed! I'm in the market for ~5mm-7mm eyepieces for planetary and simply love my ES82 11mm, but at the present time, given that the ES82 LER would cost me a total of about 185e (shipping, import duties, tax - you know, all the nice things government makes you pay) - that is a bit steep for me. Looking into the ES62 5.5mm though - that one I might be able to afford at this time, so if I pull the trigger - I will certainly post a review
  9. There is a lot to be gained by using a 16" F/4 scope (I'm guessing it is an F/4 scope because you mentioned 1600mm FL). Take for example the difference from my 8" RC. I use it with an ASI1600 - it's oversampling at 0.5"/px, but binning sorts that out, and I usually consider my images to be between 1.5"/px and 1.0"/px (x3 or x2 bin). A 16" F/4 scope will work at the same sampling rate as my 8" RC, but will have x4 the light gathering capacity. I would not mind having a scope that gathers x4 more light while keeping similar "properties" to my current imaging scope.
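A quick check of the numbers in this post, assuming the ASI1600's 3.8 um pixels and ignoring central obstruction: an 8" f/8 RC and a 16" f/4 have the same focal length, so the same camera samples both identically, while photon collection scales with aperture area.

```python
def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel: 206.265 * pixel size / focal length."""
    return 206.265 * pixel_um / focal_mm

# both scopes end up at the same focal length (8 * 25.4 * 8 = 16 * 25.4 * 4)
rc8_scale   = pixel_scale(3.8, 8 * 25.4 * 8)    # 8" at f/8 -> ~1626 mm
big16_scale = pixel_scale(3.8, 16 * 25.4 * 4)   # 16" at f/4 -> ~1626 mm
print(rc8_scale, big16_scale)                   # both ~0.48 "/px

# at the same sampling rate, photon flux scales with aperture area
light_ratio = (16 / 8) ** 2
print(light_ratio)                              # 4.0 -> x4 light gathering
```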
  10. Bump up ... Does anyone have any news on these now? I see them stocked again at TS - they were in stock at some point at the beginning and were pulled out of stock - in all likelihood related to the first faulty batch. They are back again, but I don't seem to see any further mention online of this "line" (or rather addition to an existing line of EPs?).
  11. Haven't read the whole topic (will do tomorrow), but in my view both unity gain and gain in dB, as most manufacturers use them, are quite the right way to do it. Gain in dB is a relative measure - it needs a reference point, in the same way that (stellar) magnitudes need a reference point. Choosing unity gain as the reference point might make sense, but it is likely to confuse most people, as we would end up with both positive and negative gain (in the same way we have positive and negative mags when using Vega as the reference point). It makes more sense to keep gain positive and choose the reference point in terms of other characteristics of the sensor - quite reasonably, manufacturers choose the gain value at which the ADC range matches the full well capacity. This is a good baseline value - 0dB gain in this case will let you gather as much light per pixel as the sensor allows (is designed to do) - no "amplification", hence the 0 value. Unity gain as a term then makes complete sense - it is the gain in dB at which there is 1:1 correspondence (unity conversion factor) between e- and ADU.
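A worked version of this convention, using approximate ASI1600-like numbers assumed for illustration (12-bit ADC, ~20000 e- full well):

```python
import math

# assumed, ASI1600-like sensor figures
full_well_e = 20000          # e- full well capacity
adc_levels = 2 ** 12         # 12-bit ADC

# 0 dB reference: the ADC range is mapped onto the full well
e_per_adu_at_0db = full_well_e / adc_levels
print(e_per_adu_at_0db)      # ~4.88 e-/ADU at gain 0

# unity gain: the voltage gain in dB that brings e-/ADU down to 1:1
unity_gain_db = 20 * math.log10(e_per_adu_at_0db)
print(unity_gain_db)         # ~13.8 dB; with gain listed in 0.1 dB steps,
                             # this is the familiar "gain 139" setting
```

This shows why the baseline and unity gain fall where they do: 0 gain maximizes per-pixel light capture, and unity gain follows directly from the full well / ADC ratio.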
  12. Could be that ZWO is no longer accepting customer orders because they stopped production, and the above batch was ordered by TS some time ago. Anyway, if you want that camera, I guess it's worth checking again after 9 days if it's in stock, or as suggested above - look at other vendors' offerings.
  13. According to TS, it looks like they will be getting one batch in a few days? https://www.teleskop-express.de/shop/product_info.php/info/p8515_ASI178MMC-Cooled-Mono-CMOS-Camera---Chip-D-8-82-mm.html
  14. Well, there is potential use even in LP. These devices have quite a broad spectral response, up to 1000nm wavelengths. They "convert" all wavelengths into one particular output - a green display, or for the "broadband" type - white light. But this means that you can observe Ha/Hb/OIII/SII emission nebulae without restriction. You can use new multi pass NB filters - like duoband or triband - without worrying about human eye sensitivity in a particular range, as the device "converts" all those wavelengths into the part of the spectrum where our eyes are most sensitive. For stellar/galaxy type targets, you can use the NIR part of the spectrum with NIR filters, otherwise only used for imaging, since the eye can't detect wavelengths above 700nm on its own. Targets that will probably see the least benefit in LP would be reflection type nebulae. This agrees with Stu's recent report on using NV gear. But those prices you found are seriously steep. I managed to find a couple of sources of Image Intensifier Tubes via alibaba - a few Russian manufacturers, but again no prices were given. These are earlier generation devices - like Gen 2+ or Gen 3 - but they have good specs (~60lp/mm, SNR of about 20) and larger diameters like 37.5mm or even bigger models - which is good for astronomy usage compared to the standard 16 and 18mm - but again no prices are listed and they are aimed at military usage.
  15. One of the reasons for this thread - could we find affordable gear to bring NV to the masses?
  16. Just had my first NV moment! Researching further, it's safe to say that CMOS + AMOLED technology is not there yet. The display side of things is lacking - it turns out that the smart watch market is an interesting one for small round displays, and many are available today, but unfortunately the resolution offered is too small. Most displays can be classified as 300-400dpi displays. Even the highest resolution displays nowadays are still below 1000dpi, let alone the 3600dpi needed. To test what the image would be like with current displays - I took a 32mm eyepiece, unscrewed the bottom and put it against a smartphone display. I even loaded an M51 mono image I took some time ago to have a proper "feel" of it. It indeed works, the image can be seen in sharp focus, but pixels are obvious. I remember someone using this technique to judge EP quality - actual pixels can be observed when the EP is used against the display. It does however feel quite "real" in terms of observing. Given proper resolution, I would be really happy with such an NV eyepiece - consisting of a CMOS sensor and a proper AMOLED or similar display. Back on the original topic, @alanjgreen brought to my attention that it has been covered before, and indeed, this post is rather informative: It seems that Image Intensifier Tubes can be used as a starting point for constructing a true astronomical NV eyepiece / filter. These are the "hearts" of NV devices currently used. I wonder if such items could be purchased at retail, and what the price would be. The manufacturer mentioned for the EU market indeed has these listed on their website, but currently only military application is envisioned and no retail of these is done. For phosphor screens, adequate resolution is listed at 64lp/mm, which translates to ~3200dpi, so the above calculation is quite good - we need at least 3000dpi displays to be able to project a natural looking image.
  17. I think that you are quite right about the growing EAA/NV potential. I'm not sure that the AP practitioner base will shrink - people do get hooked on it - but given the cost and involvement, over time many people wanting to start "taking pictures" of the night sky will turn to EAA (live stacking) rather than full fledged AP. Both NV and live stacking need simplification / cost reduction and user friendliness to move forward at speed. At the moment I think NV is hampered by high price and also by availability and legislation. Some countries prohibit the use of these devices, because they are often associated with firearms - either in a military or hunting role. A dedicated astro NV device would certainly be exempt from such laws?
  18. I see a couple of "problems" that need to be addressed first with the above approach. 1) Should it be an eyepiece, or a more flexible device like a "filter"? Now that @Paul73 mentioned Daystar - much like a Quark / Quark combo. The problem that I'm seeing with this is that the phosphor screen is going to be a very "spherical" light source, so I'm not sure regular eyepieces will be able to cope with that - we all know that EPs have trouble handling fast optics. On the other hand, having integrated EP optics at the EP end would reduce the flexibility of such a device. 2) This is in essence an analogue device, and I'm wondering what the QE of such a device would be - given the photoelectric element and of course the electron multiplier. If we look at the above diagram for one type of electron multiplier with holes - they seem to be rather sparse and the overall "QE" can't be that high. I also wonder about the "read noise" of such a device, or rather the electron multiplier's thermal noise. Is it comparable to modern CMOS sensors? If we replace the photoelectric element with a CMOS sensor, the phosphor screen with some other type of monochromatic screen that gives off light in the green spectrum, and the electron multiplier with an electronic processing unit that scans the CMOS sensor and produces output on the screen - would it still be the same device? In principle it would. Maybe even more sensitive than NV. Probably bulkier? I wonder if there are screens with pixels small enough to produce a natural image after "EP amplification" - I guess we can calculate this from human angular resolution and apparent field of view. Let's say we have something like 60 degrees AFOV, and human angular sensitivity is about 1'. That means that we need at least 3600 pixels across the diagonal for the image to look natural. Putting that in a one inch field gives a 3600ppi display - not sure such things exist at the moment.
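The display estimate above in one place, using the assumptions stated in the post (60 degree AFOV, ~1' eye resolution, a 1 inch field), plus the same conversion applied to the 64 lp/mm phosphor spec mentioned elsewhere in the thread:

```python
# assumptions from the post: 60 degree AFOV, ~1' eye resolution, 1 inch field
afov_deg = 60
eye_res_arcmin = 1
pixels_needed = (afov_deg * 60) // eye_res_arcmin
print(pixels_needed)                 # 3600 pixels across the field

field_inch = 1.0
ppi_needed = pixels_needed / field_inch
print(ppi_needed)                    # 3600 ppi display required

# the 64 lp/mm phosphor resolution converted the same way:
# 2 dots per line pair, 25.4 mm per inch
phosphor_dpi = 64 * 2 * 25.4
print(phosphor_dpi)                  # ~3251 dpi, matching the ~3200 figure
```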
  19. Due to recent participation in discussions on the status of NV, I had the opportunity to get to know a bit better what principle of operation NV devices utilize. The current state of affairs is to use a ready made NV device with an eyepiece. It is a form of EP projection approach, because NV devices are made to be used stand alone - much like a consumer point and shoot camera, it has an integrated lens. I deliberately used the above comparison, because I needed a mental image of how NV devices work in terms of optics, and it is analogous to using the EP projection technique with a point and shoot camera (or smartphone camera). The diagram on wiki explains this very well: The device needs a collimated beam on "input" because it has an integrated focusing lens. The next stage is a photoelectric device turning photons into electrons. These electrons hit an electron multiplication device that sends a stream of electrons onto a phosphor screen that emits light. Light is then sent through a number of lenses that represent another "scope - EP" combination. What I'm wondering is if the above device could be turned into a simple eyepiece. Or maybe an EP could be created using the same principle as above. This could potentially bring down the cost of NV equipment and, with utilization of astro standards (like 1.25" / 2" barrel), bring ease of use as well. The focusing lens at the front is not needed, because the light beam from the scope is already focused. This would also imply that an eyepiece is not needed in front of an NV eyepiece of such design. Does anyone have an idea what the size of the photoelectric device at the front is? That would need to be matched to the fully illuminated field of the scope for best utilization - it's a bit like the field stop size of a regular EP. There is also the matter of the "resolution" of the image, which needs to be matched to the eye's angular resolution. The electron multiplication device actually works as if it has "pixels"; here is a diagram from wiki: This type of electron multiplier is used in Gen II and Gen III devices.
Holes are 8um in size, but spaced at 10um. This effectively defines the "pixel QE" and sampling resolution of the NV device, when coupled with a certain scope. Any ideas on this topic?
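The hole geometry above gives a rough bound on the plate's open-area ratio, which caps the "pixel QE" being discussed. A sketch assuming simple square packing (real microchannel plates use hexagonal packing, which gives a somewhat higher ratio, so treat this as an order-of-magnitude estimate):

```python
import math

hole_d_um, pitch_um = 8.0, 10.0   # 8 um channels on 10 um centers, per the post

# fraction of the plate area that is open channel (square packing assumed)
open_fraction = math.pi * (hole_d_um / 2) ** 2 / pitch_um ** 2
print(open_fraction)              # ~0.50 - roughly half the incoming
                                  # photoelectrons enter a channel at all
```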
  20. Hm, that is interesting. It does, however, make me wonder how it's actually working then. Since it's amplifying light by conversion to electrons, if the light beam is coming from an eyepiece, then it's collimated and at an angle (amplified angle). The NV device must preserve this angle in the amplified light - this means that the surface emitting light needs to know the direction in which it needs to emit light. It can't just be a little screen with pixels - no human can focus at such a short distance. Does using an NV device change the focus position for a given eyepiece? (vs the same EP without the NV device)
  21. I think people have some misconceptions about EAA and "processing". NV is like having an analog sensor and an old type cathode ray tube screen, small enough that an eyepiece is needed to view the image on it. EAA is the same thing in digital form - the sensor is digital, and the display is large and TFT/LCD type instead of CRT. EAA offers better control. Processing is nothing more than enhancing what is already there - a bit like using filters in traditional observing. NV signal amplification is a type of processing - linear processing. It even contains a form of integration, as the phosphor display shines for a bit longer - it accumulates multiple electron hits into a single light output. There is nothing wrong with processing - after all, there is processing done in traditional observing - it's done by our brain. One never sees photon noise while observing. This is because our brain filters out this noise. Why do we see things better the longer we observe them? Our brain also uses "stacking" - or integration of the signal.
  22. No, not magic, physics! I've been researching the whole NV topic a bit, as well as trying to figure out the root of all the sore feelings about NV being moved together with EAA into EEVA. I understand the sentiment that NV reports should be together with other observation reports, and I agree about the exposure part - many more people will get in touch with this idea and technology that way. On the other hand, I do understand NV to be a part of EEVA, and in my view EEVA only stands to gain from such diversity. After all, I feel that anyone interested in EEVA accepts it not as an alternative to "traditional" observation (if there is such a thing), but rather as another tool in the observation box, or in this case - a bunch of tools, all relying on electronics to enhance the viewing/observational experience. Just to touch on the NV side of things, and in particular the comparison to live stacking / video astronomy: the same amount of light is entering the objective of the scope, and no miracle is going to make it stronger - it's just about efficiently detecting that light and presenting it to the eye of the observer. Whether it is an analogue amplifier or a digital one, I really see no difference. There is however a difference in how one perceives the observational experience, and in some sense NV is closer to traditional observing in this regard.
  23. I don't think this is necessarily so. There is a common theme to all branches of EAA, or EEVA, whatever the majority decides to call it. It is, after all, Electronically Assisted Astronomy. We can share experiences of being able to see deeper than would be possible without the aid of electronic devices. It is still a live activity, requiring the observer to interact with equipment in order to see objects (either at the eyepiece or at a computer screen), and it is visual, although photographic evidence can be taken with all of the diverse activities that we call EAA - be that a picture with a smartphone, an image processed out of a live stack session, or maybe a screenshot from a video feed. @PeterW Live stacking does not need to be "delayed" - it can be live as well - stacking video at 30fps would certainly qualify as a live feed, and it does not even need to be stacked - video astronomy just uses video cameras and a feed to a monitor for live view.
  24. I'm sorry to hear that some people feel that NV is somehow being brushed aside. Maybe it even is - I have no idea if that is the case and personally never had that impression, but then again I did not participate in much of the NV discussion - I just read a couple of reviews and saw numerous images taken with smartphones - which I found to be really interesting indeed. On the other hand, I would like to point out that EAA and NV are in essence the same thing, unless I'm again misunderstanding how NV devices work - you still need a battery to get the image out of it, right? It does not do any sort of magical amplification of light; it has to be some sort of sensor, some sort of signal amplification and some sort of projection screen.
  25. I'm somewhat confused by all of this. Does "traditional EAA", either in the form of video astronomy (old video cams at the eyepiece) or live stacking, fall under the EEVA definition? Also, @GavStar and @alanjgreen - this is a genuine question, not trying to provoke an argument or mock the subject or whatever - why do you feel that NV is more visual astronomy than, for example, live stacking? If I understand things correctly, both NV and live stacking use a light sensing device, a form of amplification of the signal, and a light emitting device to enable the observer to look at the object with their own eyes. Granted, an NV device is a self contained / compact one, while a camera / cable / laptop with screen is not as compact, but both systems in essence provide the same thing. Also, the experience will differ, but does the principle of operation outlined above differ as well?