Everything posted by Steve in Boulder

  1. Next, a set of Abell PNs, HOO view or Ha/mono view.
  2. [Note: I posted a similar report over at CN. This report owes itself almost entirely to Jocular, so I thought I'd post it over on Jocular's home turf. Please let me know if you like it, or if I should keep this stuff over on the other side of the pond!]

     Jocular has increasingly excellent support for using filters with mono cameras. I had my first extended session with version 0.6 last night and decided to concentrate on PNs. They typically emit at H-alpha and OIII wavelengths, so narrowband filters are suitable, especially with a bright moon (though OIII is somewhat affected). Jocular can now create a synthetic luminance (L) layer from narrowband (as well as RGB) data, so I decided to take only Ha and OIII subs, all 120 seconds. These can be displayed in the HOO palette, that is, Ha data mapped to red and OIII data mapped to blue and green. I posted an example with M27 in the Jocular EEVA Equipment thread. (A rough sketch of the HOO mapping follows at the end of this post.)

     My setup is a garden-variety Celestron C8, with the stock 0.63x focal reducer, on a CEM40 equatorial mount. I use a ZWO electronic filter wheel with ZWO LRGBSHO filters, a ZWO EAF, and a ZWO ASI533MM, this time running at gain 300, cooled to 5 degrees. Guiding is with a William Optics UniGuide (30 mm aperture, 120 mm focal length), a ZWO ASI120MM Mini, and PHD2. (Yes, even the UniGuide is red to match all the ZWO stuff.)

     NINA controls everything, including me, sort of: I used the NINA Sky Atlas to pick PNs above 40 degrees in altitude, at least 30 degrees from the moon, sorted by descending size. I figured the larger the PN, the better the chance of seeing some detail. To start things off, here's an example with NGC 7008. The first image is the HOO palette, the second shows the strong OIII signal, and the third the weaker Ha signal.
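     For the curious, here's a minimal numpy sketch of what an HOO mapping amounts to (illustrative only; this is not Jocular's actual code):

         import numpy as np

         def hoo_composite(ha, oiii):
             """Map stacked, stretched Ha and OIII frames (2D arrays scaled
             to 0..1) to an RGB image: Ha -> red, OIII -> green and blue."""
             rgb = np.stack([ha, oiii, oiii], axis=-1)  # R, G, B planes
             return np.clip(rgb, 0.0, 1.0)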
  3. I believe you're right about SharpCap, though I haven't used it. The stars in the Veil image are a little elongated. That could be your mount, or it could be that the Veil was near the zenith. With an alt-az mount, images will start to show star trails due to field rotation, especially near the zenith (a quick back-of-the-envelope follows below). I try to keep away from the zenith, and I typically do 20-second subs with a narrowband filter when I'm using my alt-az mount (AZ Mount Pro). With the UHC filter, just try letting the stacking run longer, say 5 minutes total. That'll reduce noise.
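     The usual approximation for the field rotation rate on an alt-az mount is 15.04 x cos(latitude) x cos(azimuth) / cos(altitude) degrees per hour, and the cos(altitude) in the denominator is why the zenith is trouble. A quick illustration (latitude 40 N; the sign only indicates direction):

         import math

         def field_rotation_deg_per_hr(lat_deg, az_deg, alt_deg):
             """Approximate field rotation rate for an alt-az mount, deg/hr:
             15.04 * cos(lat) * cos(az) / cos(alt)."""
             r = math.radians
             return (15.04 * math.cos(r(lat_deg)) * math.cos(r(az_deg))
                     / math.cos(r(alt_deg)))

         # Target on the meridian (azimuth 180): moderate altitude vs near zenith
         print(field_rotation_deg_per_hr(40, 180, 45))  # ~ -16 deg/hr
         print(field_rotation_deg_per_hr(40, 180, 85))  # ~ -132 deg/hr, 8x worse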
  4. Well done! If you can get platesolving working, then centering DSOs is a piece of cake. What software are you using for stacking? How long were your exposures for the Veil image?
  5. M33 and M27 are excellent targets! You might also try M57 (and M27) with your Altair. Platesolving can help you locate targets with the smaller sensor.
  6. If you want a more instantaneous result, you might look into something like the Revolution Imager, though I have no experience with it. The Veil Nebula is pretty faint. If you did a really aggressive histogram stretch, it might show up, but I find that a narrowband filter gives the best result.
  7. That M31 is what one aims for with EEVA. Live stacking doesn't mean instantaneous or near-instantaneous real-time views. It just means you stack the images as they come in, instead of doing what the astrophotography people do (i.e. wake up the next morning and spend hours tweaking the hours of data they collected the night before). I typically collect exposures for 5-10 minutes.
  8. Your M31 and M45 with the Canon are pretty good! It's going to take a few minutes for this type of view; this isn't "video astronomy." As for M31 with the Altair cam: the scope wasn't actually pointing at M31, but it did have M110 in the field. I could tell this by submitting the image to astrometry.net: https://nova.astrometry.net/user_images/6245893#annotated The reason you can't see M110 (and some of the extended glow of M31) well is that the black point is set too high. The glow in the upper left might be amp glow, not a DSO. You might want to save your sub exposures so you can test different settings with them later.
  9. It's possible now to ignore L subs by deselecting them in the subs table, but that's more work and less immediate feedback. Here's a comparison using a different type of DSO: M16 with 15 second subs. First is RGB + synthetic L, second is RGB + real L.
  10. So you're saying that the AP technique of using StarNet++ to remove stars and preserve their RGB colors prior to DSO processing can be left as an exercise for the reader? 😀
  11. And speaking of open clusters, here's another experiment with the newest version. For some objects, it may work well to skip the L subs and just use synthetic L from RGB data. Here's M11 with just 2 each of 10-second R, G, and B subs. The colors Jocular produces are glorious!
  12. Thanks, Rob! The LRGB mode produces really beautiful results with open clusters and other dense star fields, which would be well suited to your scope. Targets like M33 might show nicely too. So hang in there!
  13. I've started playing with the multispectral features in this new version (0.5.6dev3), using some data from a few weeks ago. Here's a taste of what's possible with narrowband filters. All subs are 60 seconds, taken with a nearly full moon. C8, 0.63x reducer, ZWO EFW and LRGBSHO filters, ASI533MM. M16 is 6 Ha subs and 3 each of SII and OIII, shown as LHOS (i.e. Ha = red, OIII = green, SII = blue). The luminance (L) layer is synthesized from the narrowband data (a sketch of the idea follows below). M27 is 6 Ha and 6 OIII subs, shown as LHOO (Ha = red, OIII = green and blue), with L again synthesized. You can start to make out the faint outer OIII shell, as well as the inner OIII loops and Ha details.
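      In spirit, a synthetic luminance layer is just a weighted combination of the channel stacks. A simplified sketch (Jocular's actual weighting scheme may well differ):

          import numpy as np

          def synthetic_luminance(channels, weights=None):
              """Build a synthetic L layer from channel stacks
              (e.g. [ha, oiii, sii]) as a weighted average;
              equal weights if none are given."""
              stack = np.stack(channels, axis=0)  # shape (n_channels, H, W)
              return np.average(stack, axis=0, weights=weights)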
  14. Assuming you activated LRGB by selecting it on the lower right, Jocular would then recognize L, R, G, or B subs in one of two ways. First, and preferably, the Filter attribute in the FITS header would be set appropriately. Second, Jocular will recognize the characters L, R, G, or B in the file name. I’d guess that Starlight Live is doing neither of these when you take subs. You could experiment by manually prepending, say “L_”, “R_”, and so on as appropriate and then feeding the files to Jocular again.
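      If renaming a pile of files by hand gets tedious, a few lines of Python would do it (the folder path here is hypothetical; run once per filter with the appropriate prefix):

          from pathlib import Path

          # Prepend a filter prefix so Jocular can read the filter from the
          # file name. Hypothetical path; run once per filter's folder.
          prefix = "L_"
          folder = Path("~/starlight_live/luminance").expanduser()

          for sub in folder.glob("*.fits"):
              if not sub.name.startswith(prefix):
                  sub.rename(sub.with_name(prefix + sub.name))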
  15. First, that YouTube video has been discussed in two long threads on Cloudy Nights. While I haven't tried the setup recommended in the video myself, I'd say based on those threads that it hasn't been uniformly smooth sailing for those who have.

      I started with the ASIAir Pro for EAA. The associated iOS (and Android) app has a simple live stacking feature, with histogram adjustments, and it allows you to use calibration masters. I still use it sometimes when I want to keep things simple. I was able to get my first EAA view (M57, no surprise) after only an hour of futzing about. So it's a good way to start.

      But the live stacking feature of the ASIAir app is very limited, so I moved to using Jocular for live stacking on my laptop. As the ASIAir is still pretty good at managing all the other tasks for EAA - controlling the mount, plate solving, taking images, managing electronic focusers and filter wheels, etc. - I kept using it for that. I run a simple script from a terminal to transfer image files from the ASIAir to the laptop, into Jocular's watched directory (a minimal version of the idea is sketched below). This works pretty well, though using the script is a little clunky. The other disadvantage of the ASIAir is that you're limited to ZWO products.

      Lately I've been using an inexpensive mini PC at the mount. I run NINA (plus ASTAP and PHD2) on the mini PC to manage all the EAA tasks other than live stacking, and I run Jocular on the mini PC too. I use Microsoft Remote Desktop on my laptop to interface with NINA and Jocular.

      All three of these setups are remote, operating over WiFi, so I can sit inside (except when doing tasks like polar alignment) and watch the live stacks, well, come to life. Or I can sit outside under the stars if it's not too cold and the mosquitoes aren't too bad. But you don't need to do any of this to get started. You can just take your laptop outside and hook it up to the mount and the camera.
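      For what it's worth, the transfer script is nothing fancy. Something in this spirit works (both paths are made up; it assumes the ASIAir's image folder is reachable as a mounted network share):

          import shutil
          import time
          from pathlib import Path

          # Copy newly arrived FITS files from the ASIAir share into
          # Jocular's watched directory. Paths are hypothetical.
          SRC = Path("/Volumes/ASIAIR/Autorun/Light")
          DST = Path("~/jocular_data/watched").expanduser()

          seen = set()
          while True:
              for sub in SRC.glob("*.fit*"):
                  if sub.name not in seen:
                      shutil.copy2(sub, DST / sub.name)
                      seen.add(sub.name)
              time.sleep(2)  # poll every couple of seconds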
  16. If you have a suitable laptop, you could consider continuing to use Astroberry on the RPi to find targets, take images, and so on, but using software like Jocular or SharpCap on the laptop to stack the images via a watched folder and a WiFi connection to the RPi.
  17. I have a 178MM. It has a lot of amp glow, so you'll want darks, but otherwise it's pretty good and definitely a big improvement over the 120MM. I had it out the other night on my C6 with Hyperstar; here's a portion of NGC 7000. From what I read, it's a good planetary camera too, though I haven't tried it in that capacity yet. Are you using ASTAP for live stacking?
  18. Low read noise cameras are very good for EAA, and for alt-az mounts, because you can take shorter subs and more of them without read noise becoming a significant factor. From ZWO, the 290MM, 533MM/MC, and 294MM/MC are all worth consideration.
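      The arithmetic behind that: read noise is added once per sub, so a stack of N subs carries roughly sqrt(N) times the per-frame read noise, and only a low read-noise sensor keeps that small as N grows. A quick illustration:

          import math

          def stacked_read_noise(read_noise_e, n_subs):
              """Total read noise (electrons) in a stack of n_subs frames:
              grows as sqrt(n_subs)."""
              return read_noise_e * math.sqrt(n_subs)

          # 10 minutes total: 30 x 20 s subs vs 2 x 300 s subs,
          # for a ~1 e- modern CMOS and a ~7 e- older sensor
          print(stacked_read_noise(1.0, 30))  # ~5.5 e-
          print(stacked_read_noise(7.0, 30))  # ~38 e-
          print(stacked_read_noise(1.0, 2))   # ~1.4 e-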
  19. Excellent! When you're back at it, try out the different stretching algorithms with your subs. For example, try hyper stretch for the Cocoon and log stretch for M15.
  20. Also, just to check: when you start up Jocular and before you run a plate-solve, does it say "available" in the lower right-hand corner, below "platesolver"? If not, perhaps you put the platesolving catalog (pardon my Americanese) in the "catalogues" subdirectory instead of a separate "platesolving" subdirectory? The layout should look roughly like the sketch below.
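      That is, within your Jocular data directory (only the two subdirectory names matter; the rest is illustrative):

          <jocular data directory>/
              catalogues/        (DSO catalogues live here)
              platesolving/      (the platesolving star catalog goes here)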
  21. Very nice example, Mike! I enjoy gradually increasing the depth of the annotation and seeing what magnitude of PGC galaxies and quasars I managed to catch.
  22. I'm not sure what the focal length and pixel size were for Jocular's included Arp 186 images. It's important to get those fairly close, since together they set the image scale the solver expects (see the arithmetic below). It looks like you're running your APO with a reducer; perhaps you need to enter the reduced focal length?
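      The relevant quantity is the image scale: 206.265 x pixel size (um) / focal length (mm), in arcsec per pixel. With a reducer, the reduced focal length is what counts. For instance (the numbers here are just illustrative):

          def image_scale_arcsec_per_px(pixel_um, focal_mm):
              """Image scale in arcsec/pixel: 206.265 * pixel size / focal length."""
              return 206.265 * pixel_um / focal_mm

          # e.g. 3.76 um pixels: native 800 mm vs with a 0.8x reducer (640 mm)
          print(image_scale_arcsec_per_px(3.76, 800))  # ~0.97 "/px
          print(image_scale_arcsec_per_px(3.76, 640))  # ~1.21 "/px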
  23. The platesolver needs a little bit more info: enter the primary DSO name via the DSO button. Then click “solve.”