Everything posted by rickwayne

  1. In principle video stacking can be used for deep sky objects, but since they're so dim, hours of integration time are usually required. But that doesn't mean you can't get results that please you.
  2. Zoom lenses are in general not well suited to astro -- too many compromises in their design, and slower than comparably priced primes. (Yes yes, #notallzooms. In general.) I've seen some great work done with inexpensive old manual-everything prime lenses. While an alt-az mount can track, it will display field rotation in longer exposures. Just the nature of the beast.
  3. Platesolving also enables enhanced polar alignment. I use Ekos, but certainly other s/w exists for it.
  4. There are also several deep sky objects that don't need a ton of magnification. In the winter the Orion Molecular Cloud Complex can be imaged with a 50mm; the North America and Pelican Nebulae are also pretty big (one of Carole's shots is of the NA). The Andromeda Galaxy won't fill the frame by any means at 135mm, but it will certainly dominate the image.
     The big advantage of compositing short exposures together for star trails is that no single event can ruin your image. E.g. kicking the tripod, a vehicle driving by with headlights, a firefly landing on the lens... if you have one super-long exposure, any of those can tank it. If you're putting together short ones, you shrug and throw that one away. I use Photoshop and a "Maximum" stacking mode on Smart Objects, for what that's worth. You do have to be careful about the downtime between exposures; if you look carefully at this image the trails look sort of "dotted":
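     For what it's worth, here is a minimal Python sketch of the same "Maximum" trick, assuming a folder of already-shot, equal-sized frames (folder name, file pattern, and the Pillow/numpy approach are just placeholders for illustration):

         # Max-stack star-trail frames: rough stand-in for Photoshop's "Maximum"
         # stack mode on Smart Objects. Assumes equal-sized 8-bit frames.
         import glob

         import numpy as np
         from PIL import Image

         frames = sorted(glob.glob("trails/*.jpg"))  # hypothetical folder/pattern
         stack = None
         for path in frames:
             img = np.asarray(Image.open(path), dtype=np.uint8)
             # Keep the brightest value seen so far at every pixel; a frame ruined
             # by headlights or a kicked tripod is simply left out of the folder.
             stack = img if stack is None else np.maximum(stack, img)

         Image.fromarray(stack).save("trails_max.tif")

     Because each pixel just keeps the brightest value it ever sees, dropping a bad frame costs you one short gap in the trail rather than the whole image.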
  5. Astro Pixel Processor for:
     • Calibration (darks, flats, bias, bad-pixel map)
     • Stacking
     • Light pollution removal
     • RGB compositing (I shoot narrowband or RGB)
     • Initial stretching
     Starnet++ for star removal.
     Photoshop for:
     • Localized contrast enhancement
     • Cropping, rotating
     • Final color adjustments
     • Denoise (Topaz DeNoise AI)
     • Sharpening
     • Re-compositing starless and star layer
     I used Siril until I got APP. No comparison in ease of learning and ease of use; I think I'm getting much better results out of APP with far less fooling around. Light pollution removal all by itself is worth the candle. Note that APP and Starnet++ run on Linux.
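     For anyone curious what the calibration and stacking steps actually do under the hood, here is a toy numpy/astropy sketch of the arithmetic (file names are placeholders; real tools like APP also register the frames and use outlier rejection):

         # Toy calibration + average stack, to show the idea behind what APP does.
         # Assumes darks were shot at the same exposure/temperature as the lights.
         import glob

         import numpy as np
         from astropy.io import fits

         def master(pattern):
             """Median-combine a set of frames into a master calibration frame."""
             frames = [fits.getdata(p).astype(np.float32) for p in sorted(glob.glob(pattern))]
             return np.median(frames, axis=0)

         master_bias = master("bias_*.fits")                 # placeholder file names
         master_dark = master("dark_*.fits") - master_bias   # thermal signal only
         master_flat = master("flat_*.fits") - master_bias
         master_flat /= np.mean(master_flat)                 # normalise flat to ~1.0

         lights = []
         for path in sorted(glob.glob("light_*.fits")):
             light = fits.getdata(path).astype(np.float32)
             lights.append((light - master_bias - master_dark) / master_flat)

         stacked = np.mean(lights, axis=0)                   # simple average stack
         fits.writeto("stacked.fits", stacked, overwrite=True)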
  6. Definitely go with the S model. It's been a while, but I'm not at all sure the USB problems were limited to Linux. And you may as well have the faster download available -- who knows, you might be imaging planets one day and want to shoot video. I don't have a basis for comparison since I've only ever had one guide camera, but the MC seems to do fine. I've had a couple of nights where I was getting 0.6" RMS on my CEM-25P with it, though habitually I see more like 1.2".
  7. The Orion Molecular Cloud Complex is chock-full of fun stuff that you can make out with quite short lenses. This is hardly APOD, but was done with a 50mm in February: Barnard's Loop, Orion, Horsehead, Flame Nebulae
  8. I would heartily recommend developing and using a checklist if you're going to be out of your customary context. Super-easy to forget something! lonelyspeck.com has a series of articles on Milky Way imaging for the novice; that's how I started. If your camera is ISO-invariant over a range, favor lower ISOs to maximize dynamic range. And of course test when you get there: if the skyfog peak of the histogram doesn't touch the left edge, then that's the right exposure. Sounds like a real winner -- enjoy! swagastro.com if you want to see what the imaging possibilities are in Spain. >;-}
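     To put a number on that histogram check in the field, something like this quick sketch works on a JPEG test frame (the file name and the 10-bin margin are arbitrary placeholders):

         # Check that the skyfog peak of a test exposure is clear of the left edge
         # of the histogram, i.e. the sky background isn't being clipped to black.
         import numpy as np
         from PIL import Image

         img = np.asarray(Image.open("test_frame.jpg").convert("L"))  # placeholder file
         hist, _ = np.histogram(img, bins=256, range=(0, 256))
         peak = int(np.argmax(hist))              # the tallest bump is usually skyfog
         if peak < 10:                            # arbitrary margin from the left edge
             print(f"Skyfog peak at bin {peak}: likely too short, lengthen the exposure")
         else:
             print(f"Skyfog peak at bin {peak}: exposure looks fine")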
  9. This is very common advice, but here goes: While you can pick up individual bits of information from videos or forum threads, IMO the very best way to start is with a good general-purpose book like The Deep-Sky Imaging Primer or Making Every Photon Count. That will give you a solid grounding in the whys and wherefores, the problems that the particular techniques or bits of equipment are intended to solve. TDSIP is how I got my start, and I still refer to it fairly often. You don't have to be a physicist or engineer to understand this stuff well enough to do great images, but basic knowledge really, really helps. For example, planetary imagers often shoot video and use software that assembles the best bits out of hundreds or thousands of frames, while deep-sky guys and gals do multiple long exposures and use all of every frame. These books will explain why.
  10. I thought about starting a new forum "Imaging For Four Years But Still Mostly A Moron At It" but didn't think anyone else would join.
  11. I haven't had a successful imaging session since <checks catalog> January. Sheesh! This bicolor West Veil is with a Stellarvue SV70t-IS, ASI 183MM-Pro, ZWO H-alpha and OIII filters. 20x180" on each. Gotta love narrowband -- lights around this athletic field were so bright I hardly needed a flashlight for anything. Astro Pixel Processor, then Photoshop for a little tweaking, denoising, and sharpening. I may have pulled on the saturation slider a tad 🙂 Full tech deets at astrobin.
  12. One reason I chose the 183 when I had Christmas money to spend was that its sensor is so much smaller than my Pentax's. So now I have the option of widefield with the DSLR, or somewhat closer-up with the 1"-sensor camera. That said, I don't think I've hung the Pentax on the scope since I got the 183 two years ago!
  13. Oh, I forgot to mention that the North America Nebula emits light primarily at the hydrogen-alpha wavelength, unlike Orion or Andromeda. H-alpha is a deep red, and the IR cut filters fitted to general-purpose cameras usually cut into that end of the spectrum enough to significantly attenuate H-alpha light. When you read that someone had their DSLR "astro modded", that's what they mean -- removing the IR cut filter. So the NA Neb is going to be dimmer to start with for you. Orion has a lot of other wavelengths going on, and shines by reflected starlight as well as by emission. Andromeda, of course, is a broadband source.
      To be clearer: when folks post stunning images of small objects with small scopes, they are usually done with hours and hours of total integration time -- hundreds of sub-exposures stacked. They use high-end mounts with autoguiding to enable long exposures, if their light-pollution levels permit.
  14. I second that -- total integration time is your friend for deep sky objects. In heavy light pollution, you may need to string together several nights' imaging on one object to get the results you desire.
      More aperture per se will not help -- it is the focal ratio, not the width of the objective, that matters for light-gathering. That's not intuitively obvious, but if you imagine two 130mm mirrors, one with a focal length of 500mm and one with a focal length of 2600mm, the former is curvier and is concentrating all its light on a small area, whilst the latter is flatter so the light is spread out more, hence dimmer. As an objective gets bigger, its curvature must be more and more extreme to achieve the same focal length, and it's harder and harder to avoid crippling levels of aberration. (In fact it's harder and harder to achieve the same focal ratio.) F/5 is already reasonably fast.
      PixInsight is the "big gun" of astro image processing; personally I favor Astro Pixel Processor, which is a tad cheaper and a ton easier to use, so I recommend it to people early on their learning curve.
      Chances are really good that the images you see of smaller objects with short-focal-length optics are captured using a dedicated astro camera. Those tend -- tend -- to have smaller sensors with finer pixel pitches, so when you blow up the image obtained with one you are magnifying a smaller swathe of the sky. The attached image isn't the best comparison I've seen, but it was easy to find :-). For example, my 16 megapixel APS-C Pentax DSLR would frame the moon in the example below with plenty of room around it. My 20-megapixel 1" ASI 183MM would crop it much more tightly, and when you blew the two images up to the same pixels-per-inch level, the features on the moon would be much, much larger with the 183.
      If you haven't already picked up a copy of Making Every Photon Count or The Deep-Sky Imaging Primer, I highly recommend that you do so. The latter was really transformational for me.
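      To put rough numbers on the pixel-pitch point: image scale in arcseconds per pixel is about 206.265 × pixel size (µm) / focal length (mm). A small sketch; the focal length is an arbitrary example and the Pentax pixel size is an assumption:

          # Image scale in arcsec/pixel: finer pixels on the same optics sample a
          # smaller patch of sky per pixel, so the target looks bigger at the same
          # pixels-per-inch on screen.
          def image_scale(pixel_um: float, focal_mm: float) -> float:
              return 206.265 * pixel_um / focal_mm

          FOCAL_MM = 420  # example focal length, not a figure from the post
          for name, pixel_um in [("16 MP APS-C Pentax (~4.8 um pixels, assumed)", 4.8),
                                 ("ASI 183 (2.4 um pixels)", 2.4)]:
              print(f"{name}: {image_scale(pixel_um, FOCAL_MM):.2f} arcsec/pixel")

      The 183 comes out at roughly half the arcseconds per pixel, which is why the moon (or anything else) looks about twice as large from the same telescope when both images are viewed at 1:1.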
  15. Pentaxes have pretty bright viewfinders, but I was never able to focus successfully with the optical finder. Not once. Just not enough photons.
  16. Really it depends on your tolerance for out-of-focus stars at the edges. The 183's sensor is smaller, so the stars at its edges will be less out of focus than the ones at the Canon's. Up to you whether it's "good enough". (I certainly still use one after going from APS-C to a 183.) However, you might consider using a flattener/reducer, if you aren't already and want a similar FOV to before. That would concentrate more light on the smaller sensor: shorter/fewer subs or better S/N, again your choice.
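      On the FOV point: a reducer shortens the effective focal length, so the smaller chip sees a wider field. Field of view is roughly 2·atan(sensor width / (2 × focal length)); here is a quick sketch with an example focal length and approximate sensor widths:

          # Approximate horizontal field of view with and without a 0.8x reducer.
          import math

          def fov_deg(sensor_mm: float, focal_mm: float) -> float:
              return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

          FOCAL_MM = 420  # example native focal length, not from the post
          for name, width_mm in [("APS-C Canon (~22.3 mm wide)", 22.3),
                                 ("1-inch ASI 183 (~13.2 mm wide)", 13.2)]:
              print(f"{name}: {fov_deg(width_mm, FOCAL_MM):.2f} deg native, "
                    f"{fov_deg(width_mm, FOCAL_MM * 0.8):.2f} deg with 0.8x reducer")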
  17. Also have a 120MC; if I don't hear from you first I'll try it and report back. Easiest quantitative test would just be the S/N reported by PHD2, if you're using that.
  18. Only way to be sure. With a DSLR, and the short exposures one would use for flats, there shouldn't be enough dark current to matter -- bias should be all you need. But reality trumps pontification any day.
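      The reality check itself is cheap: shoot a bias and a dark at your flat exposure time and see whether the difference amounts to anything. A rough sketch (file names are placeholders):

          # Does a dark at the flat exposure length contain meaningfully more signal
          # than a bias frame? If not, bias-only calibration of the flats is fine.
          import numpy as np
          from astropy.io import fits

          bias = fits.getdata("bias_001.fits").astype(np.float32)        # placeholder
          dark = fits.getdata("flatdark_001.fits").astype(np.float32)    # dark at flat exposure

          print(f"bias median: {np.median(bias):.1f} ADU")
          print(f"dark median: {np.median(dark):.1f} ADU")
          print(f"difference:  {np.median(dark) - np.median(bias):.2f} ADU "
                f"(compare to read-noise scatter ~{np.std(bias):.1f} ADU)")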
  19. Second that. Deep sky is a lot harder than you would think; those long exposures mean that any imperfections in the mount wind up very visible in the images. So the classic advice is to go short and fast on the optical train, which means shorter exposures and less magnification of errors, and then plow the money saved into the best mount you can afford. Fast prime camera lenses are a GREAT way to get started; there are plenty of targets which are good at 200mm or less.
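      To see why "short and fast" is kind to an imperfect mount, the same image-scale formula as above turns a fixed tracking error in arcseconds into a star smear in pixels; the pixel size and error figure below are just examples:

          # How many pixels does a given mount error smear across at different focal
          # lengths? The same wobble does far more visible damage at long focal lengths.
          MOUNT_ERROR_ARCSEC = 2.0   # example RMS tracking error (assumed)
          PIXEL_UM = 4.3             # example DSLR pixel size (assumed)

          for focal_mm in (135, 200, 600, 1200):
              scale = 206.265 * PIXEL_UM / focal_mm          # arcsec per pixel
              print(f"{focal_mm:5d} mm: {scale:.2f} arcsec/px, "
                    f"error spans {MOUNT_ERROR_ARCSEC / scale:.1f} px")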
  20. If you're not using a flattener or reducer with a fixed backfocus...I seem to recall there was a 1.25" nosepiece in my 183MM's box which would screw into the 11mm spacer (also in the box; it was already attached to the camera when I got mine). I guess yours has a 21mm instead. If you have the flattener, then as jimjam says you have to use the right set of spacers so that everything adds up to the advertised backfocus distance. It looks as if the WO flattener has 48mm threads.
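      Since the spacer arithmetic is where people usually slip, the sum you are aiming for looks like this (every figure below is illustrative -- the 55 mm target is a common flattener spec and the 6.5 mm camera figure is typical of ZWO cooled bodies -- so check your own parts):

          # Backfocus bookkeeping: everything between the flattener's rear thread and
          # the sensor must add up to the flattener's specified backfocus distance.
          # All figures are illustrative, not measurements of any particular kit.
          TARGET_MM = 55.0                         # check your flattener's actual spec
          parts = {
              "camera sensor to front face": 6.5,  # typical ZWO cooled body (verify)
              "11 mm spacer": 11.0,
              "21 mm spacer": 21.0,
              "16.5 mm adapter": 16.5,
          }
          total = sum(parts.values())
          print(f"stack adds up to {total:.1f} mm, target {TARGET_MM:.1f} mm, "
                f"off by {total - TARGET_MM:+.1f} mm")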
  21. Or, absent a white card, you can just hold the camera behind the telescope and move it back and forth. Don't ding the sensor into the back of the telescope, though -- I can tell you it's hard to look at all the things at once!
  22. APP is 150€. It by no means has all of PixInsight's features -- yet -- but I certainly am not limited by it. Straightforward and easy to use, has great defaults and "automatic" settings for most of us. Really good gradient/light-pollution reduction tool.
  23. Likewise re guiding with a color camera. I grabbed a 120MC because it was the cheapest thing I could find, and because I thought I might use it for comets or planets or something someday. The 120MM Mini is probably one of the most popular guide cameras; small, light, mono.
  24. INDI, KStars, and Ekos? All open source, lots of cameras including just about every DSLR you could want, actively supported, runs on everything from a Raspberry Pi on up to Windows and Mac OS, and has just about every feature you could want: planetarium program, observatory/telescope control, acquisition, guiding, polar alignment...
      I used Siril for a while before deciding to spend some money on Astro Pixel Processor. I would occasionally have issues with getting color to come out right, but in the main it worked pretty well. I was not a huge fan of its file-based progression -- every step, from conversion to FITS to calibration to registration to integration, makes an entirely new set of FITS files -- but it's hardly the worst thing. (For example, if you decide you want to back up and try a different option, you don't have to go all the way back to the beginning.) It offers good choices for calibration and integration algorithms. Overall it's not the easiest thing for a beginner to learn (that would definitely be Astro Pixel Processor!), but if you read and watch the tutorials and keep at it, Siril will definitely do the linear-phase processing job for you, and it has tools for the nonlinear phase as well. Usually, though, I wound up doing that stuff in an image editor.