Everything posted by rickwayne

  1. For any newbs to PHD reading this, the guideline (always assuming you're not using ST4 guiding) is to run your calibration near the meridian and equator, and run the Guiding Assistant at or near your target.
  2. Steve Richards' Making Every Photon Count is often heartily recommended on this forum. Myself, I am a huge fan of Charles Bracken's The Deep-Sky Imaging Primer, which I consider to be one of the best technical books I've ever read. He does just a superb job leading the reader through the very basics on up through the implications for tool choice and technique. He also includes chapters on processing, which is an oft-neglected part of the game when people are starting out. Many assume that what rolls off their telescope and camera is an almost-ready-to-view picture, but there is a LOT more to it than that. He includes a lot of good info on processing using either PixInsight or Photoshop; even if you're not using one of those programs, the general shape of what you have to do still applies. (And he's written a primer too for my favorite, Astro Pixel Processor.) Heck, it even includes a good list of targets to start with, at least in the Northern Hemisphere.
  3. Oh, and of course processing. While plenty of people (including me) have gotten great results with free software, I'm a real partisan for spending the money on Astro Pixel Processor when you're a beginner. It does not have as many knobs and buttons as PixInsight, which is still the class of the astro-processing world, but its great advantage is that it will do proper processing and give good to excellent results just with a few button clicks on the default settings. It also has a standout gradient/light-pollution elimination tool, which I use for Every. Single. Image. Just about everything that APP can do, other software can do too, especially if you're willing to spend the time with it. But APP's advantage is that it has all the tools that most of us need, and its workflow is simple enough that a beginner needn't struggle and thrash about. I have long, long since earned back my investment in the package, even if I only value my time at a few quid an hour.
  4. The refrain from the community is always "mount, mount, mount". That makes a bigger difference for both learning and doing deep-sky than any other purchase, hands down. Once you have settled on that, I would first concentrate on fitting out your setup for smooth imaging without tons of manual intervention. You needn't be aiming for a completely automated rig, but the name of the game is reducing or eliminating obstacles, especially as a beginner. If you struggle with e.g. polar alignment, that will reduce your available imaging time (and increase your frustration). Likewise focusing. Likewise slewing to your target. So once you have a mount and your optics together, I would next focus on computer support. It's not necessary to image with a DSLR but it sure helps, eh? There are some good integrated packages; everybody has their own favorite, but NINA (for Windows users) and KStars/Ekos come up a lot. I would look for a package that has all of:
     • A focusing aid, whether just HFR or quantifying the output when you're using a Bahtinov mask
     • Plate solving, to determine exactly where the scope is pointed
     • A polar alignment assistant; there are some that do a great job even when the Pole is not visible
     • Plate-solving-assisted GOTO
     • Sequencing of lights and flats
     • Guiding
     It used to be that you had to juggle and coerce software packages to talk to each other, but that's much less true today. I'm a big fan of StellarMate OS and the Raspberry Pi 4 as a scope-side computer, since that's easy to set up and does all those things right out of the box. If you buy a Pi, case, and StellarMate OS, you're out less than $150. Just my two cents there; everybody has their own favorite package. This will let you build up a standard workflow; there are so many little steps that it really helps to have a routine (a checklist isn't a bad idea either!). Part of that should be calibration frames, right from the get-go. You might not need dark frames with the Canon, but you absolutely should compile a master bias frame and shoot flats (or shoot dark flats and flats; it's a horse apiece for a DSLR, really).
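To make the calibration arithmetic concrete, here's a minimal sketch of how a master bias and flat get applied. This is a toy example with invented numbers, not any particular program's pipeline:

```python
import numpy as np

def calibrate(light, master_bias, master_flat):
    """Basic light-frame calibration: subtract bias, divide by normalized flat."""
    flat = master_flat - master_bias
    flat = flat / flat.mean()            # normalize so division preserves overall signal level
    return (light - master_bias) / flat  # remove fixed offset, correct vignetting/dust shadows

# Toy data: a uniform 100-ADU sky, 10-ADU bias, and one vignetted corner
bias  = np.full((4, 4), 10.0)
flat  = np.full((4, 4), 2000.0); flat[0, 0] = 1005.0   # corner passes half the light
light = np.full((4, 4), 110.0);  light[0, 0] = 60.0    # sky dimmed by the same factor

cal = calibrate(light, bias, flat)       # comes out uniform: the vignetting is gone
```

Dark frames, if you need them, get subtracted from the lights (and dark flats from the flats) before the division; the shape of the arithmetic stays the same.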
  5. Plenty of challenging monochrome H-alpha targets out there! The very first image I shot with my mono camera, before I owned a wheel, is still one of my favorites. Wildly saturated bright-red Horseheads are a dime a dozen, but I really like how the black and white tones bring your attention to the complexity of the cloud wisps: https://www.astrobin.com/397555/. (OK, fine, I confess I'm aiming to shoot my own Very Colorful Horsehead someday.) The Elephant's Trunk is also fun: https://www.astrobin.com/48w7a8/.
  6. Actually, with a 0.6x reducer you're more in the "slightly oversampled" region of 0.39"/pixel. With its teeny-tiny pixels, the 183's sweet spot is really more in widefield. The 294 will give you a bigger field of view and won't undersample too much with the short scope. Full disclosure: I got a 183 for a short scope precisely because I wanted finer-grained detail in my images, i.e. I "felt" undersampled with my DSLR, which had pixels about the same size as the 294's. I'm sure not going to attempt to dissuade you from turning to the Mono Side; I really enjoy it, and like you, I revel in nerdliness. However, if you can only afford either filters or a wheel at first, may I counsel you to get the filters? All the wheel does is make it simpler to use multiple filters in a session. Personally I've had a lot of fun doing Ha-only imaging, and you can always use successive nights to collect data at the other wavelengths (or just resign yourself to a lot of unscrewing).
  7. For artistic purposes, we often use software to pull the stars out of an image and process them separately -- for narrowband, white is about as good as you can get. A good trick is to take RGB data of the same target and composite those natural-color stars with the narrowband nebulosity data. Not sure what your goals are for the class, though. In this context, "calibration" usually refers to using catalog photometric data to adjust the image's overall color balance so that G2v stars (such as the Sun) come out white. And as Vlaiv points out, for narrowband data that just ain't gonna happen! But of course it can have other meanings.
  8. In general, you want to do as much processing as you can on linear data, that is, before stretching. One of the advantages to astro-specific processing programs such as PixInsight, Astro Pixel Processor, and Siril is that they can present a preview of what the data would look like stretched, but on still-linear data. I haven't used The GIMP in a long time but in Photoshop, you can simulate the same thing with a Curves layer on top of everything else, and then don't ever Stamp Visible or save the result unless the layer is switched off. For stretching in non-astro programs, my best advice is to patiently apply incremental stretches instead of doing it all in one go, and not to treat the whole range equally. What you want to do is bring up the dimmer areas but touch the brighter ones as little as possible. Your TIF is already pretty blown-out in the Trapezium area, so any stretch applied at all to that region will just give unrecoverable white. Levels are just a simplified version of curves, but curves give you finer control over that sort of thing. Best forty bucks you can spend in astrophotography: Charles Bracken's The Deep-Sky Imaging Primer. Steve Richards' Making Every Photon Count is also well-thought-of, though I've not read it myself. M42 is actually quite a challenging target for advanced imagers, primarily because of its extremely high dynamic range -- dim dust all around, but the bright Trapezium too. Nice job on the capture, there are a lot of things to like about these data!
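For the incremental-stretch idea, here's a rough sketch of one gentle nonlinear pass repeated a few times. The arcsinh curve is my stand-in here; PixInsight, APP, Photoshop curves, etc. each have their own transfer functions, but the principle is the same: lift the shadows, barely touch the highlights, and repeat mild passes rather than doing it all at once:

```python
import numpy as np

def asinh_stretch(img, strength=3.0):
    """One gentle nonlinear pass: lifts faint values, barely moves bright ones."""
    return np.arcsinh(img * strength) / np.arcsinh(strength)

# Several mild passes instead of one aggressive stretch, checking the
# histogram between passes (here just applied back to back for brevity).
linear = np.array([0.001, 0.01, 0.1, 0.9])   # normalized 0-1 linear pixel values
stretched = linear.copy()
for _ in range(3):
    stretched = asinh_stretch(stretched)
```

Each pass multiplies the faintest values by roughly the same factor while the brightest move only slightly, which is exactly the "bring up the dim areas, leave the Trapezium alone" behavior you want.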
  9. Oh, bravo! You're on your way now. One caveat: If the guidescope is mounted atop that big "stalk" designed for a finder scope, you may encounter issues with differential flexure. Since guiding involves detecting and correcting deviations of a fraction of an arcsecond, absolute rigidity is a big advantage. Mechanical slop which you can hardly even feel can translate to multiple arcseconds' worth of flex. The classic signature of that problem is for the guiding graph to be showing nice smooth low-error lines, and the stars in the imaging scope to be streaked, egg-shaped, or just extra-large. No sense in chasing the problem unless you see it happening, though! The stars in your Pleiades image look reasonably round to me, at least in the center (the corners do show some issues, which is probably not guiding). That is a great start!
  10. It also depends on where you image. If you work out in the field off batteries, that bumps the Pi way up in the rankings IMO, since even a pretty snazzy Pi 4 will pull 0.3A or less. I really like having a headless computer that will reliably sequence all night even if I burn up my laptop's battery and can't see what the Pi is doing anymore. It's also easy to attach to your scope, mount, or tripod so that you needn't worry about pulling a cable loose in the dark.
  11. I am usually a no-filters zealot but I have to agree here. Although since the L-Enhance passes OIII, the Moon is still going to have a noticeable effect. When I'm narrow-banding on a big-moon night, I time the sequence to capture OIII before moonrise or after moonset, and get my H-alpha during the bright time.
  12. You're probably going to be well within a CEM 40's happy range. For example my 70mm Stellarvue with a cooled astro cam, filter wheel, autofocus motor, and Raspberry Pi came to about 17 pounds (I think that was with the off-axis guider instead of a guidescope). That worked OK on a CEM 25P, a 40 would be laughing.
  13. A ways back in the thread, in the context of mounts, you noted that you would rather "buy once, cry once". This is an excellent attitude to have toward your first mount. It is possible to get a mount which is too heavy or too awkward to set up (assuming, like most of us, you can't do a permanent installation), but it's almost impossible to get one that is too good. My CEM70 works beautifully with a DSLR and a 50mm lens, and also with the longest scope available to me, a filter wheel, and a cooled astro camera. It is every bit as simple to set up and get running as my first mount was -- in fact, a lot easier, since it has through-mount cabling and much higher-quality adjustment mechanisms. The same can't be said of a scope. Tale as old as time: Beginning astrophotographer, logically enough, assumes that optical quality and focal length are the primary determinant of high-quality images, buys a long-reach scope with huge aperture. Surprise! Long reach and big aperture mean even bigger challenges for mount and for technique. Endless frustration, rage-quits the hobby. Blowing almost all of your initial budget on a mount and imaging with a DSLR and telephoto prime is an excellent way to start, actually (assuming you can work out an autoguiding rig).
  14. Some folks have had issues with poorly-soldered cables and the like, and some assembly issues. If it comes up, slews where you point it, and the USB connections work, it's probably just fine. IIRC the camera in the iGuider will talk to INDI or ASCOM (community, check me if I'm wrong!), so to PHD2 it's just a regular guidescope and camera. The scope has only a 120mm focal length, and the image scale is 6.44"/pixel according to iOptron. That's a bit on the coarse side, but it ought to work fine for a lot of folks' scopes and cameras. One of the many rules of thumb you hear bandied about is that the guide system should have an image scale no more than ~5x that of the imaging system. My 336mm scope and IMX183 sensor yield about 3"/pixel, so it would work great with mine, I imagine.
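The image-scale arithmetic behind those numbers is just 206.265 x pixel size (µm) / focal length (mm). A quick sketch; the 2x2-binned 2.4 µm pixels are my assumption for how the ~3"/pixel figure arises, not something stated above:

```python
def image_scale(pixel_size_um, focal_length_mm):
    """Arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

# iGuider: 120 mm focal length; 6.44"/px is iOptron's published figure
guide_scale = 6.44
# Imaging rig: IMX183's 2.4 um pixels binned 2x2 on a 336 mm scope
# (an assumption that reproduces the "about 3 arcsec/pixel" quoted above)
imaging_scale = image_scale(2.4 * 2, 336)

ratio = guide_scale / imaging_scale   # comfortably under the ~5x rule of thumb
```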
  15. Pulse guiding has nothing to do with the type of motor, fortunately. It uses a connection (usually USB) to a mount's controller logic to emit pulses to tell the sidereal tracking circuitry to go faster or slower. So-called ST-4 guiding is an older (still viable) setup where the computer sends commands to the guide camera via USB, and the camera issues a signal over a cable connecting the camera to the mount. If you're going to use the computer for mount things like slewing the scope to its target, you already have a USB connection between computer and mount anyway, so you may as well just use that for guiding too. That connection lets the mount tell the computer where it's pointing, which allows the guide software to adjust its commands to compensate for declination -- the magnitude of corrections will be different for different parts of the sky. With ST-4 guiding, the computer is only talking to the guide camera so has no way of knowing where the mount is pointing. So you have to do a calibration run so that the computer knows how much correction to apply to make the image move by N pixels on the guide camera sensor. I'm not familiar with that mount so I don't know exactly how to connect which bits to what.
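A toy illustration of the declination compensation: an RA error of a given angular size needs a longer correction pulse the farther you are from the celestial equator. This is just the cos(dec) geometry, not PHD2's actual control loop:

```python
import math

def ra_pulse_ms(error_arcsec, guide_rate_arcsec_per_s, dec_deg):
    """Length of an RA correction pulse for a given guiding error.

    Near the pole a star covers fewer arcseconds of sky per unit of RA,
    so the same error needs a longer pulse: scale by 1/cos(dec).
    Illustrative only -- not PHD2's actual control algorithm.
    """
    return 1000 * error_arcsec / (guide_rate_arcsec_per_s * math.cos(math.radians(dec_deg)))

# Same 2" error at a 7.5"/s guide rate (0.5x sidereal):
at_equator = ra_pulse_ms(2.0, 7.5, 0)    # about 267 ms
at_dec_60  = ra_pulse_ms(2.0, 7.5, 60)   # about 533 ms, twice as long
```

With pulse guiding the mount reports its pointing, so the software can apply this scaling itself; with ST-4 the computer is blind to declination, which is why you recalibrate after a slew.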
  16. If you're connecting to a laptop anyway, why not just use pulse guiding? Am I missing something? It's nice not to have to recalibrate every time you slew the scope.
  17. Nice. Bit harsh for my taste, but you do you, right? You're going to love the CEM70, if mine is any yardstick to judge by. Do exercise it thoroughly when you first get it, there have been quite a few QA/QC problems with them. Mine is great, no complaints. A cool dodge I figured out uses the threaded holes in the side of the saddle (probably intended for 3D balance weights). I cut a length of aluminum channel, padded a couple of hose clamps with tape, and bolted it to the side of the saddle. My guidescope went into the hose clamps, I tightened them down, and now I can autoguide a telescope or a DSLR without having to mount the guidescope to the optics. Of course, if you got a model with an iGuider, you're one jump ahead of me already. 🙂
  18. 183 owner here -- I would go with the newer, more capable camera. I love the 183, but it has hella amp glow (calibrates out perfectly, true), and a pretty small dynamic range. The 533 has much greater full-well capacity and I believe a 14-bit ADC as well.
  19. It depends a lot on your targets, your optics, and your budget. There are calculators online for figuring out the image scale for various cameras with a given scope. That's really the first determinant, along with field of view. See telescopius.com, among others, for previsualizing targets with various sensor sizes. As Vlaiv says, it's hard to beat consumer cameras economically. They offer their own challenges, particularly ones whose sensors produce enough thermal noise to require dark frames for calibration; this is why many astro cameras offer thermoelectric sensor cooling to a setpoint.
  20. "Right tool for the job" is of course correct, but "right" has a LOT of wiggle room to it. A lot of it is in intangibles. The Mac has decades of Steve Jobs' Reality Distortion Field making it cooler, they're good-looking machines with a lot of cachet so many are inclined to buy them for that, regardless of technical merit. (Disclaimer: I am, in fact, typing this on a $2500 MacBook Pro that I use for Windows software development.) Likewise, APP's intangibles are, to me, worth a fair bit. It just feels right to me, and I do a lot more than stack with it. A LOT more. If you're getting equivalent results out of free or cheap tools and feel good about the process, I salute you -- more power to you. Me, there's a reason I paid my €165 after using just about every free tool out there. I didn't try out a bunch of the less-expensive ones first, just figured that if I was going to have to pay, I'd go premium and not worry about it afterwards. So the choice for me was PI or APP. After using both APP was the clear winner for usability out of the box.
  21. I haven't seen anyone mention Astro Pixel Processor, which has a really straightforward workflow, and fewer knobs and levers than PixInsight. PI is still the class of the industry, but IMO APP is going to be hella easier to learn. Gradient/light-pollution removal in particular is simply amazing, and amazingly easy. It's also unsurpassed at doing mosaics. APP also has a free trial for a month.
  22. Yes, but there is certainly a focus issue going on, as ONIKKENEN notes.
  23. One technique I've been meaning to try is expediting color capture by interleaving R, G, and B more often. I usually shoot something around 1:1 luminance and total of R, G, and B (i.e. if 90' of luminance, 30' of each color). And like most people I do one at a time, refocusing for each filter. I've avoided switching filters too often because when the focus motor is running, I ain't collecting photons. But maybe good enough is good enough; Ekos lets me define focus settings from some filters as offsets from others. So autofocus L to a nicety, shoot a bunch, R, shoot a few, G, shoot a few, B, shoot a few, back to L...refocus every 30 minutes or 60 minutes. When the clouds roll in, I'll have collected a bunch of L and roughly R=G=B. If the luminance is tack-sharp it really doesn't matter if the chrominance is a little bit fuzzy. In fact I know some folks deliberately blur the color channels for noise reduction. Your thoughts? Still not as simple as OSC, but gets rid of one potential mono pitfall.
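The time split described above works out like this. Trivial arithmetic, but handy in a planning script:

```python
def lrgb_allocation(total_min):
    """Split total session time 1:1 between L and combined RGB, then divide
    the color half evenly (e.g. 180 min -> 90 L + 30 R + 30 G + 30 B)."""
    lum = total_min / 2
    per_color = lum / 3
    return {"L": lum, "R": per_color, "G": per_color, "B": per_color}

plan = lrgb_allocation(180)
```

Interleaving just means spreading those same totals across shorter repeated blocks, so a cut-short night still leaves you with roughly the right proportions.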
  24. Definitely a focus issue. I plate-solved the image and the eccentricity matches neither the RA nor the DEC axis, so it's probably not guiding (although if you have problems in more than one axis, it's possible to have non-axial ovals).
  25. I really don't mean to revive this pie-fight all over again, but since luminance gathers photons so much more efficiently than a Bayer matrix, and the human eye depends much more on luminance than chrominance to discern detail, for the skies most of us shoot in, an LRGB setup can achieve a given image quality in less imaging time, not more. Honest. If, say, clouds roll in, or you have equipment problems, then certainly mono imaging can leave you with unbalanced sets of subs for your channels. You have to manage the four channels' worth of subs. And run a filter wheel. And do the RGB compositing instead of letting a debayering algorithm do it for you. OSC sensors are more popular for other things, and so enjoy greater economies of scale and become available for astro earlier. These are all perfectly good reasons to go OSC, and I'm not claiming that such a rig is lesser, that people who use them have less mojo, or that mono is ineffably better in some fashion. But for most of us, mono is more time-efficient.