
Posts posted by rickwayne

  1. Subs of different exposure lengths: Only if a single sub can't capture the dynamic range you want, which usually means blowing out stars in order to get some dim nebulosity. Although on a few targets like the Orion Nebula, there's so much dynamic range in the nebulosity itself that folks often resort to high-dynamic-range techniques.

    Sub length: So long as it's about right, total integration time will have a much greater effect on the final quality than tweaking sub-exposure time. And you can judge "about right" quite easily with a DSLR's back-panel histogram. Since for most targets the commonest pixel value is "dark sky", just look for a big peak towards the left end of the scale. If the left edge of that peak is clear of the left edge of the histogram, you're probably OK; ideally the peak sits somewhere between 1/4 and 1/3 of the way up the range. You can also have a look at the right side: if there are values against the very right edge, you know you're blowing out stars. That just means it's impossible to extract color from them; not always a deal-breaker.
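
    (For the numerically inclined: the same check can be done on a raw sub in a few lines of Python. This is only a sketch, assuming a 16-bit FITS light frame and the numpy/astropy libraries; the filename is a placeholder.)

        # Rough numerical version of the back-of-camera histogram check.
        # Assumes a 16-bit FITS light frame; "light_0001.fits" is a placeholder name.
        import numpy as np
        from astropy.io import fits

        data = fits.getdata("light_0001.fits").astype(float)

        full_scale = 2 ** 16 - 1                  # adjust for your camera's bit depth
        peak_frac = np.median(data) / full_scale  # sky dominates, so the median sits near the histogram peak
        clipped = np.mean(data >= full_scale)     # fraction of saturated pixels (blown-out star cores)

        print(f"Background peak at {peak_frac:.0%} of full scale (aim for roughly 25-33%), "
              f"{clipped:.2%} of pixels saturated")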

    Jay6879, I know you were kidding, but actually "aperture fever" is a disease of visual astronomers. Aperture matters much, much, much less for imaging, since we rarely approach the Dawes limit for resolution anyway, and for a given sensor size it is the ratio of aperture to focal length, not the raw size of the objective, that determines the total integration time needed. (I think I've stated that carefully enough to avoid re-re-re-re-igniting the perennial aperture vs. f-ratio flamewar.)

  2. Astro Pixel Processor. Straightforward: it can be one-button once you have your frames loaded, or you can endlessly tweak params, and it's good value for the money. PixInsight is the class of the field, famously not so simple to learn, but once you got it, you got it. Both are available for Macs and Linux, and both have free trial downloads.

    I've used SiRiL, tried PI, used DSS, even done stacking in Photoshop, but always circle back to APP. Its gradient/light-pollution removal tool is terrific.

  3. Heartily concur that Richards or Bracken should be your first purchase. That will inform all the rest.

    Most people start with one-shot color before going mono/narrowband. It's not that hard, but there are certainly more moving parts, in both the literal and figurative senses of the term.

    Sampling rate is more commonly referred to as "imaging scale", and astronomy.tools has some nice calculators with cogent explanations of the issues. Really good for matching a scope to a camera.
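
    (The core number those calculators give you is easy to compute yourself. A minimal sketch in Python; the pixel size and focal length below are just example values, not a recommendation.)

        # Image scale in arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm).
        def image_scale(pixel_um: float, focal_mm: float) -> float:
            return 206.265 * pixel_um / focal_mm

        # Example: a 3.76 um pixel camera on a 480 mm focal length scope -> about 1.6"/px.
        print(image_scale(pixel_um=3.76, focal_mm=480))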

  4. This was really a salvage job. I don't have to pixel-peep very hard to find problems with this image, but given what I had to work with, I'm pretty happy.

    I'm posting it here in Getting Started (also cross-posted to Cloudy Nights) because I had to dig deep in my bag of tricks to get this, and perhaps one of those tricks might be useful. And also to show that a certain stubbornness can sometimes bear fruit. Mind you, starting with lots of nice clean data will do more for your image than any processing tricks ever can.

    This is two nights' worth on the Leo Triplet, LRGB with a total of three hours of integration time, heavily weighted towards luminance. I used quite a low gain on my camera for maximum dynamic range.

    First night was endless equipment problems, mostly self-inflicted -- I was running a star party for friends and was pretty distracted. I wound up imaging at a crazy-wrong backfocus from my flattener/reducer, and focusing manually. A bevy of other problems meant that I got the sequence started very, very late, so I only got in 42 minutes and forgot to take flats entirely!

    The second night went much better. Just me, and I'd sorted many of the tech problems, but as it turned out, not all of them. I also had clouds roll over, stopping me dead for a good 45 minutes. I had to throw away some of my Blue frames because the last ones featured trees instead of galaxies.

    Since the first night had no usable flats, I wound up just stacking the two nights separately and then stacking each channel's integrations. I might get less noisy results if I went back, started from scratch, and did them as two sessions in Astro Pixel Processor. But given the backfocus and other problems, I knew there was only so much I could wring out of these data anyway.

    Once I had done the preliminary processing of the individual channels, I had a lot of trouble getting a decent color balance in Astro Pixel Processor with either night's data. (Still working with them to understand why.) So, rather than struggle with its somewhat mysterious sliders, I used its terrific light-pollution reduction and stretching tools, exported TIFFs, and worked with them thenceforth. The LPR tool allowed me to cheat somewhat and fix some of the problems that the missing flats had caused.

    Opening all four channels in Photoshop, I started with Red: a simple Select All and Copy. Then File | New to create a new image, changed the mode from grayscale to 32-bit RGB, and went to the Channels tab in the Layers panel. Pasted the copied data into the Red channel. Repeat for G and B, and some nice galaxy color appeared.

    Next, to exploit the Luminance layer, I converted the image to Lab Color and copy/pasted the luminance in as the Lightness channel. Simple as that -- all of a sudden I had a nice deep tonal range and plenty of detail in the galaxies.

    (You can also use this trick to create a synthetic Luminance image for contrast enhancement or other tweaks -- just go to Image, Mode, and  select Lab Color. Changing back and forth from Lab to RGB is nondestructive.)
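
    (For anyone who likes to see the arithmetic behind the Photoshop moves, here's a rough sketch of the same LRGB-via-Lab idea in Python with numpy and scikit-image. It assumes the stretched channels were exported as 16-bit TIFFs; the filenames are placeholders.)

        # Rough sketch of the same trick outside Photoshop: stack R/G/B into a color image,
        # then swap the luminance channel in as Lab lightness.
        # Assumes stretched 16-bit TIFF exports; the filenames are placeholders.
        import numpy as np
        from skimage import io, color

        def load(fn):
            return io.imread(fn).astype(float) / 65535.0   # scale 16-bit data to 0..1

        r, g, b, lum = (load(f) for f in ("R.tif", "G.tif", "B.tif", "L.tif"))

        rgb = np.dstack([r, g, b])       # equivalent of pasting into the R/G/B channels
        lab = color.rgb2lab(rgb)         # like Image | Mode | Lab Color
        lab[..., 0] = lum * 100.0        # L (lightness) runs 0..100 in scikit-image's Lab
        lrgb = color.lab2rgb(lab)        # back to RGB, luminance detail carried over

        io.imsave("LRGB.tif", (np.clip(lrgb, 0, 1) * 65535).astype(np.uint16))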

    Before I touched anything else, I exported to a TIFF and ran Topaz Denoise AI on that. Topaz recommends that you run Denoise as early as possible in processing. It did a very good job muting the griblies without giving anything a plastic feel or knocking out detail.

    Now the image needed some enhancement work on the galaxies, and also some saturation to bring out star color. For an image like this, with a few distinct areas of nebulosity and little overlap with stars, masking would work fine. But I chose to go with star removal for a bit more control. So I pulled the denoised image back into Photoshop, only to turn around and export an 8-bit TIFF for starnet++ to play with. Pulled the resulting starless layer into Photoshop yet one more time, set it back to 16-bit RGB, and set its blend mode to Difference, producing a stars-only image. Stamp Visible to turn that into its own stars-only layer, which I then set aside by making it invisible.

    Turning the starless layer's blend mode back to Normal, I did some pixel editing with the Spot Healing Brush to remove some of the "star ghosts" that starnet can leave behind. A little bit of contrast enhancement; really, APP's stretch did all the work, I just tweaked a teeny bit. Then yet another TIFF export, this time for Topaz Sharpen AI. Usually I find the Motion Blur mode works best, but tonight the Unsharp mode  did a better job. I pulled the slider WAY over, and some really nice detail was extracted in the galaxies, especially in M66. Pull the result...you guessed it...back into Photoshop.

    Now, the stars layer. Add a Saturation layer (option-click on a Mac, I think it's alt-click on Windows, to assign its effects to the stars layer only). Pull the slider way over. Look at all them pretty colors! Whoa, look at all them ugly halation artifacts! (Told you there were some deep-rooted problems with these data.) Maybe not quite so much saturation...and out comes the Spot Healing Brush again, to clean up some of the real eyeball-pokers. A Color Balance layer (same option-click trick) to make the star colors a little more reasonable. (If I were going for accuracy instead of art, I'd use the star-color calibration tools in APP, or in PixInsight if I had that.) Add a Levels layer too to black out the background so it doesn't have a color cast.

    Lastly, set the blending mode of the stars layer to Screen, so that it combines additively with the starless layer below it.
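
    (In case the blend modes seem like magic, here's roughly the arithmetic Difference and Screen are doing, sketched in Python with pixel values as floats in 0..1.)

        # Roughly what the two Photoshop blend modes compute, with pixel values as floats in 0..1.
        import numpy as np

        def difference(a, b):
            # "Difference": per-pixel absolute difference. Original vs. starless leaves
            # (approximately) just the stars.
            return np.abs(a - b)

        def screen(a, b):
            # "Screen": 1 - (1 - a) * (1 - b), a gentle additive combine that can't clip past 1.
            return 1.0 - (1.0 - a) * (1.0 - b)

        # stars_only = difference(original, starless)
        # final      = screen(processed_starless, processed_stars)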

     

    Hope this is helpful to someone. Not everybody, not even everybody who has Photoshop, owns Denoise and Sharpen AI, of course. Still, they're good investments, and of course the principles apply to other tools as well.

    Leo_Triplet_two_sets_LRGB-denoise-star-removal.jpg

    • Like 10
  5. Even those of us who recommend small refractors to beginners recognize that there are other valid paths -- or at least we ought to! It's just that such scopes tend to give adequate results with the least amount of trouble and fiddling. In fact I often recommend that folks start with DSLR/lens combos with which they're already familiar.

    If someone decided that they wanted to grind their own half-meter f/11 mirror, machine their own mount, and assemble The Newtonian Astrograph of The Gods for their very first imaging experience, more power to 'em, I say. But it's not the route I'd advise for most people. Many of us find more than sufficient challenge trying to get things working the easy way.

  6. I got to the park at 6:45, and set up for a little star party for my friends and some imaging on the side. The friend part went very nicely. The imaging...

    I'd neglected to daylight-focus my DSLR on the long refractor, so I had NO IDEA where to start autofocusing. In fact I couldn't find any stars manually focusing. GRRR!

    Fine.

    Pull the DSLR off the Meade, unscrew my cool 3D-printed thread adapter so I could put the 183 and filter wheel on it. Or tried to. Absolutely would not budge. So, that telescope's done for the night, and so's that camera.

    FINE. I am DETERMINED to GET A PICTURE tonight.

    Take the Stellarvue off its visual mount. Don't have the appropriate weights for that scope -- wasn't planning on using it on the CEM70, and I already had a full car with four telescopes, one of them the massive 11" Dob. Screw in the flattener/reducer. Oh wait, no spacers. Backfocus will be maybe 20mm instead of 55.

    FINE!

    Autofocus pulley and belt are on the Meade. It's now almost eleven (the park closes then). Grab the Bahtinov mask. Get focused. Plate-solve to slew to...what do you mean, plate solving failed?  Dink with plate solving for half an hour.

    FINE!

    Tell the mount to do the slew. No galaxies visible. Dink around with nudge-and-shoot for another half an hour. Now looking nervously over my shoulder for the park rangers. Historically they never show up to kick me out before Memorial Day, but you never know.

    FINE!!! Spend 15 minutes squinting down the tube, getting Regulus in the FOV. Sync the mount on Regulus. Tell it to slew to M65. What do you know, three galaxies! Check watch -- it's 0015. Still no ranger. Check focus again. Start guiding. Fairly crappy -- only about 1" total RMS, which for this mount is pretty sad, but for some unknown reason I didn't feel like messing about trying to improve it this particular night. Start the sequence. I am just about falling over by now but by Bog I am LEAVING THIS PARK with a freakin' PICTURE, or else I'm leaving it WITHOUT ANY *****ING TELESCOPES!

    The good news is that I think I've figured out which Ekos option was crippling the  plate solving, which is the single biggest time-wasting obstacle I faced.

     Seven hours in the park for 42 minutes of integration time, and a couple of whacks at the result with Topaz Denoise and Sharpen AI that probably qualify as crimes against nature. Here's the Leo Triplet.

    LeoTriplet.jpg

    • Like 11
  7. To the original poster: Mirror size, or aperture generally, matters a lot for visual observing. Your eye can only integrate over fractions of a second, so you want to hit it with a flood of photons at once. Aperture also determines the theoretical size of the smallest detail a scope can resolve.

    For photography, we can literally take our time -- the sensors will patiently accumulate photons over long exposures, and more to the point the computers can integrate the photons received over many exposures. (A bucket of math there -- for now, just take my word for it, lots of exposures are good).
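
    (If you do want a taste of that math, the key fact is that random noise averages down roughly as the square root of the number of subs you stack. Here's a toy Python simulation -- all the numbers are made up; it only shows the trend, not a real camera.)

        # Toy demonstration that stacking N subs improves signal-to-noise roughly as sqrt(N).
        # All values are arbitrary; this is not a model of any real camera.
        import numpy as np

        rng = np.random.default_rng(0)
        signal, noise = 10.0, 5.0                 # per-sub signal and noise, arbitrary units

        for n in (1, 4, 16, 64):
            subs = signal + rng.normal(0.0, noise, size=(n, 100_000))
            stack = subs.mean(axis=0)             # the "integration" the computer does for us
            print(f"{n:3d} subs: SNR ~ {signal / stack.std():.1f}")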

    Because the atmosphere wobbles and our mounts aren't perfect, no one ever achieves the theoretical resolution for a given aperture. And, as noted above, wider scopes tend to magnify more, amplifying those problems. So on the same mount, the photographer with a smaller, lighter scope just shrugs and adds more exposures, winding up with a better final product than the big-scope guy. Since smaller, lighter scopes are cheaper, for the same total budget she'll have a better mount and be getting even better pictures still.

    Alacant is exceptional -- most beginners struggle more with a bigger, longer scope. Once you've learned the techniques, a little refractor can be limiting, but learning is not the same as doing.

    • Like 1
  8. Gonna put in a plug here for dedicated astro processing software. In the not-very-long run I suspect you will spend less time and encounter less frustration if you use a soup-to-nuts tool such as PixInsight, Astro Pixel Processor, or even SiRiL. I use APP; its gradient removal and background calibration tools work incredibly well and are super-easy to use. APP and PixInsight both have generous free-trial periods; APP is famous for balancing ease of use with its excellent feature set, while PixInsight is still peerless for features and power but famously hard to master.

  9. Or you run One Program to Rule Them All, he snarked KStarishly. (That's the package I use and am always banging on about; it has its own guiding module but can also work with PHD.)

    As teoria notes, it's helpful if the software you're using to run the camera either includes its own guiding software or can talk to PHD2. There's a technique called dithering which moves the scope a few pixels' worth between exposures, so that the same bit of sky isn't always walloping on the same pixel -- very helpful for reducing certain kinds of pattern noise. Obviously you want that to happen between exposures, not during them!
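
    (Purely to illustrate the idea -- the guiding/sequencing software handles this for you -- dithering boils down to something like this small Python sketch: a random nudge of a few pixels between frames.)

        # Illustration only: dithering is a small random offset applied between exposures,
        # so the same patch of sky never keeps landing on exactly the same pixels.
        # Real capture software (PHD2, Ekos, etc.) generates and applies these for you.
        import random

        def dither_offset(max_pixels: float = 5.0):
            return (random.uniform(-max_pixels, max_pixels),
                    random.uniform(-max_pixels, max_pixels))

        for frame in range(5):
            dx, dy = dither_offset()
            print(f"frame {frame}: nudge mount by ({dx:+.1f}, {dy:+.1f}) px before exposing")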

    Any laptop sold today has ample processing power to run acquisition software and guiding at once. In fact many of us do exactly that on a Raspberry Pi.

    • Like 2
  10. It does look a bit soft to my eye; I don't see doughnuts, but I do wonder about the focus. Are you using a quantitative metric for that?

    Just for kicks and giggles, I also took a run at the JPEG, using Topaz Sharpen AI and pulling the slider over till it screamed "please don't hurt me". It did yield a bit more detail, at the cost of my burden of guilt.

    1123420529_Starsize.JPG.1b5232ee459398ae5d190efcd67915e0-SAI-motion.jpeg

    • Like 1
  11. You might be interested in the ASIAir Pro then. Much more expensive than snapping together your own Pi, but you get a nice metal case, a 12V power input and a couple of 12V outlets, and a carefully curated, simplified app experience instead of the extensive, feature-laden desktop-computer interface of Ekos. (StellarMate, as I said, also provides an app to streamline use of their stuff).

    The AAP limits you to the ZWO ecosystem, although they also support Nikon and Canon DSLRs. I like ZWO equipment, but have a Pentax DSLR too, so I'd be leery.

  12. No one who frequents this forum will be shocked that I endorse the astro-Pi idea. Advantages:

    • Cheap
    • Can be an extremely capable observatory-control system
    • Small, lightweight, very low power consumption
    • Versatile

    KStars/Ekos has everything from planning (including automated generation of mosaic jobs) to control of just about every astro device you can think  of. The Pi is easy to run off mains, but will last all night with even a pretty inexpensive battery powering it and your mount. For my CEM25P, I used to use a 14 Ah deep cycle sealed lead-acid. Lasted WAY longer than the battery in any laptop I've ever had!

    If you spend US$50 on the turnkey StellarMate OS software, you can use a dedicated app on a mobile device at your scope to set up, then retire inside to run everything with your regular computer. Or you could spend maybe $120 on a touchscreen HDMI display and a Bluetooth keyboard and use that at the scope.

    The Pi stands up a short-range WiFi hotspot if it can't sign in to a local WiFi network, or you can run an Ethernet cable (I use a 30m one) out to the scope. If the network goes down, or your computer goes to sleep, the Pi doesn't care, it just keeps on imaging.

    The versatility really appeals to me -- I can run KStars/Ekos on my MacBook or Windows laptop and plug directly into the equipment for testing, or set up the Pi and remote into it with a laptop, tablet, or even a phone and have exactly the same interface at a remote site.

     

    • Like 1
  13. Contrast is a little high for my taste, but that's more of an artistic decision than a technical one. Since you're putting your toe in the narrow waters, please allow me to commend to you the pleasures of single-band imaging, too. Just as with black-and-white terrestrial photography, there's a whole world hidden in shades and tones that colors can overwhelm.

    I won't post my first NB image here -- this is your thread, after all! -- but I will link to it as an example to show that even within a simpler domain, there's still a lot to learn. Like you, I go "Aaah! AAAAAAA!" when I see this image now.

  14. Well, of course. It's a very deep discipline (ow).

    May I recommend my personal bible on it? The Deep-Sky Imaging Primer by Charles Bracken is an eminently readable and fascinating...I was going to say "introduction", which it certainly is, but his gift is that, while requiring no prior knowledge, he pulls you along so effortlessly that you're amazingly well-grounded before you even realize it. I was reading away on various fora and it was only after I read the book that I realized I'd been a victim of Rumsfeldian "unknown unknowns" -- the bits and pieces you pick up here and on other online fora are no substitute for a well-rounded, well-planned pedagogy.

    Steve Richards's Making Every Photon Count also comes highly recommended, though I haven't read that one.

    • Thanks 1
  15. I think your focus is pretty good -- look at the smaller stars, hardly any doughnuts. Likewise your tracking is good enough to pull out good detail in M82, even if it isn't technically perfect -- I see what they mean about "multiple" stars; in fact I see three lobes on some of the brighter ones.

    You may be able to get better star color out of this -- or not. Check out your stacked but unstretched image with a tool that allows you to read pixel values (I don't recall but I think SiRiL will do it). Are the stars saturated before stretching? Are the RGB values different enough for some of them to indicate color? If not, they're just overexposed and that's pretty much that for this try. But if you do have color lurking in there, you can use a star-removal tool to produce a "stars-only" image and a "no stars" image, stretch and saturate them separately,  and then recombine them (Photoshop "screen"  blending mode works well for me, not sure what the equiv is in The GIMP).

    I use starnet++ for this; it's a neural-net tool that the author trained to pull stars out of images. My workflow is to run starnet on the image, use "difference" blending mode in Photoshop on the original and the starless image to produce a stars-only layer, then save that and process the starless and stars images separately. You can run an aggressive stretch on the nebulosity to bring out details, use a much lighter touch on stars, and whomp the saturation to egregious levels as well to bring out the little twinkly jewel-tones.

    Of course, on an image like this, where the nebulosity to protect is such a concentrated and defined area, you could just use masking to apply different stretches and saturation to different parts of the image. For something where nebulosity is most of the image and it's speckled with stars all over, that way madness lies. 🙂

    Really nice result for a first shot. I mean, I am super-jealous here.

    • Like 1
    • Thanks 1
  16. I am another who uses the slip-the-belt-off option (though I have mine on the coarse focus knob). I have two DIY setups where a bit of steel strapping holds a stepper motor above the focuser knob, with a toothed timing belt connecting them that's easy to slip off. Too easy -- occasionally it will fall off during an imaging session!

    BTW a 3mm HTD belt fits PERFECTLY on my Stellarvue knobs, and might on others too.

    • Like 1
  17. I actually started building a myFocuserPro, Arduino and all, but I could never get it to move the stepper. Dunno if I soldered something wrong or what.

    The huge advantage to the Pi/Waveshare route is that it's so simple. You stack the HAT board on top of the Pi. You connect the 4 wires from the stepper to the HAT board (it literally has 3 different ways to plug it into the board). You get Kevin Ross's indi_wmh_focuser driver. You start focusing.

    I used a couple of RJ-11 breakout boards with screw terminals so that I wouldn't have to solder anything, and so that the cable between Pi and focuser is light, secure, and absurdly easy to obtain. The mount for the focuser...it is to laugh. A piece of steel strapping with holes in it that we had lying around. I drilled more holes; the motor was already tapped for M2.5 bolts.

    The timing-belt pulley goes on the motor shaft, and the belt around the pulley and the coarse-focus knob.

    SV70-focuser.jpg

    • Like 2