
Everything posted by rickwayne

  1. Certainly people have done outstanding images with zooms. But you don't get something for nothing, and optical design, like any engineering discipline, is all about compromises. A general-purpose camera lens, especially a complex optomechanical one, has to play off all manner of constraints against each other. By comparison, a telescope has a very simple job: provide sharp, contrasty, aberration-free images at a single focal length, at a single distance (infinity). One of the major advantages of a camera lens is of course that it's FAR more useful for terrestrial photography too. And your point about focusing is well taken. An astro-suitable telescope will have a geared-down fine focuser operating at a 10:1 ratio, because there is no depth of field and stars are incredibly unforgiving of even tiny focus errors. (We are talking about linear movements in the 10-micron range!)
  2. Focus looks much better on the second one. What are you using to focus? You might consider downloading starnet++ and trying out a star-removal workflow. I suspect that you've got more data available at the low end, and might be able to stretch out more nebulosity while retaining smaller, more colorful stars. An excellent start.
  3. You've no exposure adjustment dialed in for Av mode, I assume.
  4. Well, that's got it goin' ON. Nice detail and the colors look natural. I have been getting some pretty decent results out of Topaz DeNoise. I bought their original product and now really like the "AI" version. I particularly enjoy the split-frame user interface: the original on one side of a movable divider, the processed version on the other, and you can slide the divider back and forth to compare.
  5. I would love to help but I've only used Siril for stacking individual still frames.
  6. Well, the good news is that a lot of astro gear would be pretty hard to fence effectively. But it LOOKS expensive.
  7. Oh yeah, absolutely. Hydrogen acquisition is really robust to LP and moonlight so you can grab a crap-ton of it, on nights when nothing else would work, and so long as you're working on primarily emission targets it's really going to pay off. Heck, you might even find yourself seduced to the black and white side.
  8. Looks good! IMHO if you want to extend the nebulosity in this image your best bet is a bleep-ton more Ha.
  9. I doubt you need to worry too much about it. Close enough will be good enough in this case. It's more of an issue once you dive into narrowband; there's just a lot more light from hydrogen out there, for example. It also simplifies your workflow and processing if the exposures are the same. One set of darks suffices. You might even get away with one set of flats if you have REALLY CLEAN filt--no, forget I said that. Do it right and take All Teh Flatzes, is my advice.
  10. Ideally what you want is a cooled astro camera. Sorry. I know, we can't have all the toys we want. And DSLRs have sensors that most astro-camera folks envy. The Sony in my Pentax K5-iis is WAY better than the IMX183 in my nice cooled mono ASI. Since astro cameras are a tiny niche market, you will pay through the nose for an APS-C sized sensor, especially one with the dynamic range and low noise typical of DSLRs. There. Is that better? May I be not flamed now?
  11. If you're starting out, the usual advice is to start short, i.e. wide-field. You will have challenges enough, and longer focal lengths will just exacerbate problems and impede your progress. Since optical design challenges pile up rapidly as you increase curvature (i.e., as the focal ratio gets faster), wider apertures mean longer focal lengths if the design is to stay manageable. Hence the time-honored 80mm refractor as a starting scope. 350 to 500mm F/L is long enough for a rich variety of great targets, yet short enough to fit big bright ones (Orion, Andromeda) and not drive you mad in the process.
  12. It's like the "dumb criminals" features on the late-night shows, I just can't get enough of these kinds of threads. SO comforting to know that I'm not the only one turning hours of time and thousands of quid worth of equipment into...bad language.
  13. Actually I find that with Astro Pixel Processor I'm spending very little more time processing the individual filters' outputs, at least up through calibration and integration. Once I tell it in the loading process which lights go with which filter (and, optionally, session), it just handles it and I get N integrated images, where N == number of filters. There is the process of compositing itself, of course, which is alien to OSC folks. But it seems most people do e.g. gradient reduction after compositing, rather than on the individual mono images. So yeah, it takes a little longer but I'm definitely one of the "hurts so good" crew.
  14. I don't think you'll be sorry you went with mono. The flexibility of doing RGB, LRGB, HaRGB, monochrome Ha, bicolor narrowband, and three-or-better-band narrowband...it's just the nuts IMO. Since I'm very much still learning, the ZWO filters are by no means the limiting reagent for me, YMMV. If I'd waited until I could afford something better than their $130 NB filters I'd still be waiting, instead of imaging.
  15. Oh, sorry! Linear processing is what you do before "stretching", aka contrast enhancement. Usually you wind up with better results if you do as much work as possible on the image before that happens. Calibration means using special frames like darks (for subtracting the thermal signal the sensor records even when there's no light), bias (for subtracting the signal the camera adds to even the shortest-possible exposure), and flats (recording your optical train's and sensor's imperfections such as vignetting, dust, or funky pixels). Then there's integration, aka stacking: averaging a bunch of sub-exposures to yield a smooth, noise-free (hah!) image. Light pollution usually casts some kind of gradient across the image, which should be removed, along with any overall color cast. Then the next step is stretching: mapping the very narrow range of values that represent what you want to see (e.g., the tones in a nebula) onto the full range of values in the final image. For example, the whole range of values from the darkest to the brightest parts of a nebula might only span a few hundred numbers in the calibrated, stacked image, but you want that to range from near-black to almost-white. That's the levels and curves part. Then comes more tweaking. You might apply extra stretching to particular bits, for example. Or do noise reduction. Or digital sharpening. (There's a rough numerical sketch of the calibrate/stack/stretch arithmetic after this list.)
  16. As far as I can tell the D3400 is supported by the INDI library, which means you could run it with KStars and Ekos. That's a full-featured astronomy suite, runs on Macs and Linux too.
  17. I'm a big fan of Charles Bracken's book. If you're set on Photoshop, see if you can pick up a copy of the first edition of The Deep-Sky Imaging Primer; the (current) 2nd Edition does include a fair bit of info on Photoshop processing but concentrates more on PixInsight. 1st Ed. was heavier on PS, or so he says (I only have 2nd Ed.). I would heartily recommend that you do your linear processing in something other than Photoshop, though. I'm not even sure how you'd accomplish calibration with dark and flat frames in PS, for example. There are gradient-removal plugins, but really you'll get better images if you use a dedicated astro package to do calibration and integration at least. PS only has mean and median stacking, for example; the more sophisticated rejection algorithms in DSS, Astro Pixel Processor, Siril, or PixInsight will make better use of your sub-exposures (there's a toy sigma-clipping sketch after this list).
  18. Under most conditions you can actually get away with less total integration time with LRGB than with OSC for equivalent results. You can short the color data in favor of concentrating on high-res (no debayering!) low-noise Ha or luminance. Or so say some experts I respect. Plus, as others note, the mono camera enables narrowband work down the road.
  19. This may be pretty orthogonal to your original request, since: I have the cooled version, I have the mono version, I'm running at 336mm focal length, and I'm strictly a DSO guy. OK? Caveats noted. I deliberately chose the tiny-pixeled 183 because my DSLR had whacking great pixels and I wanted as distinct an alternative as I could feasibly obtain. So now I've got a wide-field, low-noise, OSC camera, and a narrower-field, fine-image-scale one. YMMV, and almost certainly does. Know that the IMX183 sensors are notorious for amp glow. It calibrates out, yes, but it does add complexity -- if you use darks at a slightly wrong temperature, or forget which way bias or dark-flat frames go, you'll be left with a very visible pattern on your image after stretching. This is an argument for ponying up the extra cash for a cooled camera, so that all your frames are at precisely the same temperature and dark current can be correctly subtracted from everything. For what it's worth, here are a couple of narrowband images and an RGB image shot with the 183. Focus and guiding were both a lot better on the Heart than on the other two.
  20. Thanks! This is WAY more solid info than I had. I'm still a little hungry for something deeper than "this isn't a problem with our filters because reasons", but I don't doubt their assertion -- nothing like a repeatable experiment to back one up! Plenty good enough to guide any notional future purchases; now I'm merely curious. And I have to go home, take the cover off my filter wheel, and peer at the physical object -- ZWO's spec sheet says 7nm, not 12, and I'm reasonably confident that's the product I have. Duh.
  21. I've heard plenty of folklore -- and repeated it myself! -- about the relationship between filter bandpass and aperture ratio, but I'd love to hear some quantitative information. I know that the angle of incidence can dramatically affect how much light passes through an interference filter, but specific nanometers and f-ratios I have not seen. (I've put a back-of-the-envelope calculation after this list.) My optical train is f/4.8 according to Stellarvue; I'm using 1.25" 12-nm filters on a 1" sensor with 55mm back-focus. Vignetting certainly isn't egregious -- I mean if I stretch a flat till it screams I can certainly see some, but it calibrates out easily. Heck, I've got a 183, vignetting is the LEAST of my calibration problems! 🙂 (If you don't get that joke, do a Google image search for "IMX183 amp glow"...and marvel.)
  22. I should point out that I do understand that requirements may vary. For me, a laptop would have to have a 12-hour battery life to suffice as my main/only imaging computer. My multiday trips totally divorced from mains power are few, but they're CORE. The Pi-based gadgets have the advantage that the current-eating devices (i.e., those with screens) need only be operating while you're actively interacting. For the long hours of sequenced imaging, you can run your rig on the Pi's trickle of battery power. My Pi 4 gets its juice via a Waveshare stepper-motor control board, so I don't even have to have a converter -- just plug another barrel connector in. Sometimes when I've worked in the back yard, I've used the high-end MacBook Pro on which I develop software for a living. 2-second plate solves, I'm down with that! But then I have to worry about leaving it outside all night. If the Pi gets rained on or possum-gnawed, meh, it was $56.
  23. It really is a kick, isn't it? Orion is a great target: it's easy to get something amazing, and you can spend a career getting something just a little bit better every time. I certainly haven't gotten an image of it yet that really satisfies me, but I've done a bunch I'm proud of nevertheless. And by "easy", please don't think I'm implying that you have a run-of-the-mill first result there. Yours is amazinger :-).
  24. Gonna put in a pitch for the Pi here. A Raspberry Pi 4b with 4 GB RAM will set you back all of US$56. It has plenty of snort to do sequencing, guiding, plate solving, and autofocus, while running a planetarium program too. A Pi 3, like the one in the ASIAir, will struggle a bit doing all that. You can purchase turnkey software for US$49 from Stellarmate, or go free from the ground up if you prefer. There are at least two known-good platforms (Stellarmate and Astroberry) to choose from. For that matter, you can buy a Pi 3 and case with everything preloaded from SM for US$149. You can remote in wirelessly from a tablet, phone, or laptop, or connect an HDMI display and USB keyboard and mouse if you prefer. The whole thing will easily run all night off any battery that can power your mount and camera. I won't pretend I've never had any problems getting things working from time to time. (Few astronomers can!) But it largely Just Works.
  25. Dithering, in case the term isn't clear, is displacing the optical train by a few pixels' worth every so often, so the same pattern of photons isn't falling on exactly the same spots on the sensor for frame after frame; letting that happen can lead to noise artifacts ("walking noise" is the one commonly called out). Dithering is frequently and conveniently done in conjunction with autoguiding, because repointing is exactly what an autoguider does -- nudge the optics by a tiny bit (a fraction of a pixel) to compensate for mount imperfections or whatever during the exposure. The same software and hardware can be used between exposures to repoint by some number of pixels in a random direction, so that the resulting frames don't quite all line up and the noise artifacts are vanquished (there's a toy sketch of that nudge after this list). Stacking software has to be able to align images anyway (frame-to-frame pointing is never perfect to begin with), so it's easy for it to deal with dithered images. It can be done manually by changing where the optics are pointing by a little bit between frames, whether that be via a mount's hand controller (mildly tedious) or by changing a ballhead's position by a fraction (SUPER tedious).
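Re the calibration/integration/stretching walk-through in post 15: here's a minimal numpy sketch of the arithmetic, just to make the words concrete. It is not any particular package's pipeline; the file names, frame counts, and the asinh stretch factor are placeholders I made up for illustration.

```python
# Toy version of calibrate -> integrate -> stretch (see post 15).
# Assumes the darks match the lights' exposure and temperature, so the master
# dark already contains the bias signal. File names and counts are placeholders.
import numpy as np
from astropy.io import fits

def load(path):
    return fits.getdata(path).astype(np.float64)

master_dark = np.mean([load(f"dark_{i:03d}.fits") for i in range(20)], axis=0)
master_bias = np.mean([load(f"bias_{i:03d}.fits") for i in range(50)], axis=0)
master_flat = np.mean([load(f"flat_{i:03d}.fits") for i in range(30)], axis=0) - master_bias
master_flat /= np.mean(master_flat)        # normalise so dividing preserves flux

# Calibrate each light frame: subtract thermal/readout signal, divide out
# vignetting and dust shadows recorded by the flat.
lights = [(load(f"light_{i:03d}.fits") - master_dark) / master_flat for i in range(40)]

# Integration ("stacking"): a plain average here; real tools use outlier rejection.
stacked = np.mean(lights, axis=0)

# A crude non-linear stretch: squeeze the narrow range of faint nebulosity up
# toward the mid-tones. Real processing uses iterative curves/levels instead.
norm = (stacked - stacked.min()) / (stacked.max() - stacked.min())
stretched = np.arcsinh(norm * 500.0) / np.arcsinh(500.0)
```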
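And re the stacking-algorithm point in post 17: a toy sigma-clipped mean, to show why rejection stacking beats Photoshop's plain mean or median. This is only a sketch of the idea, not what DSS, Astro Pixel Processor, Siril, or PixInsight actually implement internally.

```python
# Sigma-clipped mean: per pixel, reject values that are wild outliers across the
# stack (satellite trails, cosmic-ray hits) before averaging what's left.
import numpy as np

def sigma_clipped_mean(frames, kappa=2.5, iterations=3):
    """Average a stack of already-aligned frames, ignoring, per pixel, any
    value more than `kappa` standard deviations from that pixel's mean."""
    data = np.asarray(frames, dtype=np.float64)      # shape (n_frames, H, W)
    keep = np.ones(data.shape, dtype=bool)
    for _ in range(iterations):
        count = np.maximum(keep.sum(axis=0), 1)
        mean = np.where(keep, data, 0.0).sum(axis=0) / count
        var = np.where(keep, (data - mean) ** 2, 0.0).sum(axis=0) / count
        keep = np.abs(data - mean) <= kappa * np.sqrt(var)
    count = np.maximum(keep.sum(axis=0), 1)
    return np.where(keep, data, 0.0).sum(axis=0) / count
```

Feed it the calibrated lights from the previous sketch in place of the plain np.mean and the satellite trails vanish from the stack.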
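Re post 21, the quantitative bit I was fishing for: the usual first-order model says an interference filter's centre wavelength blueshifts with angle of incidence as lambda(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)^2). Below is that formula plugged into the marginal ray of an f/4.8 cone; note that n_eff = 2.0 is an assumed, typical coating value, not a measured figure for any particular filter.

```python
# How far does an f/4.8 light cone push an Ha filter's centre wavelength?
# Standard thin-film approximation; n_eff is assumed, not measured.
import math

def marginal_ray_angle_deg(f_ratio):
    """Half-angle of the converging light cone for a given focal ratio."""
    return math.degrees(math.atan(1.0 / (2.0 * f_ratio)))

def shifted_centre_nm(centre_nm, theta_deg, n_eff=2.0):
    s = math.sin(math.radians(theta_deg)) / n_eff
    return centre_nm * math.sqrt(1.0 - s * s)

theta = marginal_ray_angle_deg(4.8)                    # ~6 degrees
shift = 656.28 - shifted_centre_nm(656.28, theta)      # H-alpha line
print(f"marginal ray {theta:.1f} deg -> blueshift {shift:.2f} nm")
# Under 1 nm at the very edge of the cone: small next to a 7 nm (or 12 nm)
# bandpass, which is presumably why slow-ish scopes get away with ordinary
# narrowband filters while fast astrographs need the high-speed versions.
```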
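Finally, re dithering in post 25, a toy sketch of the "nudge between frames" step. Real capture suites (PHD2, Ekos, NINA and friends) do this for you; the pixel scale and dither radius below are example numbers, not anyone's defaults.

```python
# Pick a small random pointing offset after each exposure so fixed-pattern noise
# lands on different photosites every frame. Numbers here are illustrative only.
import random

PIXEL_SCALE_ARCSEC = 1.5    # assumed image scale, arcsec per imaging-camera pixel
DITHER_RADIUS_PX = 5        # how far to nudge, in imaging-camera pixels

def next_dither_offset():
    """Random (RA, Dec) offset in arcseconds to apply before the next frame."""
    dx = random.uniform(-DITHER_RADIUS_PX, DITHER_RADIUS_PX) * PIXEL_SCALE_ARCSEC
    dy = random.uniform(-DITHER_RADIUS_PX, DITHER_RADIUS_PX) * PIXEL_SCALE_ARCSEC
    return dx, dy

for frame in range(1, 6):
    # ...expose, download, then nudge the mount before the next frame...
    d_ra, d_dec = next_dither_offset()
    print(f"after frame {frame}: nudge RA {d_ra:+.1f}\", Dec {d_dec:+.1f}\"")
```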