Everything posted by The Lazy Astronomer

  1. I've done a couple of smaller (2 - 4 panel) mosaics both ways with no difference observable in the final product (to my eyes, at least), but my preference is to try and shoot a bit of each panel every night to average out the different atmospheric conditions and hopefully end up with a similar FWHM for each panel. As long as you know how to properly utilise the mosaic building feature in your processing software of choice, even quite extreme gradients can be overcome.
  2. Given that we've known about the environmental damage of fossil fuels for some considerable time, and that, as a collective group, we've made really very little progress in moving to other ways of producing our energy with terrestrial technology (for a multitude of reasons), I'm not going to be too worried about this. If it happens within the next 200 years, please feel free to reanimate my corpse and I'll happily eat my hat.
  3. The graphs are highlighting different aspects of the camera's performance. The read noise graph highlights the point at which the camera enters high conversion gain mode. The gain (e-/ADU) graph highlights what is known as "unity gain", where 1 electron = 1 ADU. For deep sky, I would say, in general, unity gain is a good place to start, and you can experiment with different gain settings later.
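The electrons-to-ADU relationship described above can be sketched in a few lines (a minimal illustration only; the gain values here are made up for the example, not taken from any specific camera's graph):

```python
# Convert detected electrons to recorded ADU for a given gain setting.
# Gain is expressed in e-/ADU: the number of electrons per output count.
def electrons_to_adu(electrons, gain_e_per_adu):
    """Return the ADU value recorded for a given electron count."""
    return electrons / gain_e_per_adu

# At unity gain (1.0 e-/ADU), 1 electron = 1 ADU, so counts pass through.
print(electrons_to_adu(500, 1.0))   # 500.0
# At a higher gain setting of 0.25 e-/ADU, each electron is worth 4 ADU.
print(electrons_to_adu(500, 0.25))  # 2000.0
```

This is why unity gain is a convenient reference point: the raw counts map directly to electrons, which makes reasoning about signal and noise levels straightforward.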
  4. Well, guess who I've just seen on the TV! Nice little bit - longer than I expected it to be, too. Good work. P.S. Can I have your autograph?
  5. I've managed to get out this evening too and have a sequence running as I type. My goodness me, it is bright!
  6. Well there's your issue. The segmented body of the crustacean inherently introduces the potential for tilt into the system. 😁 On a less stupid note, how good are the mirror locks really? I would have thought it would only take a tiny movement to cause an issue, so I think conducting the test suggested by onikkinen would be a very worthwhile exercise.
  7. Ah, much flatter than my bodge job, but I think I still see a sort of bordering effect - in the inverted image above, a lighter area around the outside (I'm assuming stacking artifacts), then a thicker, darker border, most visible along the bottom and right-hand edges. Oh yeah, sorry, I meant to specify which was which. It actually surprised me (although I guess it shouldn't have, really) just how similar they do appear to be. Background-wise, I usually go for a value in the region of 0.08 - 0.09, which is a bit brighter than the values you've mentioned above, so personal preference, I guess. That said, I think the values for what I've posted are more like 0.10 - 0.11ish, so they actually are a little bright.
  8. The thing that always gets me is that if you put Betelgeuse in the Sun's place, its outer layers would extend most of the way to Jupiter. I just cannot get my head around the notion of a single thing that large - and it's not even close to being the largest known star!
  9. Well, first of all, you're far too self-critical, Rodd! Your image might not be what you'd envisioned, but there is some lovely crisp detail, and the colour balance you've got presents very well indeed, in my opinion.

I had a quick look at the pure luminance stack, and as I also had a stack of a similar integration time from my own attempt at M101, I've made a comparison between the two. I would agree with the above re: flats - there are what appear to be residuals of dust motes visible when the image is stretched hard enough to bring the fainter areas into visibility. The image also isn't quite flat - a couple of the corners and the two short edges are darker. Not significantly, but it becomes apparent after a stretch.

I couldn't really get a good background model from DBE for your image - I think I probably modelled the gradient incorrectly on the initial iteration, and in the end I've made a bit of a hash of it by running multiple iterations (there are quite visible brighter areas, particularly in the bottom right - maybe I'll try again later). For the purposes of the comparison, though, it's good enough, I think.

I don't know what your pixel scale was, but I registered yours to mine (at 1.74"/pixel) with StarAlignment's default rescaling option, then cropped to match FOV, DBE, and some denoising. I used a couple of iterations of GHS to stretch: the first focused on bringing forward the fainter regions, and as this tends to leave a rather flat, low-contrast image, the second boosted the brighter regions a bit (for a finished image, I would usually spend a good long while dialling in the best settings, followed by some custom curves and other things like LHE, so this was a rather rough and ready go at it). I tried to match the stretches visually as best I could, although I was doing it on a laptop, not my usual processing machine.

Close up of a faint arm: to my eye, all the same detail is there at similar brightness levels, and the background is nice and smooth (DBE-induced issues around the edges notwithstanding). I think you're just going to have to accept that if it's the faint stuff you want, noise reduction is going to be required to allow you to bring it out.
  10. Saw this pop up earlier. Good to see they've moved on from the previous giraffe-based measurement standard...
  11. Caveat: viewing on my phone, and only at the resolution displayed as I view the post (i.e. not 100%). Other than the deeper reds and the slight boost in the visibility of the fainter (red) Ha regions, I see no real difference between the two. Going on colour balance alone, I would say RGB-only is my preference, but then I've taken a real liking to broadband-only images over the past few months, so maybe it's just my natural bias at play... Both lovely nonetheless ☺
  12. Your solution is to buy a RASA. Don't worry though, I'll take the current useless scope off your hands 😁 *Obviously I'm joking!!
  13. Did not read article. Saw the word rum. Don't care what it is - I'm in 🤣
  14. Well, the good news is, it's not your eyes 🤣 I'm afraid I have nothing useful to offer, other than: collimation?
  15. I would also say 12 hours total, and agree that this is pretty much universal. I'd approach the split slightly differently though: instead of 4 x 3hrs across all channels, I'd do 6hrs L and 2hrs each for RGB.
  16. As above, really. That's no cause for concern; flats should calibrate them out entirely (as well as deal with the vignetting in the corners). I wouldn't spend any more time on it if I were you.
  17. I use an older version of NINA from 2021 too. As above, it all works, so no desire to upgrade (maybe soon though, for the advanced sequencer).
  18. As with anything, the most desirable telescope is the one I don't have 😁
  19. So what, if any, would be the most appropriate algorithm for deconvolution of stacked images?
  20. WBPP should automatically match them up, so you can just throw all the files in at once. You can check which calibration frames have been matched up to the lights in the calibration tab.
  21. Ever the pragmatist, @ollypenrice 😁 I wouldn't say I was agonising though, more just curious. I suppose I liked the certainty I got with an objective assessment afforded by an image analysis tool, in addition to the subjective one made by my eyes (and indeed, where the difference was too small to see visually, it was the only way I could judge it). I'll freely admit that, visually, I couldn't tell the difference between any of the bin1 images (I could, however, see an improvement in the [x2 binned] bin1 vs native bin2), but I have learned something interesting (well, I think it's interesting): (1) my image integration routine was adding a not insignificant amount of extra blurring, not present at capture, and (2) @vlaiv has shown me how to minimise this extra blurring with some pretty simple changes. Now, the key question: is it worth it? I'll let you know when I've had to buy ANOTHER storage drive 😄
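For anyone curious what the x2 software binning mentioned above actually does, it can be sketched with numpy (a minimal average-binning example; real stacking software handles normalisation and interpolation rather more carefully):

```python
import numpy as np

def bin2x2(img):
    """Average-bin a 2D image by a factor of 2 in each axis.

    Trims any odd final row/column so the shape divides evenly,
    then replaces each 2x2 block with its mean value.
    """
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    trimmed = img[:h, :w]
    return trimmed.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A 4x4 ramp bins down to 2x2; each output pixel is the mean of a 2x2 block.
demo = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(demo))  # [[ 2.5  4.5]
                     #  [10.5 12.5]]
```

Averaging four pixels into one roughly halves the per-pixel noise, at the cost of resolution - which is the trade-off being weighed in the bin1 vs bin2 comparison above.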
  22. Is the vignetting even really that severe? I assume the above is stretched in a way that makes it look bad, but what are the actual differences in pixel values between the bright and dark regions?
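One quick way to answer that question is to compare median pixel values in a central patch against the corners of the unstretched linear frame (a rough numpy sketch; the synthetic `frame` below is a made-up stand-in for a real sub):

```python
import numpy as np

def vignetting_ratio(img, patch=100):
    """Compare the median brightness of the image centre to the darkest corner.

    Returns darkest_corner / centre: a ratio near 1.0 means the vignetting
    is mild, and flats should deal with it without trouble.
    """
    h, w = img.shape
    cy, cx = h // 2, w // 2
    centre = np.median(img[cy - patch // 2:cy + patch // 2,
                           cx - patch // 2:cx + patch // 2])
    corners = [img[:patch, :patch], img[:patch, -patch:],
               img[-patch:, :patch], img[-patch:, -patch:]]
    darkest = min(np.median(c) for c in corners)
    return darkest / centre

# Synthetic frame: a flat 1000 ADU field with one corner dimmed to 850 ADU.
frame = np.full((1000, 1000), 1000.0)
frame[:100, :100] = 850.0
print(vignetting_ratio(frame))  # 0.85 -> only a 15% falloff
```

A screen stretch can make even a 10-15% falloff look dramatic, so measuring the linear values is the honest test.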
  23. Has anyone heard any word of a potential mono 2600 duo? I'm thinking of moving to a dual scope setup with an OAG, but that could save me the hassle of buying and setting up an OAG.