Posts posted by The Lazy Astronomer

  1. On 23/07/2023 at 17:59, sinbad40 said:

    Hi all,

    Just a quick question on taking multi-panel mosaics. Is it best to take a panel per night, or a number of subs for each panel per night? So if you have 8 hours of capture during a night and you are planning, say, 4 panels, would you do a single panel a night, or split each night and do 2 hours per panel? Just trying to do some planning for winter. My scopes are not fast, at f/6.3 - f/7.5.

    Cheers all

    G

    I've done a couple of smaller (2 - 4 panel) mosaics both ways with no observable difference in the final product (to my eyes, at least), but my preference is to shoot a bit of each panel every night to average out the different atmospheric conditions and hopefully end up with a similar FWHM for each panel.

    As long as you know how to properly utilise the mosaic building feature in your processing software of choice, even quite extreme gradients can be overcome.
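
    Since this sort of planning comes up a lot, here's a toy sketch of the split-every-night approach using the numbers from the question (8 hours, 4 panels). The 5-minute sub length and panel names are just illustrative assumptions:

    ```python
    # Hypothetical round-robin planner: spread each night's subs across all
    # panels so every panel samples similar atmospheric conditions.
    SUB_LENGTH_MIN = 5                    # assumed sub length, minutes
    NIGHT_HOURS = 8
    PANELS = ["P1", "P2", "P3", "P4"]

    subs_per_night = int(NIGHT_HOURS * 60 / SUB_LENGTH_MIN)
    plan = [PANELS[i % len(PANELS)] for i in range(subs_per_night)]
    print({p: plan.count(p) for p in PANELS})
    # {'P1': 24, 'P2': 24, 'P3': 24, 'P4': 24} -> 2 hours per panel per night
    ```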

  2. Given that we've known about the environmental damage of fossil fuels for a considerable time, and that, as a collective group, we've made very little progress in moving to other ways of producing our energy with terrestrial technology (for a multitude of reasons), I'm not going to be too worried about this. If it happens within the next 200 years, please feel free to reanimate my corpse and I'll happily eat my hat.

  3. 2 hours ago, gonzostar said:

    Hi,

    I have a ZWO ASI385MC camera, initially used for lunar and planetary imaging. However, I want to explore deep sky a little more, and I've had some success with brighter objects.

    On the graphs in the manual, I'm a little confused about which gain setting I should use. The read noise drops off at 60, but on the other graph the gain (e-/ADU) says 135. Do I use 60 or 135, or somewhere in between? I've been using gain 60 with 30s subs with my setup. Does this sound about right? Any words of wisdom will be greatly appreciated.

    Cheers

    Dean

    The graphs are highlighting different aspects of the camera's performance.

    The read noise graph highlights the point at which the camera enters high conversion gain (HCG) mode.

    The gain (e-/ADU) graph highlights what is known as "unity gain", where 1 electron = 1 ADU.

    For deep sky, I would say, in general, unity gain is a good place to start off, and you can experiment with different gain settings later.
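
    If it helps to see how the two graphs relate, here's a rough sketch assuming ZWO's usual convention that the gain setting is in units of 0.1 dB (so e-/ADU changes by a factor of 10 per 200 gain units). The gain-0 conversion factor is an assumed value read off the manufacturer's e-/ADU graph:

    ```python
    import math

    def e_per_adu(gain_setting, e_per_adu_at_gain0):
        # ZWO gain is in 0.1 dB steps: every 200 units is a factor of 10
        return e_per_adu_at_gain0 * 10 ** (-gain_setting / 200)

    def unity_gain_setting(e_per_adu_at_gain0):
        # the gain setting at which 1 electron = 1 ADU
        return 200 * math.log10(e_per_adu_at_gain0)

    # If unity gain is quoted at 135, the implied gain-0 conversion is:
    print(10 ** (135 / 200))        # ~4.7 e-/ADU
    print(unity_gain_setting(4.7))  # ~134, i.e. back to roughly 135
    ```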

  4. 15 hours ago, Rodd said:

    focusing with crayfish

    Well there's your issue. The segmented body of the crustacean inherently introduces the potential for tilt into the system. 😁

    On a less stupid note, how good are the mirror locks really? I would have thought it would only take a tiny movement to cause an issue, so I think conducting the test suggested by onikkinen would be a very worthwhile exercise.

  5. 1 hour ago, wimvb said:

    Btw, the background of the L400 image is (after DBE) very flat. Here is the superstretched inverted image.

    [attachment: l400_clone1.jpg]

    Ah, much flatter than my bodge job, but I think I still see a sort of bordering effect - in the inverted image above, there's a lighter area around the outside (I'm assuming stacking artifacts), then a thicker, darker border, most visible along the bottom and right-hand edges.

    1 hour ago, Rodd said:

    I don't follow 100%. These two mono images are from where? Ahh, I see the label: one is mine, processed by you. I guess it is me after all; they look almost identical. Not bad data, I guess. Sky is a bit bright. Thanks for the input. I am slowly pulling myself out of the hole. I was sucked into quicksand!

    Oh yeah, sorry, I meant to specify which was which. It actually surprised me (although I guess it shouldn't have, really) just how similar they do appear to be. 

    Background-wise, I usually go for a value in the region of 0.08 - 0.09, which is a bit brighter than the values you've mentioned above, so personal preference, I guess. That said, I think the values for what I've posted are more like 0.10 - 0.11ish, so they actually are a little bright.
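
    For anyone wanting to check their own numbers against values like these, a minimal sketch (assuming a stretched mono image already scaled to the [0, 1] range):

    ```python
    import numpy as np

    def background_level(img, clip=2.5, iters=3):
        # Iteratively sigma-clip away stars and bright nebulosity, then
        # take the median of the remaining (background) pixels.
        pixels = img.ravel()
        for _ in range(iters):
            med, std = np.median(pixels), pixels.std()
            pixels = pixels[np.abs(pixels - med) < clip * std]
        return float(np.median(pixels))

    # Values around 0.08 - 0.10 correspond to the preferences discussed above.
    ```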

  6. The thing that always gets me is that if you put Betelgeuse in the Sun's place, its outer layers would extend most of the way to Jupiter. I just cannot get my head around the notion of a single thing that large - and it's not even close to being the largest known star!
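
    The back-of-the-envelope numbers, for anyone curious (Betelgeuse's radius is poorly constrained, so ~760 solar radii is just one commonly quoted estimate):

    ```python
    R_SUN_KM = 6.96e5        # solar radius, km
    AU_KM = 1.496e8          # astronomical unit, km
    BETELGEUSE_R_SUN = 760   # rough estimate; published values vary widely

    radius_au = BETELGEUSE_R_SUN * R_SUN_KM / AU_KM   # ~3.5 au
    JUPITER_ORBIT_AU = 5.2
    print(radius_au / JUPITER_ORBIT_AU)   # ~0.68: most of the way to Jupiter
    ```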

  7. Well, first of all, you're far too self-critical, Rodd! Your image might not be what you'd envisioned, but there is some lovely crisp detail, and the colour balance you've got presents very well indeed, in my opinion.

    I had a quick look at the pure luminance stack, and as I also had a stack of a similar integration time from my own attempt at M101, I've made a comparison between the two. 

    I would agree with the above re: flats - there are what appear to be residual dust motes visible when the image is stretched hard enough to bring the fainter areas into visibility. The image also isn't quite flat: a couple of the corners and the two short edges are darker. Not significantly, but it becomes apparent after a stretch.

    I couldn't really get a good background model from DBE for your image - I think I probably modelled the gradient incorrectly on the initial iteration, and in the end I've made a bit of a hash of it by running multiple iterations (there are quite visible brighter areas, particularly in the bottom right - maybe I'll try again later). For the purposes of the comparison, though, it's good enough, I think.

    I don't know what your pixel scale was, but I registered yours to mine (at 1.74"/pixel) with StarAlignment's default rescaling option, then cropped to match the FOV, applied DBE, and did some denoising. I used a couple of iterations of GHS to stretch. The first GHS focused on bringing forward the fainter regions, and as this tends to leave a rather flat, low-contrast image, a second GHS boosted the brighter regions a bit (for a finished image, I would usually spend a good long while dialling in the best settings, followed by some custom curves and other things like LHE, so this was a rather rough-and-ready go at it; there's a toy sketch of the two-stage idea at the end of this post). I tried to match the stretches visually as best I could, although I was doing it on a laptop, not my usual processing machine.

    [attachment: M101Comparison1.jpg]

    Close-up of a faint arm:

    [attachment: M101Comparison2.jpg]

    To my eye, all the same detail is there at similar brightness levels, and the background is nice and smooth (DBE-induced issues around the edges notwithstanding). I think you're just going to have to accept that if it's the faint stuff you want, noise reduction is going to be required to allow you to bring it out.
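
    As mentioned above, here's a toy sketch of the two-stage stretching idea. To be clear, this is not the actual GHS maths (GHS has several more parameters); it just illustrates the shape of the workflow - a first stretch to lift the faint regions, then a second pass to put some midtone contrast back. It assumes a linear, background-subtracted mono image scaled to [0, 1]:

    ```python
    import numpy as np

    def lift_faint(img, strength=100.0):
        # First pass: asinh is near-linear for faint pixels and logarithmic
        # for bright ones, so faint regions come forward without clipping stars.
        return np.arcsinh(img * strength) / np.arcsinh(strength)

    def add_contrast(img, k=4.0):
        # Second pass: a gentle S-curve to restore the midtone contrast the
        # first pass flattens (maps 0 -> 0, 0.5 -> 0.5, 1 -> 1).
        return (np.tanh(k * (img - 0.5)) + np.tanh(k * 0.5)) / (2 * np.tanh(k * 0.5))

    # stretched = add_contrast(lift_faint(linear_image))
    ```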

  8. Caveat: I'm viewing on my phone, and only at the resolution displayed as I view the post (i.e. not 100%).

    Other than the deeper reds and the slight boost in the visibility of the fainter (red) Ha regions, I see no real difference between the two.

    Going on colour balance alone, I would say RGB only is my preference, but then I've taken a real liking to broadband-only images over the past few months, so maybe that's just my natural bias at play... Both lovely nonetheless ☺

  9. 3 hours ago, iwols said:

    Hi, I have RGB subs all at 5 mins, plus 5 min darks. I also have lum subs at 2 mins. If I process them in the Weighted Batch Preprocessing script, do I have to do it twice (once for all the RGB, then again for the lums with the 2 min darks), or can I throw them in together with the 2 min darks and 5 min darks, and PI will know which darks the lum and RGB subs require? If that makes sense. Thanks.

    WBPP should automatically match them up, so you can just throw all the files in at once. You can check which calibration frames have been matched up to the lights in the calibration tab.
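
    If you ever want to sanity-check that matching outside PI, here's a sketch of the same exposure-time grouping logic (file paths are hypothetical; uses astropy to read the FITS headers):

    ```python
    from collections import defaultdict
    from glob import glob
    from astropy.io import fits

    def group_by_exposure(pattern):
        # Bucket frames by their EXPTIME header value, which is roughly
        # what WBPP's automatic matching keys on.
        groups = defaultdict(list)
        for path in glob(pattern):
            groups[fits.getheader(path)["EXPTIME"]].append(path)
        return groups

    lights = group_by_exposure("lights/*.fits")
    darks = group_by_exposure("darks/*.fits")
    for exptime, frames in sorted(lights.items()):
        print(f"{exptime:.0f}s: {len(frames)} lights, "
              f"{len(darks.get(exptime, []))} darks")
    ```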

  10. Ever the pragmatist, @ollypenrice 😁

    I wouldn't say I was agonising though, more just curious. I suppose I liked the certainty I got from an objective assessment with an image analysis tool, in addition to the subjective one made by my eyes (and indeed, where the difference was too small to see visually, it was the only way I could judge it).

    I'll freely admit that, visually, I couldn't tell the difference between any of the bin1 images (I could, however, see an improvement in the [x2 binned] bin1 vs native bin2 - a sketch of that software binning is at the end of this post), but I have learned something interesting (well, I think it's interesting):

    (1) My image integration routine was adding a not insignificant amount of extra blurring, not present at capture, and (2) @vlaiv has shown me how to minimise this extra blurring with some pretty simple changes.

    Now, the key question: is it worth it? I'll let you know when I've had to buy ANOTHER storage drive 😄
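
    For reference, a minimal numpy sketch of 2x2 software binning by averaging (whether this matches the exact method used for the comparison above is an assumption on my part):

    ```python
    import numpy as np

    def bin2x2(img):
        # Average each 2x2 block of pixels, trimming any odd row/column first.
        # Halves the sampling rate, as in the [x2 binned] bin1 comparison.
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    ```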
