Posts posted by The Lazy Astronomer
-
Given that we've known about the environmental damages of fossil fuels for some considerable time, and as a collective group, we've made really very little progress in moving to other ways to produce our energy with terrestrial technology (for a multitude of reasons), I'm not going to be too worried about this. If it happens within the next 200 years, please feel free to reanimate my corpse and I'll happily eat my hat.
-
4 hours ago, gonzostar said:
Thank you for the reply, a little clearer. So in my camera's case the unity gain is 60?
Cheers
Dean
Unity for your camera is 135.
-
2 hours ago, gonzostar said:
Hi
I have a ZWO ASI385MC camera, initially used for lunar and planetary imaging. However, I want to explore deep sky a little more. Had some success with brighter objects.
On the graph in the manual I'm a little confused about which gain setting I should use. The read noise drops off at 60, but on the other graph the gain e-/ADU says 135. Do I use 60 or 135, or in between? I've been using gain 60 with 30s subs with my setup. Does this sound about correct? Any words of wisdom will be greatly appreciated.
Cheers
Dean
The graphs are highlighting different aspects of the camera's performance.
The read noise graph highlights the point at which the camera enters high conversion gain mode.
The gain e-/ADU graph highlights what is known as "unity gain", where 1 electron = 1 ADU.
For deep sky, I would say, in general, unity gain is a good place to start off, and you can experiment with different gain settings later.
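For anyone who wants to sanity-check the numbers themselves: ZWO quote gain settings in units of 0.1 dB, so the e-/ADU curve follows a simple power-of-ten formula. A rough sketch below, where the gain-0 conversion factor is back-derived from unity sitting at 135 rather than taken from the spec sheet:

```python
# ZWO gain settings are in units of 0.1 dB, so 200 units of gain is a
# factor of 10 in amplification. If unity (1 e-/ADU) is at a gain
# setting of 135, the gain-0 conversion factor can be back-derived as
# 10**(135/200) ~= 4.73 e-/ADU (illustrative, not from the spec sheet).
E_PER_ADU_AT_ZERO = 10 ** (135 / 200)

def e_per_adu(gain_setting: float) -> float:
    """Approximate conversion factor (e-/ADU) for a given ZWO gain setting."""
    return E_PER_ADU_AT_ZERO * 10 ** (-gain_setting / 200)

print(round(e_per_adu(0), 2))    # ~4.73 e-/ADU at gain 0
print(round(e_per_adu(135), 2))  # 1.0, i.e. unity gain
```

Same idea in reverse: doubling the conversion factor corresponds to about 60 gain units (6 dB), which is why gain charts look linear on a log axis.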
-
Well guess who I've just seen on the tv!
Nice little bit - longer than I expected it to be too. Good work.
P.S. Can I have your autograph?
-
20 minutes ago, carastro said:
Set up an infrared webcam and catch him red handed.
Infrared-handed, if you will.
😁
-
I've managed to get out this evening too and have a sequence running as I type. My goodness me, it is bright!
-
15 hours ago, Rodd said:
focusing with crayfish
Well there's your issue. The segmented body of the crustacean inherently introduces the potential for tilt into the system. 😁
On a less stupid note, how good are the mirror locks really? I would have thought it would only take a tiny movement to cause an issue, so I think conducting the test suggested by onikkinen would be a very worthwhile exercise.
-
1 hour ago, wimvb said:
Ah, much flatter than my bodge job, but I think I still see a sort of bordering effect - in the inverted image above, a lighter area around the outside (I'm assuming stacking artifacts), then a thicker, darker border, most visible along the bottom and right-hand edges.
1 hour ago, Rodd said:
I don't follow 100%. These two mono images are from where? Ahh, I see the label: one is mine, processed by you. I guess it is me after all; they look almost identical. Not bad data, I guess. Sky is a bit bright. Thanks for the input. I am slowly pulling myself out of the hole. I was sucked into quicksand!
Oh yeah, sorry, I meant to specify which was which. It actually surprised me (although I guess it shouldn't have, really) just how similar they do appear to be.
Background-wise, I usually go for a value in the region of 0.08 - 0.09, which is a bit brighter than the values you've mentioned above, so personal preference, I guess. That said, I think the values for what I've posted are more like 0.10 - 0.11ish, so they actually are a little bright.
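For clarity, the background level I'm quoting is just the typical pixel value of the stretched image, normalised to [0, 1]. It's easy to check with a median, since most pixels in a deep-sky frame are background. A quick sketch using a synthetic frame in place of real data:

```python
import numpy as np

def background_level(image: np.ndarray) -> float:
    """Median pixel value of an image normalised to [0, 1].

    The median is a reasonable background estimate because the vast
    majority of pixels in a typical deep-sky frame are sky background,
    so bright objects barely move it.
    """
    return float(np.median(image))

# Synthetic stand-in for a stretched luminance frame: background noise
# around 0.085, plus a small bright "object" patch the median ignores.
rng = np.random.default_rng(0)
img = rng.normal(0.085, 0.005, size=(100, 100)).clip(0, 1)
img[40:45, 40:45] = 0.9
print(background_level(img))  # close to 0.085
```

In PixInsight the same number is what the Statistics process reports as the median (with the range set to normalised real).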
-
The thing that always gets me is that if you put Betelgeuse in the Sun's place, its outer layers would extend most of the way to Jupiter. I just cannot get my head around the notion of a single thing that large - and it's not even close to being the largest known star!
-
Well, first of all, you're far too self-critical, Rodd! Your image might not be what you'd envisioned, but there is some lovely crisp detail, and the colour balance you've got presents very well indeed, in my opinion.
I had a quick look at the pure luminance stack, and as I also had a stack of a similar integration time from my own attempt at M101, I've made a comparison between the two.
I would agree with the above re: flats, there are what appear to be residuals of dust motes visible when the image is stretched hard enough to bring the fainter areas into visibility. The image also isn't quite flat - a couple of the corners and the two short edges are darker. Not significantly, but it becomes apparent after a stretch.
I couldn't really get a good background model from DBE for your image - I think I probably modelled the gradient incorrectly on the initial iteration and in the end I've made a bit of a hash of it by running multiple iterations (there are quite visible brighter areas, particularly in the bottom right - maybe I'll try again later). For the purposes of the comparison though, it's good enough I think.
I don't know what your pixel scale was, but I registered yours to mine (at 1.74"/pixel) with StarAlignment's default rescaling option, then cropped to match FOV, DBE, and some denoising. I used a couple iterations of GHS to stretch. The first GHS focused on bringing forwards the fainter regions, and as this tends to leave a rather flat, low-contrast image, a second GHS boosted the brighter regions a bit (for a finished image, I would usually spend a good long while dialling the best settings in, followed by some custom curves and other things like LHE, so this was a rather rough and ready go at it). I tried to match the stretches visually as best I could, although I was doing it on a laptop, not my usual processing machine.
Close up of a faint arm:
To my eye, all the same detail is there at similar brightness levels, and background is nice and smooth (DBE induced issues around the edges notwithstanding). I think you're just going to have to accept that if it's the faint stuff you want, noise reduction is going to be required to allow you to bring it out.
-
Caveat: viewing on my phone, and only at the resolution displayed as I view the post (i.e. not 100%).
Other than the deeper reds, and the slight boost in the visibility of the fainter (red) Ha regions, I see no real difference in the two.
Going on colour balance alone, I would say RGB-only is my preference, but then I've taken a real liking to broadband-only images over the past few months, so maybe it's just my natural bias at play... Both lovely nonetheless ☺
-
Your solution is to buy a RASA. Don't worry though, I'll take the current useless scope off your hands 😁
*Obviously I'm joking!!
-
Did not read article. Saw the word rum. Don't care what it is - I'm in 🤣
-
Well, the good news is, it's not your eyes 🤣
I'm afraid I have nothing useful to offer, other than: collimation?
-
I would also say 12 hours total, and agree that this is pretty much universal. I'd approach the split slightly differently though: instead of 4 x 3hrs for all channels, I'd do 6hrs L and 2hrs each for RGB.
-
As above, really. That's no cause for concern; flats should calibrate them out entirely (as well as deal with the vignetting in the corners). I wouldn't spend any more time on it if I were you.
-
I use an older version of NINA from 2021 too. As above, it all works, so no desire to upgrade (maybe soon though, for the advanced sequencer).
-
As with anything, the most desirable telescope is the one I don't have 😁
-
So what, if any, would be the most appropriate algorithm for deconvolution of stacked images?
-
3 hours ago, iwols said:
Hi, I have RGB subs all at 5 mins and 5-min darks. I also have lum subs at 2 mins. If I process them with 2-min darks in the weighted batch processor, do I have to do it twice (once for all the RGB, then again for the lums), or can I throw them all in together with the 2-min and 5-min darks, and PI will know which darks the lum and RGB subs require? If that makes sense, thanks.
WBPP should automatically match them up, so you can just throw all the files in at once. You can check which calibration frames have been matched up to the lights in the calibration tab.
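The matching WBPP does is essentially keyed on exposure time (among other frame properties). A toy sketch of the idea below, with made-up frame data; this is just to illustrate the grouping logic, not WBPP's actual implementation:

```python
from collections import defaultdict

# Illustrative frame list: (frame type, filter, exposure in seconds).
# WBPP reads these properties from the FITS headers automatically.
frames = [
    ("LIGHT", "L", 120), ("LIGHT", "R", 300), ("LIGHT", "G", 300),
    ("LIGHT", "B", 300), ("DARK", None, 120), ("DARK", None, 300),
]

# Darks are indexed by exposure; each light is matched to the dark
# group with the same exposure time.
dark_exposures = {exp for kind, _, exp in frames if kind == "DARK"}
matched = defaultdict(list)
for kind, filt, exp in frames:
    if kind == "LIGHT" and exp in dark_exposures:
        matched[exp].append(filt)

print(dict(matched))  # {120: ['L'], 300: ['R', 'G', 'B']}
```

So the 2-min lum subs pick up the 2-min darks and the 5-min RGB subs pick up the 5-min darks, all in one run.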
-
Ever the pragmatist, @ollypenrice 😁
I wouldn't say I was agonising though, more just curious. I suppose I liked the certainty I got with an objective assessment afforded by an image analysis tool, in addition to the subjective one made by my eyes (and indeed, where the difference was too small to see visually, it was the only way I could judge it).
I'll freely admit that, visually, I couldn't tell the difference between any of the bin1 images (I could, however, see an improvement in the [x2 binned] bin1 vs native bin2), but I have learned something interesting (well, I think it's interesting):
(1) My image integration routine was adding a not insignificant amount of extra blurring, not present at capture, and (2) @vlaiv has shown me how to minimise this extra blurring with some pretty simple changes.
Now, the key question: is it worth it? I'll let you know when I've had to buy ANOTHER storage drive 😄
-
Is the vignetting even really that severe? I assume the above is stretched in a way that makes it look bad, but what are the actual differences in pixel values between the bright and dark regions?
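One quick way to put a number on it is to compare the median pixel value in a corner against the centre of the unstretched frame. A rough sketch, using a synthetic frame with mild radial falloff standing in for real data:

```python
import numpy as np

def vignetting_drop(image: np.ndarray, box: int = 50) -> float:
    """Corner brightness as a fraction of centre brightness (medians)."""
    h, w = image.shape
    centre = np.median(image[h//2 - box//2 : h//2 + box//2,
                             w//2 - box//2 : w//2 + box//2])
    corner = np.median(image[:box, :box])
    return float(corner / centre)

# Synthetic frame: brightness falls off radially by up to 15% toward
# the corners, mimicking mild vignetting in a real light or flat.
yy, xx = np.mgrid[:500, :500]
r2 = (yy - 250) ** 2 + (xx - 250) ** 2
frame = 1.0 - 0.15 * r2 / r2.max()
print(vignetting_drop(frame))  # ~0.88 for this synthetic frame
```

Anything in that sort of range is routine and flats will handle it without drama; a screen-stretched preview will always make it look far worse than it is.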
-
Has anyone heard any word of a potential mono 2600 Duo? I'm thinking of moving to a dual-scope setup with an OAG, but that could save me the hassle of buying and setting one up.
Taking multi panel images
in Getting Started With Imaging
Posted
I've done a couple of smaller (2 - 4 panel) mosaics both ways with no difference observable in the final product (to my eyes, at least), but my preference is to try and shoot a bit of each panel every night to average out the different atmospheric conditions and hopefully end up with a similar FWHM for each panel.
As long as you know how to properly utilise the mosaic building feature in your processing software of choice, even quite extreme gradients can be overcome.