Posts posted by Filroden

  1. 1 minute ago, The Admiral said:

    It's coming on, but as you say you are rather limited in data at the moment. It looks like it's a tough target though, and quite small.

    Ian

    Indeed. I think it's both faint and very small (and there is lots of dust in the area which I don't think I have any hope of capturing). Nonetheless, I'm pleased to get some colour differentiation across the nebula, and this is the first time I think my flats have worked (I tried a new technique of putting up a white image on my large TV and using that as a light box, with sheets of paper over the scope to increase the exposure times to between 0.5 and 1.5 seconds).

  2. 11 minutes ago, The Admiral said:

    Perhaps on reflection you are right. I think I was confusing using different groups as tantamount to having separate stacks (they would need to be put into separate groups because they would need to be associated with the appropriate length darks). Presumably that's not the case and that they are all essentially stacked together. Presumably you are also saying that if you produced different stacks on separate nights, that you would re-stack the lot together rather than combining the separate results? PixInsight does give one greater opportunity to finesse these things though, which of course is why it is quite complex.

    It still leaves me a bit confused about the DSS statement though.

    Ian

     

    Indeed. I would always go back to the raw subs even if taken over many nights and stack them all together. If you were using darks, you may even need tabs for different nights as well as different exposure lengths as the temperatures could differ from one night to another.

    I have to admit I also cannot wrap my head around the math. In my simple brain I think it's like averaging averages. You don't know the weight/standard deviation of each average so you cannot be sure of the quality of the final result. Say you had two results:

    999 of 1000 people all agreed the Earth was round from a survey

    combined with

    1 out of 2 people agreed the Earth was round from a sample of posts in a forum

    If you just averaged the two results (99.9% and 50.0%) you would conclude only 74.95% of people thought the Earth was round. You're degrading your result with the worse sample. You'd actually need to weight the larger sample massively. However, when combining two already stacked images, DSS has no information about the size of each stack so cannot apply that weighting.

    P.S. And assuming the sampling was equally random in both results, the right approach would be to consider this as 1000 out of 1002 people thought the Earth was round, i.e. both data sets were combined before generating the result.
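    P.P.S. For anyone who likes to see it in numbers, here's a rough Python sketch of the same point (the figures are just the made-up survey numbers above):

    # Rough sketch: averaging two already-averaged results vs pooling the raw data.
    # The figures are the made-up survey numbers from above.
    agree_a, total_a = 999, 1000   # large survey
    agree_b, total_b = 1, 2        # tiny forum sample

    # Averaging the two percentages (what combining two finished stacks does
    # when the stack sizes are unknown):
    average_of_averages = (agree_a / total_a + agree_b / total_b) / 2
    print(f"average of averages: {average_of_averages:.2%}")   # ~74.95%

    # Pooling the raw data first (stacking all the subs together):
    pooled = (agree_a + agree_b) / (total_a + total_b)
    print(f"pooled result:       {pooled:.2%}")                # ~99.80%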

    • Like 1
  3. Well, after over 2 1/2 hours of imaging, I calibrated and reviewed the subs this morning and had to discard most of them due to cloud cover. I suspect I've still included subs affected by faint clouds as the resulting image is very noisy. Worst affected was my RGB where I ended up with very little data (hence the subdued tones).

    This is only 45 mins of 30s L subs with about 5 mins each of RGB. I only processed it quickly as there isn't really enough data yet. Saturday is looking clearer and if so, I'll hopefully add much more data to this.

    So, here's my first attempt at NGC1333:

    large.NGC1333_20161026_v2.jpg

    • Like 2
  4. But why wouldn't you stack all light frames (regardless of exposure) in the same process to benefit from the greatest improvement to SNR?

    I cannot see a reason to combine two previously stacked images over stacking their constituent subs unless, by chance, both stacked images contained an equal number of subs and had the same signal (which probably means they had the same exposure length).

    I combine all my images in a single integration, weighting subs by SNR and FWHM/eccentricity. I have darks to calibrate subs of different lengths (they share common bias and flats). I'm sure DSS does something similar using the tabs at the bottom (lights and darks of the same exposure are added to their own tabs?).

    The one occasion I have stacked already-stacked images is to create a pseudo-luminance image from separate RGB images. But typically my RGB images have a similar number of subs behind them.

    • Like 1
  5. Indeed! Dew doesn't seem to be a problem yet but high clouds are! Forecast suggests it will clear soon and my target is low so I'm hoping I will get something! My first subs show some nebulosity so fingers crossed. 

    Spacing is my next challenge. I've only got it rough at the moment and I need to improve it.

    • Like 2
  6. Finally I think I might have a clear evening! So, after my mount's wifi threw another wobbly (too many networks nearby interfere with it) and three alignments later, I'm imaging NGC1333. I think I may have lost my first 25 lum subs as my final alignment was slightly less accurate and my centre of fov has moved :(

    And just lost the next 3 because I forgot to set cooling again!

  7. I think Ian's summarised it. Once your exposure time gives enough signal, the main difference between many short vs few long exposures comes down to a trade-off between two factors: accumulating read noise exceeding other noise (many short) vs worse SNR for fewer long. Of course, we all know many long exposures are the best option, but we've got mounts to fight :)

    I don't know why the 5 x 90s improved your image so much other than to think that the shorter exposures were not giving you enough signal to get into that sweet spot.
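    If it helps, here's a very rough back-of-envelope Python sketch of that trade-off; every number in it is a placeholder I've made up, not anyone's actual camera or sky:

    import math

    # Back-of-envelope SNR for a fixed total integration split into N subs.
    # All values below are made-up placeholders, not real camera or sky numbers.
    total_time = 1800.0    # seconds of total integration
    signal_rate = 2.0      # target electrons per second per pixel
    sky_rate = 10.0        # sky background electrons per second per pixel
    read_noise = 7.0       # electrons added once per exposure

    def snr(n_subs):
        signal = signal_rate * total_time
        shot_var = (signal_rate + sky_rate) * total_time   # shot noise variance
        read_var = n_subs * read_noise ** 2                # read noise stacks up per sub
        return signal / math.sqrt(shot_var + read_var)

    for n in (10, 60, 360):   # 180s, 30s and 5s subs
        print(f"{n} subs of {total_time / n:.0f}s -> SNR {snr(n):.1f}")
    # Once the sky swamps the read noise, splitting into shorter subs costs little;
    # below that sweet spot the accumulating read noise starts to hurt.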

    • Like 1
  8. Just going to give it a try now. I've previously purchased Topaz Simplify and Impression for normal photography but I've not tried Adjust (thinking it did the same thing as Lightroom's develop). However, its effect on your images is spectacular, revealing far more detail and really extending the contrast. Might be a really useful final processing step.

    • Like 2
  9. I love the depth of field in your M42 image, Nige. It makes me think it looks like an ear listening to the universe! And I'd never have thought that diffraction spikes would rotate, so we've learnt something new about alt/az imaging. I guess we need to image for a limited period on each target over multiple nights, timed so that the frame is roughly aligned in each session (so earlier and earlier each session)? Alternatively, it's something else that needs to be processed afterwards? That said, I think your triple spikes look pretty unique and add something to the image.

    I also notice that your stars are incredibly round! I guess the shorter exposures really help that. I've only imaged M42 when I first started and that was with the 9.25" beast. I think I was limited to 4 and 8 second subs at the time, so I can't wait for it to become a late evening object this year for round two. Speaking of which, I think I only ever processed it through Photoshop and I wonder what Pixinsight might reveal?

    • Like 2
  10. 12 hours ago, parallaxerr said:

    Progress :)

    OK so I selected standard stacking instead of mosaic which knocked 100MB off the FITS file, which then opened instantly in ST. Also set RGB background calibration and got colour! However, I then read an ST instruction which says to set for no colour calibration so trying without that setting now.

    Anyhoo, a bit rough and noisy but there's colour

    I knew you had more data! Nice blue arms, and you can see the start of the spiral arms quite close in to the core, so well processed. And that's the beauty of astrophotography: you can keep revisiting the data, either by processing it again and again as you learn new techniques or by adding more data over time.

    Can't wait for your next attempt.

    • Like 1
  11. The file size should not depend on the number of images stacked since the image does not contain any "history". It should just depend on the resolution of the camera. My file sizes (FITS and TIFFs) were all about 180MB for the Canon EOS 60D.

    Unless, as you say, drizzle has been used as this artificially inflates the resolution, and the file size.
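    As a rough sanity check, the size is just pixels x channels x bytes per sample. The sketch below uses the 60D's pixel count and a couple of common bit depths; the exact figure will vary with headers, padding and how the software saves the file:

    # Rough file-size estimate: pixel count x channels x bytes per sample.
    # Sensor dimensions are the Canon EOS 60D's; bit depths are common options.
    width, height = 5184, 3456
    pixels = width * height

    for bytes_per_sample in (2, 4):   # 16-bit integer vs 32-bit float
        size_mb = pixels * 3 * bytes_per_sample / 1024 ** 2
        print(f"RGB at {8 * bytes_per_sample}-bit: ~{size_mb:.0f} MB")
    # Stacking more subs changes none of this; only the resolution (e.g. drizzle) does.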

    • Like 2
  12. That's a really good improvement and just shows what flats can do to simplify processing. I think you've probably over-developed the image again, with the background looking very black, suggesting you've lost a lot of the fainter detail around the arms. I'd expect to see much more detail for almost two hours of integration. I believe DSS scores each image and you can set what % of images you take through to stacking, or you can manually deselect images. I don't know if it has an option to reject subs scoring below a certain threshold, etc.

    I was also surprised by the lack of colour. With 30s subs you should not be saturating your stars. I wonder if you're missing a setting in either DSS or StarTools, so the subs are not being debayered correctly?

    It's always worth taking a single sub through a basic stretch. I've not used Gimp but I assume it has a levels function similar to Photoshop. If you load a single image and move the mid-point to the left (towards the black) until the image starts to appear, you should see what sort of detail/colour you could expect in your finished integration. If you're not seeing colour in the stars in the single sub then I suspect the RAW file is not being handled correctly. If you are seeing colour, then something is stopping it from carrying over into the integrated image. I've not used DSS for such a long time now that I can't help much more.
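    If you'd rather do that check outside Gimp, a very crude version of the same mid-point move looks something like this in Python, assuming the sub has already been converted to FITS (the file name and gamma value are just examples):

    import numpy as np
    from astropy.io import fits

    # Crude screen stretch of a single sub: normalise to 0..1, then pull the
    # mid-point towards black with a gamma < 1 to lift the faint detail.
    # The file name and the gamma value are only examples.
    data = fits.getdata("single_sub.fits").astype(np.float32)
    data = (data - data.min()) / (data.max() - data.min())
    stretched = data ** 0.25
    # Display or save 'stretched' to see what detail and colour the sub holds.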

    As Ian said, I moved to Pixinsight for my stacking and processing. It has a very steep learning curve (I've had to read full books and follow pages of tutorials just to get to where I am) but it suits the way I think about an image more than DSS with StarTools or Photoshop. This is the single best tutorial I found: http://www.lightvortexastronomy.com/tutorial-example-m31-andromeda-galaxy---dslr.html and the bonus is that it used M31 as its subject and images from a DSLR so everything should be applicable to your images. It takes many hours to run the full tutorial and the first few times I was following it blindly not knowing when or by how much to tweak some of the settings, but I still saw a noticeable improvement in my images compared to levels/curves adjustments in Photoshop.

  13. I think you've caught more than just a hint of the spirals. Here's a quick stretch in Pixinsight. The quality isn't great as I'm working from the jpeg but it shows there's a lot more data in there even for such a small number of subs. With some more subs they should really start to become clear.

    I also found using curves as the main stretch quite difficult. In the end, having watched a few YouTube videos, I used Photoshop and applied many small S curves to bring out the detail rather than big steps, which tended to saturate the bright areas (and in M31's case, the core is easily saturated).

    Good to see such a promising first image from the new set up!

    M31-161006-2.jpg

    • Like 3
  14. 8 minutes ago, The Admiral said:

    Last Monday I had a go at imaging M33 (Ken, you're to blame for that :wink2:). The trouble is, I'm not good at this late night thing, so I didn't really image at the best times. Still, this is what I got.

    I was debating whether or not to image using my 0.79x reducer/flattener, but in the end I went with the native FL as I reasoned it would cover more of the sensor and thus require less cropping. Put it in to DSS and it threw up scores of between -5.25 and 567! With my reducer they normally top the 2000's, but hey ho. So I stacked all with a score >50 (234 x 30s subs), and after doing what I could in ST, I was just not happy. The image looked a bit out of focus, though it has to be said that individual subs looked fine. So this time round, I went through the subs one-by-one and weeded out those where I thought there was a bit of trailing. That brought it down to 199 subs. I registered and stacked those, and this is the result of that. It was quite interesting going through them in time order, because they started out with a coffee colour at about 8.30pm and ended up a dark grey by 11.20pm. I really needed to have started and finished imaging later. Perhaps I'll have another go in a few weeks time, when it is better placed earlier in the evening, and use my reducer. Let's hope for some clear skies when the Moon's not up! I feel that I'm pushing the data a bit too hard here, but needs must! Not sure what to do about magenta stars, always get them*.

    Ian

    *Edit. I've found I can either reduce their saturation to zero, or turn them blue, in Lightroom.

    That's stunning. You really captured the Ha regions in the arms. I made the same mistake as you: I started imaging when it was still not quite astro-dark and the target was also lower in the sky, giving a double whammy of background, but as the night went on my images also improved. It's so tempting to just start imaging as soon as you're aligned and it looks dark, but I guess 30 minutes back inside with a coffee might make the night last longer!

    • Like 2
  15. 5 minutes ago, Nigel G said:

    Ken, Nice Pacman image , some good detail there and decent colours.

    Steve, the California Nebula looks good, I wonder how that would turn out with a modded camera.

    Well done guys

    Here's an interesting thing. Last night I took about 45 minutes' worth on M33 between the clouds. With darks and flats, the image was not that good (not as nice as my first attempt a while back) so I thought I'd try stacking all my data together. The first batch was around 45 minutes of 30s subs taken with my 1200D; the second batch, including the flats, darks and bias, came from my wife's :icon_biggrin: 1300D. There was around a 30 degree change in the frame rotation between the sets of images, so quite a big crop (if you leave any stacking artefacts in the image it's very difficult to process).

    The result was the easiest of the 3 to process, with a better result, which has really surprised me. Also I finally processed a stack of M31 I've had knocking around for a while, heavily cropped.

    Still struggling with colours

    Nige.

    PS. spare parts for my 1200D are coming within the next week, fingers crossed.

    Thank you! I think it's my best yet. I spent a little longer getting the masks right. Also, I've since read I shouldn't be using a super bias (a process you can run to turn an ordinary bias into one that estimates what a 1000+ frame bias would look like). It seems the process preserves vertical detail but destroys any other structure in the bias, so I've taken a new master bias today with 256 images and it looks much better. I also took the time to take a fresh batch of 60 darks which show the hot pixels much better than my previous attempt.

    I love your M31 core. You've captured some great detail coming out of and around the core.

    As to colour, it's there in the halos but it looks like the stars have been highlight clipped during processing so their cores have all gone white. Is there a way to protect the stars during development? I do a masked stretch on my images which helps protect the brightest parts of the image from over-exposing. It does lead to an initial low contrast image but with masks I can then start to bring back some contrast. Maybe there's something similar in StarTools.

    I'll never get tired of seeing M33. It's just about the right size for our fields of view and bright enough for us to capture a lot of subs over time and really bring out the detail in the arms.

    Fingers crossed you can repair the 1200D and have a modded camera soon.

    • Like 2
  16. 15 minutes ago, The Admiral said:

    Well Herzy, I don't know anything really about variable star monitoring, but I guess it will all come down to its magnitude as to whether it is accessible to amateurs. I can't help thinking that the brighter ones have already been studied in depth, so I'm not sure what can be added to our scientific understanding by amateurs with basic gear. Equally, I could be totally wrong!

    Ian

    You can study variables with the naked eye. I did a project over about three to six months when I was at uni where I visually estimated the magnitude of Delta Cephei using the two nearby comparator stars from Central London (though I had to go to Regent's Park or other open spaces to see it). When plotted and reduced, my results were very close to those measured with much more sensitive equipment. Like us though, I had to stack many observations to achieve what good equipment could have achieved with relatively few.

    I think Ian's right. Anything we could study with our equipment has probably been studied in depth before, so it would only be for curiosity rather than science. Mag 16/17 is probably at the limit of our equipment in our locations. If we were using measurements of photographs to determine relative magnitudes, we'd need really good seeing or some way to compensate for it, good guiding (or consistently bad guiding!), and we'd probably need the star to cover more than a single pixel. All probably beyond our equipment.

    Not impossible, but it probably has to be of real interest to you to pursue given the time investment.

    • Like 2
  17. 2 minutes ago, The Admiral said:

    That's a lovely image Ken, a lovely star field with small tight stars. Thanks for the cropped version! Actually, on my PC screen it's not so essential because I can see sufficient detail, but on my tablet it helps these tired old eyes. I think I prefer the un-cropped image for colour though.

    Ian

    Thank you. I agree. I think I introduced a green tint on the cropped version which wasn't obvious on my PC but is clearer on the phone screen. 

    I also realised I hadn't checked my individual subs for the Soul Nebula. I took them after 22:00, and knowing now that my mount starts to lock up above 59 degrees, no wonder I didn't like the image because of its trails...

    Well, no surprise, but there were many bad subs, so I've removed them and am reintegrating the rest now.

  18. Well, you asked for something in red...

    Here's my limited attempt at NGC281, the Pacman Nebula. It rose above my mount's altitude limit very quickly (I'd not taken into account my cables, so by 21:10 it was already too high at about 59 degrees). Still, I got 24 mins of L and between 11 and 12 mins each of RGB. To try and compensate I created a super luminance frame using data from both L and RGB. It was a little better than L on its own, so I went with it.
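    (Something along these lines is one way to build that kind of super luminance; it's only a sketch, and the weights and file names are illustrative, not my actual values.)

    import numpy as np
    from astropy.io import fits

    # Illustrative super luminance: blend the stacked L, R, G and B masters,
    # weighted here by minutes of exposure. Weights and file names are examples only.
    weights = {"L": 24, "R": 12, "G": 11, "B": 12}
    blend = sum(fits.getdata(f"master_{c}.fits").astype(np.float32) * w
                for c, w in weights.items())
    super_lum = blend / sum(weights.values())
    fits.writeto("super_luminance.fits", super_lum, overwrite=True)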

    Anyway, here's my attempt, including an annotated version.

    large.NGC281_20161004_v2.jpg

    It's a little noisier than I'd like but I can't complain for 24 mins of L data. Hopefully I can get another clear night before the moon comes up and it rises above my limit.

    NGC281_20161004_v2_Annotated.jpg

    And just for Ian, a cropped version with some minor Photoshop tweaks.

    NGC281_20161004_v3.jpg

    • Like 6
  19. 45 minutes ago, Nigel G said:

    Ken, you have made a better job than I :icon_biggrin:

    Both images were stacked the same: same flats, darks & bias, processed pretty much the same; the subs were taken with exactly the same settings, and M45 straight after NGC 6992.

    I have done a screen grab from both just after the first develop, and the difference is incredible, the 2 images could not be further apart. I'm trying different settings in DSS.

    There's no difference if I check 1st or 2nd option in StarTools.

    Any ideas, folks?

    I still can't see how they can be so different. No wonder I'm having trouble processing with images like this to work with.

    Cheers

    Nige.

    :help:

     

    Very odd indeed, though my backgrounds typically start green and I let BackgroundNeutralisation and ColourCalibration work their magic. I think it works in a similar way to auto-fixing white balance: you show it a known area of neutral background (akin to picking a known grey shade in daylight photography) and it then balances the colours. I therefore wouldn't be worried about green backgrounds if they can be corrected in software.
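    For what it's worth, the idea can be boiled down to something like the toy sketch below; it's not PixInsight's actual algorithm, and the sample-region coordinates are made up:

    import numpy as np

    # Toy background neutralisation: scale each channel so a sample patch of
    # "empty" sky comes out neutral grey. Not PixInsight's real algorithm;
    # the patch coordinates are invented for the example.
    def neutralise(rgb, y0=0, y1=100, x0=0, x1=100):
        patch = rgb[y0:y1, x0:x1, :].reshape(-1, 3)
        background = patch.mean(axis=0)             # mean background per channel
        scale = background.mean() / background      # bring each channel to the grey mean
        return rgb * scale

    # neutralised = neutralise(image)   # image being an HxWx3 float array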

    However, to have two such very different results! The big difference between the two images is their altitude. M45 would have been low and maybe in light pollution (making it very red-heavy before correction), while the Veil is very high. Maybe that's all it is?

    In DSS, do you select the option in the stacking settings that balances the colours (I think it's at the bottom of the first tab)?

  20. 19 minutes ago, parallaxerr said:

    Hi everyone,

    Well, I've just finished reading the contents of this thread and I must say I'm thoroughly impressed with the quality of images here! I never thought it would be possible to create images like these with the sort of equipment being used; I have obviously been brainwashed by the EQ only crowd :)

    I was pointed in this direction when @happy-kat put me onto @The Admiral after discussing something about my Nexstar SE mount in another thread (I forget exactly what, now). Having spoken to Ian via PM I found myself being gently encouraged to have a go at Alt/Az imaging.

    So...this is to say hello and I hope to join the party soon, as I've placed an order for all necessary bits to get my camera on the end of a scope (highlighted blue below)!

    My setup will consist of:

    • Celestron Nexstar SE 6/8 mount
    • William Optics Zenith Star 66SD doublet Apo
    • Baader SCT to 2" click-lock
    • Baader Multi Purpose Coma Corrector (proven to work rather nicely with the 66SD here - http://www.stark-labs.com/craig/WO66SD/WO66SD.html)
    • Nikon T-ring
    • Nikon D3200 DSLR

    Here's to hopefully contributing my first images before too long and quizzing you all on exactly where I'm going wrong! Looking forward to this little foray into imaging!

    Jon

    Welcome Jon. That WO66SD should give a nice field of view. Can't wait to see your results. Do you have some targets in mind already?
