Everything posted by Filroden

  1. There is a slight loss of detail, but the filter will let you take longer exposures (probably not much of a benefit to us) and should reduce noise when integrating, as the background will be much darker. I see that as a huge benefit.
  2. When you think that's only an hour! It's crystal clear with lots of globular clusters clearly visible. And I love the colour.
  3. Indeed. I think it's both faint and very small (and there is lots of dust in the area which I don't think I have any hope of capturing). Nonetheless, I'm pleased to get some colour differentiation across the nebula, and this is the first time I think my flats have worked (I tried a new technique of putting up a white image on my large TV and using that as a light box, with sheets of paper over the scope to push the exposure times to between 0.5 and 1.5 seconds).
  4. Indeed. I would always go back to the raw subs, even if they were taken over many nights, and stack them all together. If you were using darks, you may even need tabs for different nights as well as for different exposure lengths, as the temperature could differ from one night to another.

    I have to admit I also cannot wrap my head around the maths. In my simple brain it's like averaging averages: you don't know the weight/standard deviation of each average, so you cannot be sure of the quality of the final result. Say you had two results: 999 out of 1,000 people agreed the Earth was round in a survey, and 1 out of 2 people agreed the Earth was round in a sample of posts in a forum. If you just averaged the two rates (99.9% and 50.0%) you would conclude only 74.95% of people thought the Earth was round. You're degrading your result with the worst sample; you'd actually need to weight the larger sample massively. However, when combining two already-stacked images, DSS has no information about the size of each stack, so it cannot apply that weighting.

    P.S. Assuming the sampling was equally random in both results, the right approach would be to treat this as 1,000 out of 1,002 people thinking the Earth was round, i.e. to combine both data sets before generating the result.
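    To make the survey arithmetic concrete, here's a minimal Python sketch (the counts are the made-up numbers from the example above, not real data):

    ```python
    # Averaging the two percentages ignores sample size;
    # pooling the raw counts before dividing does not.
    agree_a, total_a = 999, 1000   # the big survey
    agree_b, total_b = 1, 2        # the tiny forum sample

    naive = (agree_a / total_a + agree_b / total_b) / 2
    pooled = (agree_a + agree_b) / (total_a + total_b)

    print(f"naive mean of the two rates: {naive:.2%}")   # 74.95%
    print(f"pooled before averaging:     {pooled:.2%}")  # 99.80%
    ```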
  5. Well, after over 2 1/2 hours of imaging, I calibrated and reviewed the subs this morning and had to discard most of them due to cloud cover. I suspect I've still included subs affected by faint clouds, as the resulting image is very noisy. Worst affected was my RGB, where I ended up with very little data (hence the subdued tones). This is only 45 mins of 30s L subs with about 5 mins each of RGB. I only processed it quickly as there isn't really enough data yet. Saturday is looking clearer and, if so, I'll hopefully add much more data to this. So, here's my first attempt at NGC1333:
  6. Filroden

    NGC1333

    From the album: Ken's images

    Skywatcher Esprit 80
    Celestron Evolution Alt/Az mount
    ZWO ASI1600MM-C with ZWO LRGB filters

    Imaging time was limited, as lots of fast-moving high cloud reduced over 2.5 hours of data to under 1 hour:
    91 x 30s L
    9 x 30s R
    10 x 30s G
    17 x 30s B

    Calibrated with bias, darks and flats (25 of each)
    Captured with SGPro, processed in PixInsight
  7. But why wouldn't you stack all the light frames (regardless of exposure) in the same process to benefit from the greatest improvement to SNR? I cannot see a reason to combine two previously stacked images over stacking their constituent subs unless, by chance, both stacked images contained an equal number of subs and had the same signal (which probably means they had the same exposure length). I combine all my images in a single integration, weighting subs by their SNR and FWHM/eccentricity. I have darks to calibrate subs of different lengths (they share common bias and flats). I'm sure DSS does something similar using the tabs at the bottom (lights and darks of the same exposure are added to their own tabs?). The one occasion I have stacked already-stacked images is to create a pseudo-luminance image from separate RGB images, but typically my RGB images have a similar number of subs behind them. A quick illustration of the noise penalty is sketched below.
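    To put a rough number on it, here's a minimal sketch of the expected background noise, assuming two sessions of 90 and 10 equal-quality subs (figures invented purely for illustration):

    ```python
    import numpy as np

    sigma = 10.0        # background noise per sub, arbitrary units
    n_a, n_b = 90, 10   # two sessions of very different depth

    # Integrating every sub in one pass averages all 100 subs:
    one_pass = sigma / np.sqrt(n_a + n_b)

    # Equal-weight averaging of the two finished stacks:
    two_pass = 0.5 * np.sqrt(sigma**2 / n_a + sigma**2 / n_b)

    print(f"all subs at once:      {one_pass:.2f}")  # 1.00
    print(f"average of two stacks: {two_pass:.2f}")  # 1.67
    ```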
  8. Indeed! Dew doesn't seem to be a problem yet but high clouds are! Forecast suggests it will clear soon and my target is low so I'm hoping I will get something! My first subs show some nebulosity so fingers crossed. Spacing is my next challenge. I've only got it rough at the moment and I need to improve it.
  9. Finally, I think I might have a clear evening! So, after my mount's wifi threw another wobbly (too many networks nearby interfere with it) and three alignments later, I'm imaging NGC1333. I think I may have lost my first 25 lum subs, as my final alignment was slightly less accurate and my centre of FOV has moved. And I just lost the next 3 because I forgot to set the cooling again!
  10. I think Ian's summarised it. Once your exposure time gives enough signal, the main difference between many short vs few long exposures comes down to a trade-off between two factors: accumulated read noise exceeding the other noise sources (many short) vs the tracking failures that come with longer subs (few long). Of course, we all know many long exposures is the best option, but we've got mounts to fight. I don't know why the 5 x 90s improved your image so much, other than to think that the shorter exposures were not giving you enough signal to get into that sweet spot. A rough model of the trade-off is sketched below.
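    Here's a rough sketch of that trade-off under a simple noise model; the rates and read noise are invented numbers, not measurements from any particular camera:

    ```python
    import numpy as np

    def stacked_snr(total_time, sub_length, target_rate, sky_rate, read_noise):
        """Approximate SNR of a stack: shot noise scales with total time,
        but read noise is paid once per sub, so short subs accumulate more."""
        n_subs = total_time / sub_length
        signal = target_rate * total_time
        noise = np.sqrt(signal + sky_rate * total_time + n_subs * read_noise**2)
        return signal / noise

    # One hour total, faint target, modest sky glow (illustrative only):
    for sub in (10, 30, 90, 300):
        print(sub, round(stacked_snr(3600, sub, 0.5, 2.0, 10.0), 2))
    # 10s ~8.5, 30s ~12.4, 90s ~15.8, 300s ~17.8: diminishing returns once
    # the sub is long enough that read noise no longer dominates.
    ```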
  11. But wind/tracking reduces success rates. Longer exposures mean lower chances of success. That's why I've been sticking to 30 seconds as I'm at almost 100% for tracking.
  12. Those spikes are amazing and you've got the orientation on M45 perfect. It's a little noisy, but that's not surprising with only 20 mins of data. I can't wait for you to get more clear nights and add to it.
  13. It's always worth testing. Different scopes will give you different fields of view, which might suit some targets better. Bear in mind that as focal length increases, so does the difficulty of tracking, so you may need to watch your exposure times. A quick field-of-view calculation is sketched below.
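    A quick way to compare scopes is the basic field-of-view formula. A minimal sketch, assuming the ASI1600's roughly 17.7 mm sensor width and some example focal lengths (400 mm for an Esprit 80, 2350 mm for a 9.25" SCT):

    ```python
    import math

    def fov_arcmin(sensor_mm, focal_length_mm):
        """Field of view along one sensor dimension, in arcminutes."""
        return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_length_mm))) * 60

    for fl in (400, 1000, 2350):
        print(fl, round(fov_arcmin(17.7, fl), 1))
    # 400 -> ~152', 1000 -> ~61', 2350 -> ~26'
    ```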
  14. Just going to give it a try now. I've previously purchased Topaz Simplify and Impression for normal photography, but I've not tried Adjust (thinking it did the same thing as Lightroom's develop module). However, its effect on your images is spectacular, revealing far more detail and really extending the contrast. It might be a really useful final processing step.
  15. I know. It's looking clear here but I've just driven 300 miles and am shattered. I'm gonna use the excuse of a full moon to have an early night! But it's looking like tomorrow might also be clear so I can do some more testing of the new camera.
  16. I love the depth of field in your M42 image, Nige. It looks to me like an ear listening to the universe! And I'd never have thought that diffraction spikes would rotate, so we've learnt something new about alt/az imaging. I guess we need to image for a limited period on each target over multiple nights, timed so that the frame is roughly aligned in each session (so earlier and earlier each night)? Or is it something else that needs to be fixed in processing afterwards? That said, I think your triple spikes look pretty unique and add something to the image. I also notice that your stars are incredibly round! I guess the shorter exposures really help with that. I've only imaged M42 when I first started, and that was with the 9.25" beast. I think I was limited to 4 and 8 second subs at the time, so I can't wait for it to become a late evening object this year for round two. Thinking of which, I only ever processed it through Photoshop, and I wonder what PixInsight might reveal?
  17. I knew you had more data! Nice blue arms, and you can see the start of the spiral structure quite close in to the core, so well processed. And that's the beauty of astrophotography: you can keep revisiting the data, either by processing it again and again as you learn new techniques or by adding more data over time. Can't wait for your next attempt.
  18. The file size should not depend on the number of images stacked, since the image does not contain any "history". It should just depend on the resolution of the camera. My file sizes (FITS and TIFF) were all about 180 MB for the Canon EOS 60D. Unless, as you say, drizzle has been used, as this artificially inflates the resolution, and with it the file size. The back-of-the-envelope arithmetic is sketched below.
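    A rough sketch of that arithmetic, assuming the 60D's 5184 x 3456 sensor and 32-bit RGB data (actual sizes vary with bit depth, headers and compression):

    ```python
    # Uncompressed image size is just width x height x channels x bytes/sample.
    width, height = 5184, 3456   # Canon EOS 60D resolution
    channels, bytes_per = 3, 4   # RGB at 32 bits per sample

    size_mb = width * height * channels * bytes_per / 1024**2
    print(f"{size_mb:.0f} MB")   # ~205 MB, the same ballpark as above

    # 2x drizzle doubles each dimension, so the file grows four-fold.
    ```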
  19. That's a really good improvement and just shows what flats can do to simplify processing. I think you've probably over-developed the image again, with the background looking very black, suggesting you've lost a lot of the fainter detail around the arms. I'd expect to see much more detail from almost two hours of integration. I believe DSS scores each image, and you can set what percentage of images it takes through to stacking, or you can manually deselect images. I don't know if it has an option to reject subs scoring below a threshold, etc.

    I was also surprised by the lack of colour. With 30s subs you should not be saturating your stars. I wonder if you're missing a setting in either DSS or StarTools, so that the subs are not being debayered correctly? It's always worth taking a single sub through a basic stretch. I've not used Gimp, but I assume it has a levels function similar to Photoshop's. If you load a single image and move the mid-point to the left (towards the black) until the image starts to appear, you should see what sort of detail/colour to expect in your finished integration (a toy version of that stretch is sketched below). If you're not seeing colour in the stars in the single sub, then I suspect the RAW file is not being handled correctly. If you are seeing colour, then something is stopping it from carrying over into the integrated image.

    I've not used DSS for such a long time now that I can't help much more. As Ian said, I moved to PixInsight for my stacking and processing. It has a very steep learning curve (I've had to read full books and follow pages of tutorials just to get to where I am), but it suits the way I think about an image more than DSS with StarTools or Photoshop. This is the single best tutorial I found: http://www.lightvortexastronomy.com/tutorial-example-m31-andromeda-galaxy---dslr.html and the bonus is that it uses M31 as its subject and images from a DSLR, so everything should be applicable to your images. It takes many hours to run the full tutorial, and the first few times I was following it blindly, not knowing when or by how much to tweak some of the settings, but I still saw a noticeable improvement in my images compared to levels/curves adjustments in Photoshop.
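    For anyone curious what moving the mid-point actually does, here's a toy numpy version of a midtones-style stretch (not Gimp's or Photoshop's exact implementation, just the standard midtones transfer function):

    ```python
    import numpy as np

    def midtones_stretch(img, midpoint):
        """Classic midtones transfer function: img is normalised to [0, 1];
        smaller midpoint values pull the faint end up harder."""
        m = midpoint
        return ((m - 1) * img) / ((2 * m - 1) * img - m)

    # A faint background value of 0.02 becomes easily visible:
    print(midtones_stretch(np.array([0.02, 0.1, 0.5, 1.0]), 0.05))
    # -> roughly [0.28, 0.68, 0.95, 1.0]
    ```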
  20. I think you've caught more than just a hint of the spirals. Here's a quick stretch in PixInsight. The quality isn't great as I'm working from the jpeg, but it shows there's a lot more data in there, even from so few subs. With some more subs they should really start to become clear. I also found using curves as the main stretch quite difficult. In the end, having watched a few YouTube videos, I used Photoshop and applied many small S-curves to bring out the detail, rather than big steps, which tended to saturate the bright areas (and in M31's case, the core is easily saturated). Good to see such a promising first image from the new set-up!
  21. That's stunning. You really captured the Ha regions in the arms. I made the same mistake as you: I started imaging when it was still not quite astro-dark, and the target was also lower in the sky, giving a double whammy of background, but as the night went on my images also improved. It's so tempting to just start imaging as soon as you're aligned and it looks dark, but I guess 30 minutes back inside with a coffee might make the night last longer!
  22. Thank you! I think it's my best yet. I spent a little longer getting the masks right. Also, I've since read I shouldn't be using a superbias (a process you can run to turn an ordinary bias into one that estimates what a 1000+ frame bias would look like). It seems the process preserves vertical detail but destroys any other structure in the bias, so I've taken a new master bias today with 256 images and it looks much better. I also took the time to take a fresh batch of 60 darks, which show the hot pixels much better than my previous attempt.

    I love your M31 core. You've captured some great detail coming out of and around the core. As to colour, it's there in the halos, but it looks like the stars have been highlight-clipped during processing, so their cores have all gone white. Is there a way to protect the stars during development? I do a masked stretch on my images, which helps protect the brightest parts of the image from over-exposing (the idea is sketched below). It does lead to an initial low-contrast image, but with masks I can then start to bring back some contrast. Maybe there's something similar in StarTools.

    I'll never get tired of seeing M33. It's just about the right size for our fields of view and bright enough for us to capture a lot of subs over time and really bring out the detail in the arms. Fingers crossed you can repair the 1200D and have a modded camera soon.
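    This isn't PixInsight's actual MaskedStretch algorithm, but the underlying idea is just blending a stretched image with the original through a mask; a minimal sketch:

    ```python
    import numpy as np

    def masked_stretch(img, stretch, mask):
        """Where mask is 1 the stretch applies fully; where it is 0 the
        original pixel (e.g. a bright star core) is kept."""
        return mask * stretch(img) + (1 - mask) * img

    img = np.random.default_rng(0).random((4, 4))  # stand-in image in [0, 1]
    protect_bright = 1 - img                       # mask fades on bright pixels
    print(masked_stretch(img, np.sqrt, protect_bright).round(2))
    ```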
  23. You can study variables with the naked eye. I did a project over about three to six months when I was at uni where I visually estimated the magnitude of Delta Cephei using the two nearby comparator stars, from Central London (though I had to go to Regent's Park or other open spaces to see it). When plotted and reduced, my results were very close to those measured with much more sensitive equipment. Like us, though, I had to stack many observations to achieve what good equipment could have achieved with relatively few. I think Ian's right: anything we could study with our equipment has probably been studied in depth before, so it would only be for curiosity rather than science. Mag 16/17 is probably at the limit of our equipment in our locations. To determine relative magnitudes from measurements of photographs, we'd need really good seeing (or some way to compensate for it), good guiding (or consistently bad guiding!), and we'd probably need the star to cover more than a single pixel. All probably beyond our equipment. Not impossible, but it would have to be of real interest to you to pursue, given the time investment. The basic measurement is sketched below.
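    The measurement itself is simple differential photometry; a minimal sketch with invented fluxes (the comparator magnitude is illustrative, not a real catalogue value):

    ```python
    import math

    def estimated_mag(target_flux, comp_flux, comp_mag):
        """Magnitude of the target relative to one comparison star."""
        return comp_mag - 2.5 * math.log10(target_flux / comp_flux)

    # Hypothetical measured fluxes against a mag 4.2 comparator:
    print(round(estimated_mag(1500.0, 2000.0, 4.2), 2))  # ~4.51
    ```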
  24. Thank you. I agree. I think I introduced a green tint on the cropped version which wasn't obvious on my PC but is clearer on the phone screen. I also realised I hadn't checked my individual subs for the Soul Nebula. I took them after 22:00, and knowing now that my mount starts to lock up above 59 deg, and that I didn't like the image because of its trails... well, no surprise, there were many bad subs, so I've removed them and am reintegrating now.
  25. Well, you asked for something in red... Here's my limited attempt at NGC281, the Pacman Nebula. It rose above my mount's altitude limit very quickly (I'd not taken my cables into account, so by 21:10 it was already too high at about 59 deg). Still, I got 24 mins of L and between 11 and 12 mins each of RGB. To try and compensate, I created a super-luminance frame using data from L and RGB (the idea is sketched below). It was a little better than L on its own, so I went with it. Anyway, here's my attempt, including an annotated version. It's a little noisier than I'd like, but I can't complain for 24 mins of L data. Hopefully I can get another clear night before the moon comes up and it rises above my limit. And just for Ian, a cropped version with some minor Photoshop tweaks.
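    There's no single recipe for a super-luminance; this sketch just takes a weighted average of the registered stacks, with weights loosely following each stack's integration time (the weighting here is an assumption for illustration, not a standard method):

    ```python
    import numpy as np

    def super_luminance(channels, weights):
        """Weighted average of the L, R, G and B stacks into one frame."""
        w = np.asarray(weights, dtype=float)
        return sum(wi * ch for wi, ch in zip(w / w.sum(), channels))

    # Stand-in arrays; in practice these are the registered channel stacks.
    rng = np.random.default_rng(1)
    L, R, G, B = (rng.random((8, 8)) for _ in range(4))

    # Weights from the minutes of data above: 24 L, ~11-12 each for RGB.
    lum = super_luminance([L, R, G, B], [24, 11, 11, 12])
    ```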