
Everything posted by Filroden

  1. From the album: Ken's images

    Taken on the evenings of 7 January 2018 (5 subs) and 9 February 2018 (25 subs). Equipment: Skywatcher Esprit 80 with the ZWO 1600MM-C using an Astrodon 3nm Ha filter, mounted on the AVX with OAG guiding. Exposures: 30 x 240s Ha at unity gain. Processed: PixInsight and Photoshop.

    © Ken Dearman 2018

  2. That's about 3 times better than I'm getting out of the AVX at its best!
  3. I calibrate my lights (bias, dark, flat) after every session. I store these as my "input" files, suffixed with "_c" to show they are calibrated, and with "_cc" if I have also cosmetically corrected them for hot/cold pixels. If I image the same target again, I just add more files to that target's folder. I can then, at any stage, realign and stack all the images together. It also means I don't have to recalibrate or use different flats for different batches, and it avoids the need for stacking the stacks; that is possible, but you lose some of the benefit of stacking a larger number of lights. It does take longer, because as your library of lights grows, so does the time it takes to align and stack them. It also requires more storage, as you're saving every light and not just the stacks (but I can never bring myself to delete original data).
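    The bookkeeping above can be automated. A minimal sketch (the function name and suffix handling are my own, not from any particular tool) that gathers a target folder's calibrated subs, preferring the cosmetically corrected "_cc" version of each sub where one exists:

    ```python
    from pathlib import Path

    def collect_calibrated(target_dir):
        """Return the calibrated lights in a target's folder,
        preferring the _cc (cosmetically corrected) copy of each
        sub over the plain _c copy when both exist."""
        subs = {}
        for f in sorted(Path(target_dir).glob("*_c*.fit*")):
            stem = f.stem
            if stem.endswith("_cc"):
                subs[stem[:-3]] = f           # _cc always wins
            elif stem.endswith("_c"):
                subs.setdefault(stem[:-2], f) # _c only if no _cc yet
        return sorted(subs.values())
    ```

    Re-running this after each session picks up whatever new files have landed in the folder, so the full realign-and-restack always sees every sub.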
  4. From the album: Ken's images

    Taken on the evenings of 27 and 28 December 2017. Equipment: Skywatcher Esprit 80 with the ZWO 1600MM-C using an Astrodon 3nm Ha filter and ZWO RGB filters, mounted on the AVX with OAG guiding. Exposures: 36 x 240s Ha, 20 x 60s B&G and 10 x 60s R at unity gain. Processed: PixInsight and Photoshop.

    © Ken Dearman

  5. From the album: Ken's images

    Taken on the evening of 27 December 2017. Equipment: Skywatcher Esprit 80 with the ZWO 1600MM-C using an Astrodon 3nm Ha filter, mounted on the AVX with OAG guiding. Exposures: 23 x 240s Ha at unity gain. Processed: PixInsight.
  6. I'd recommend looking at the new PhotometricColorCalibration process as an alternative to, and an improvement on, BackgroundNeutralization and ColorCalibration. It calibrates the image so that the background is neutral and the star colours match their catalogue B-V values. I've found this gives a much better starting point for star colour. You could also try the ArcsinhStretch process for at least some part of the stretching, as it is better at retaining colour. I find it bloats the stars, so I use it in combination with one or more of MaskedStretch, HistogramTransformation and CurvesTransformation. I'd recommend the following order whilst linear: DynamicCrop -> DynamicBackgroundExtraction (sometimes twice) -> ChannelCombination -> PhotometricColorCalibration -> (if necessary, and not always at 100%) SCNR:Green
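    The reason an arcsinh-style stretch keeps star colour is that one stretch factor, computed from pixel intensity, is applied to all three channels, so the R:G:B ratios survive the stretch. A simplified NumPy sketch of the idea (not PixInsight's actual implementation; the function name and default are mine):

    ```python
    import numpy as np

    def arcsinh_stretch(rgb, stretch=100.0):
        """Colour-preserving arcsinh stretch on an RGB array
        normalised to [0, 1]. One factor per pixel, derived from
        mean intensity, scales all three channels equally."""
        i = rgb.mean(axis=-1, keepdims=True)
        safe_i = np.where(i > 0, i, 1.0)  # avoid divide-by-zero on black pixels
        factor = np.arcsinh(stretch * i) / (np.arcsinh(stretch) * safe_i)
        return np.clip(rgb * factor, 0.0, 1.0)
    ```

    Stretching each channel independently (as a plain histogram stretch does) compresses the brighter channel more than the fainter ones, which is exactly how star cores wash out to white.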
  7. A good start. You’ve just started to capture the nebula. Did you take flats? If not, they will help. You should also make sure to crop any stacking artefacts before processing; I can see a little has been left at the bottom, and that can make it harder to remove gradients. But welcome to the wonderful world of imaging!
  8. From the album: Ken's images

    Taken on the evenings of 18 and 19 November 2017. Equipment: Skywatcher Esprit 80 with the ZWO 1600MM-C using an Astrodon 3nm Ha filter and ZWO RGB filters, mounted on the AVX with OAG guiding. Exposures: 40 x 240s Ha, 56 x 120s of L and 20 x 120s each of RGB, all at unity gain. Processed: PixInsight and Photoshop.
  9. From the album: Ken's images

    Taken on the evenings of 18 and 19 November 2017. Equipment: Skywatcher Esprit 80 with the ZWO 1600MM-C using an Astrodon 3nm Ha filter and ZWO RGB filters, mounted on the AVX with OAG guiding. Exposures: 40 x 240s Ha and 20 x 120s each of RGB, all at unity gain. Processed: PixInsight.
  10. Mine are just a few sheets of white paper placed over the end of the scope, pointed towards a clear blue sky. Nothing simpler!
  11. Thank you! I can’t imagine trying to see the Bubble visually. It’s barely visible in a 2-minute L sub, though it’s a lot clearer with the Ha filter. I’d need much darker skies.
  12. You will be surprised at what you can achieve. You’ve got some good data hiding behind the vignette and background gradients. I can’t recommend taking flats enough. They will make the single biggest impact on the images you’ve posted today. After that, a good gradient remover will also make a difference.
  13. Lovely target to start on. Ha is wonderful. Tight stars, minimal gradient and you can see results for each sub because it’s so easy to stretch. The processing fun starts when you combine it with RGB.
  14. From the album: Ken's images

    Taken on the evening of 11 November 2017. Equipment: Skywatcher Esprit 80 with the ZWO 1600MM-C using an Astrodon 3nm Ha filter, mounted on the AVX with OAG guiding. Exposures: 20 x 240s at unity gain. Processed: PixInsight and Photoshop.

    © Ken Dearman 2017

  15. From the album: Ken's images

    Taken on the evening of 10 November 2017. Equipment: Skywatcher Esprit 80 with the ZWO 1600MM-C using an Astrodon 3nm Ha filter, mounted on the AVX with OAG guiding. Exposures: 18 x 240s and 20 x 240s at unity gain. Processed: PixInsight and Photoshop.

    © Ken Dearman 2017

  16. From the album: Ken's images

    Taken on the evening of 10 November 2017. Equipment: Skywatcher Esprit 80 with the ZWO 1600MM-C using an Astrodon 3nm Ha filter, mounted on the AVX with OAG guiding. Exposures: 18 x 240s and 20 x 240s at unity gain. Processed: PixInsight and Photoshop.

    © Ken Dearman 2017

  17. Found this in an earlier post on the same topic: QSI are saying the effect of a filter is that the optical distance inside the camera reduces by a third of the filter's thickness, so to maintain the original spacing you have to add that space back outside the camera. We would never really talk about the optical length of the light path inside the camera itself, so we simplify this to just what we have control over: how much space there is between the reducer and the front face of the camera. We add space to compensate for the effect of the filter. I think it's sangria time...
  18. By copying and pasting you've "chopped" off some of the light path inside the spacer. In reality, the only way to shorten the light path is to pass it through a lens; you cannot move the light path without adding one. So in the lower example, the addition of the filter does alter the light path by pushing its point of focus further back. That point is now fixed in space relative to the rear of the reducer. You could have placed the filter anywhere between the reducer and the chip and its effect would be the same (though the filter's diameter would obviously need to be bigger the further it sat from the chip). So with a fixed point of focus there are only two ways to bring the system to focus: 1) move the chip so it is positioned at that point, by adding more space between it and the reducer; or 2) move the point of focus to where the chip is, by placing an additional lens in the system that shortens the light path by an amount equal and opposite to the filter's effect. Option 1 is much preferable, as it does not introduce any further glass/air interfaces. [Edit: I'm in Spain atm so can't draw what I mean, but if you move your diagram upwards so it is in line with the original, you will see your light path and the original light path no longer align. This is the missing piece of the light path that cannot be accounted for simply through a spacer.]
  19. You’re trying to move the light to reach the chip by changing the spacing but the light path cannot change (not without adding lenses). You have to build more space to move the chip to where the light now reaches focus.
  20. It always felt counter-intuitive to me, but your diagram shows you need to add length to the spacer for the light path to hit the chip and be in focus. If you shorten the spacer, it would push the focus point even further behind the chip. And that is how it worked for me in practice. I need 66mm without a filter and with the Astrodon (3mm) filters I need 67mm of space.
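    The "one third of the filter thickness" rule quoted earlier comes from the standard refraction shift of a plane-parallel plate: the focus moves back by t(1 - 1/n), which for typical glass with n ≈ 1.5 works out to t/3. A quick check of this against the numbers above (n = 1.5 is an assumed value, not from the Astrodon spec):

    ```python
    def focus_shift(thickness_mm, n=1.5):
        """Back-shift of focus caused by a plane-parallel filter of
        thickness t and refractive index n: delta = t * (1 - 1/n).
        With n = 1.5 this reduces to t/3."""
        return thickness_mm * (1 - 1 / n)

    print(focus_shift(3.0))  # -> 1.0 mm, matching the 66 mm -> 67 mm spacing
    ```

    So a 3 mm filter needs roughly 1 mm of extra spacer, which is exactly the 66 mm versus 67 mm difference reported above.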
  21. Now that's what I call red shift! I had a go with your jpegs, using PixInsight to blend the luminance from one and the RGB from the other, plus its new colour calibration tool that uses the stars' actual B-V values to calibrate colour across the image. Hope you don't mind!