Everything posted by alan4908

  1. A narrow band close up of NGC 6995 with Ha mapped to red and OIII mapped to blue. An artificial green was generated by Carboni's PS actions. I decided to attempt realistic looking colours by calibrating the image via Pixinsight's Photometric Colour Calibration tool. The result of a 30 hour integration is shown below and was taken with my Esprit 150. Apart from the interesting detail, I found that if you stare at the image you can see a variety of shapes - so far I've spotted a large horse (red), an owl (blue) and a deer skull (blue). What can you see? Alan LIGHTS: Ha: 40, OIII: 20 x 1800s, DARKS: 30, BIAS: 100, FLATS: 40 all at -20C.
  2. alan4908

    Deep Sky III

    Images taken with a Trius 814 and an Esprit 150
  3. alan4908

    Cave Nebula (SH2-155)

    From the album: Deep Sky III

    My first attempt at the Cave nebula (SH2-155). The LRGB image below has an Ha blend into the L and Red channels and was taken with my Esprit 150. I decided to decrease the vibrancy of the colours from the original image posted.
  4. Thanks for the comment Olly. I was quite surprised by how much blue, presumably emanating from the blue reflection nebula, the object contained. In the renditions I have seen, I could only detect this weakly, so here I was attempting to emphasize its presence. Overall, a very interesting object consisting of emission, reflection and dark nebulae - as you have pointed out above, the latter is very dark indeed. Alan
  5. Due to the poor UK weather I hadn't been paying much attention to how my automated imaging setup was progressing with this target. So, I was a little surprised when I discovered that it had been very slowly capturing data for the past six months... Anyway, here's the result of 32 hours integration time on my first attempt at the Cave Nebula (SH2-155). The LRGB image below has an Ha blend into the L and Red channels and was taken with my Esprit 150. Alan L: 20, R: 13, G: 23, B: 15 x 600s, Ha: 41 x 1800s, DARKS: 30, BIAS: 100, FLATS: 40 all at -20C.
  6. If you put the jpeg images through CCD Inspector then they clearly point to the issue being some form of camera tilt. The first image indicates a very large tilt whilst in the second the tilt is much reduced. However, the problem with using CCD Inspector on stacked jpeg images is that it can give very misleading results. So, in order to eliminate focuser droop and minimize the influence of tracking/guiding errors, I'd suggest you take 6 single subs with a very short exposure with the scope pointing vertically upwards and then run CCD Inspector on the 16 bit subs in averaging mode. I'd then repeat the test with the scope pointing towards the horizontal (this will stress all the components). Using this methodology you should be able to work out if anything is creating additional camera tilt when the scope is pointed in different directions. I've found that to minimize camera tilt you really need an all-screw connection setup. Alan
  7. Thanks for the comment Dave. As I mentioned above, I was really just using one of my finalized images from my album (Deep Sky III if you are interested) to demonstrate the technique, rather than attempting to improve the image. However, I think it is quite interesting and gives a different effect to that obtained by star reduction. Previously, I have used PS based techniques such as the PS minimum filter and the Star Shrink plug-in. As to the end result, I totally agree - it is all a matter of personal taste. Alan
  8. Yes, I agree it's an interesting technique. I don't see why you could not do a similar thing in PS (although I haven't actually tried). As far as I can see, all you need to do is end up with a contour mask which targets the star halos but protects the stars, and a substitute pixel image which is used for the halo de-emphasis - I presume this could be generated via a content-aware fill (rather than the PI MMT operation). A content-aware fill is likely to give a more accurate representation of the background replacement pixels since it would automatically mirror the background noise. Personally, I'm not sure whether I prefer the second image or not - I really was just using the image to demonstrate the technique and to improve my PI skills. Thanks! Alan
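For anyone curious what that contour-mask-plus-substitute-image arrangement might look like outside of PS or PI, here is a very loose Python sketch of the general idea. The function name, halo width, blend strength and the median-filtered "background" are all illustrative assumptions, and the median filter only stands in for a content-aware fill or PI's MMT-based replacement; it is not either tool's actual algorithm.

```python
import numpy as np
from scipy import ndimage

# Rough sketch: a "contour" mask that selects the halo region around each
# star but protects the star cores, plus a crude background estimate that is
# blended in under that mask. The core mask is assumed to come from whatever
# star detection or thresholding you normally use.

def deemphasise_halos(image: np.ndarray, core_mask: np.ndarray,
                      halo_width_px: int = 6, strength: float = 0.5) -> np.ndarray:
    # Grow the star cores outwards to cover the halos, then remove the cores
    # so the bright stellar peaks themselves are left untouched.
    grown = ndimage.binary_dilation(core_mask, iterations=halo_width_px)
    halo_mask = grown & ~core_mask

    # Substitute-pixel image: a large-scale median of the field, standing in
    # for a content-aware fill / MMT-based background replacement.
    background = ndimage.median_filter(image, size=25)

    out = image.astype(float)
    out[halo_mask] = ((1.0 - strength) * out[halo_mask]
                      + strength * background[halo_mask])
    return out
```

In practice a content-aware fill (or the MMT-based replacement) will reproduce the background noise far better than a plain median; the sketch is only meant to show where the mask protects the cores and where the substitution is applied.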
  9. OK - one thing you could also try to minimize any transient effects is to use the local normalization stacking option in PI. It's a way of minimizing large scale transient effects provided that they only occur in certain subs. To do this you: 1. First create a reference frame for each channel by the normal stacking method. Here, choose only those subs which do not contain any transient effects eg high clouds. 2. Use the Local Normalization process to create local normalization files and use the reference frame that you've just created. 3. Stack all the subs in a particular channel, add the local normalization files and change the normalization stacking option to local normalization. Alan
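As a rough illustration of what the local normalization step is conceptually doing (this is only a toy sketch, not PixInsight's actual LocalNormalization implementation; the tile size and the offset-only correction are simplifying assumptions):

```python
import numpy as np

# Toy illustration of the idea: bring each sub onto the reference frame's
# local background level on a coarse grid, so that a transient gradient
# (eg passing high cloud) confined to some subs is taken out before stacking.

def local_normalise(sub: np.ndarray, reference: np.ndarray, tile: int = 128) -> np.ndarray:
    out = sub.astype(float)
    for y in range(0, out.shape[0], tile):
        for x in range(0, out.shape[1], tile):
            s = out[y:y + tile, x:x + tile]
            r = reference[y:y + tile, x:x + tile]
            # Match the sub's local median to the clean reference's local median.
            s += np.median(r) - np.median(s)
    return out
```

The real process also solves for a local scale factor and interpolates the corrections smoothly across the frame rather than applying them blockwise, which is why you build it from a clean, transient-free reference first.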
  10. I'd guess that the green (and purple) background colours are most likely caused by light pollution. Do you image from a light polluted area, or was the moon up? Anyway, you can easily remove them in post processing. The best tool to do this is Pixinsight's Dynamic Background Extraction process. If you process in PS then I'd recommend the GradientXTerminator plug-in. For reducing green specifically, you can also use the Pixinsight process SCNR (green) or, in PS, the HLVG plug-in. Alan
  11. I noticed that Adam Block has published a written tutorial on a new Pixinsight star de-emphasis technique - see https://adamblockstudios.com/articles/star_demph It targets star halos rather than the stars themselves, so it doesn't use an erosion filter. It doesn't work well on bright stars, so you have to exclude them from your processing. The technique is also claimed to introduce fewer artifacts into the finished result than traditional erosion based techniques. So, with a desire to increase my PI processing skills, I had a go at it on one of my own dense star field images and I was quite impressed. Alan (Attached images: original, de-emphasised stars, blink GIF)
  12. alan4908

    Deep Sky II

    Taken with a SW ED80 and a SX Trius 814
  13. alan4908

    M33 (reprocessed)

    From the album: Deep Sky II

    I've often wondered how my post processing skills have changed over the years, so I decided to find out by extracting some data which I acquired c2.5 years ago and reprocessing M33. The LRGB image, with an Ha blend into the red channel, represents just over 15 hours and was taken with my ED80 on my NEQ6. The main post processing differences are:
    - I corrected for a slight camera tilt via PS; the tilt gives slightly oddly shaped stars towards the edges of the frame, and you should be able to see these defects in the old image if you zoom in.
    - PI's Photometric Colour Calibration was used on the new image.
    - PI's HDRMT was used on the Lum to increase the contrast (before, I used the PS High Pass Filter).
    - PI's Dark Structure Enhance script was used to enhance the galaxy's dust lanes.
    So, still a mixture of PS and PI but now with more PI. Alan (My original result from 2.5 years ago is in the album Deep Sky II.)
  14. Hi Gorann When I was imaging manually, I noticed that my ED80 focuser was prone to both sag and slippage hence my decision to upgrade to a Moonlite when I decided to implement an automated set up. When I purchased the Esprit 150, I decided to also purchase the Feathertouch upgrade, simply because of my previous experience. However, the Esprit 150 focuser does seem a much higher quality than the focuser which came with my ED80. FYI I know that @steppenwolf who has also implemented an autonomous imaging setup also took the decision to upgrade to a Feathertouch focuser on his Esprit 150. Alan
  15. I upgraded my SW ED80 with the Moonlite focuser and my SW Esprit 150 with a Feathertouch. Both are very good but I would say that the Feathertouch is better, although more expensive. One item to watch for (if you are into imaging) is to ensure that your focuser supports an all screw connection, this is important for minimizing camera tilt. Alan
  16. Thanks for the comment Lauren. The main post processing differences are:
    - I corrected for a slight camera tilt via PS; the tilt gives slightly oddly shaped stars towards the edges of the frame, and you should be able to see these defects in the old image if you zoom in.
    - PI's Photometric Colour Calibration was used on the new image.
    - PI's HDRMT was used on the Lum to increase the contrast (before, I used the PS High Pass Filter).
    - PI's Dark Structure Enhance script was used to enhance the galaxy's dust lanes.
    So, still a mixture of PS and PI but now with more PI. Alan
  17. I've often wondered how my post processing skills have changed over the years, so I decided to find out by extracting some data which I acquired c2.5 years ago and performing a reprocess on M33. The LRGB image with an Ha blend into the red channel represents just over 15 hours and was taken with my ED80 on my NEQ6. (If you want to see my original result from 2.5 years ago, have a look in my album Deep Sky II.) Alan LIGHTS: L: 15, R: 9, G: 10, B: 12 x 600s; H: 15 x 1800s. DARKS: 30; BIAS: 100; FLATS: 40 all at -20C.
  18. alan4908

    NGC3294

    Thanks for the comment Des. Alan
  19. alan4908

    NGC3294

    From the album: Deep Sky III

    I hadn't come across this rather small and distant galaxy (NGC3294) before, so I thought I would give it a go! At a distance of about 98 million light years, NGC3294 is located in the constellation Leo Minor. From Earth it occupies c2 x 1 arc minutes of sky, so its apparent size is quite small. It's a spiral galaxy with a slight bar at its centre. The LRGB image below represents 16 hours integration time and was taken with my Esprit 150.
  20. Software and camera hardware binning are different; however, from the perspective of trying to improve guiding errors they are equivalent. Binning does have its place in imaging, eg increasing the signal to noise ratio for faint objects or speeding up image downloads for real time plate solving, but fixing guiding issues isn't one of them. From an image perspective, indiscriminate binning means that you will be throwing away resolution and you will also be making the stars squarer. Your camera will also start to clip bright objects (eg stars) more frequently, which is information that can never be recovered. If you do go ahead with the belt modification, I think you will see improvements to your guiding and the general operation of your mount. If you still end up with non-ideal aspect ratios for your stars, then I'd suggest you simply try to reduce their impact in post processing. Alan
  21. If you bin 2x2 then your effective image scale will be 1.86 arc seconds per pixel. However, I'd be careful of using software binning as a methodology of improving your guiding performance. This is because the original guiding error will still be present in the pre-binned image, all you have done is to reduce the overall resolution in an attempt to hide the error. This technique is equivalent to taking an image, measuring the average star aspect ratio, binning the image 2x2 and then re-measuring the aspect ratio. If I do this for one of my own images, imaged at 0.7 arc seconds/pixel, then CCD Inspector gives me an aspect ratio of 8 for both images - eg it does not help. I don't believe an OAG would help. Alan
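To put numbers on that, here is the basic image-scale arithmetic; the focal length and pixel size below are placeholder values chosen so the unbinned result matches the ~0.93"/pixel figure quoted in this thread, not the poster's actual equipment.

```python
# Basic image-scale arithmetic: 206.265 converts (pixel size in microns /
# focal length in mm) into arcseconds per pixel. Binning 2x2 doubles the
# effective pixel size, so the image scale doubles.

def image_scale(pixel_size_um: float, focal_length_mm: float, binning: int = 1) -> float:
    """Image scale in arcseconds per (binned) pixel."""
    return 206.265 * pixel_size_um * binning / focal_length_mm

focal_length_mm = 800.0  # placeholder value
pixel_size_um = 3.6      # placeholder value

print(image_scale(pixel_size_um, focal_length_mm))             # ~0.93 "/px unbinned
print(image_scale(pixel_size_um, focal_length_mm, binning=2))  # ~1.86 "/px binned 2x2
```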
  22. If you want to image at 0.93 arc seconds/pix then, as a rough guide, you need your rms guide corrections to be consistently less than half of this value, eg better than about 0.46 arc seconds. If they are more, then they will impact the shape and size of stars. The information above suggests that even with a belt modification you are going to struggle to achieve satisfactory results. Alan
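The rule of thumb from this post as a quick numeric check (the RA/Dec RMS figures below are made-up example values, not measurements from this thread):

```python
import math

# Total RMS guiding error should stay below roughly half the imaging scale.

def total_rms(ra_rms_arcsec: float, dec_rms_arcsec: float) -> float:
    return math.hypot(ra_rms_arcsec, dec_rms_arcsec)

image_scale = 0.93                 # arcsec/pixel, as quoted in the post
target = image_scale / 2.0         # ~0.46" RMS

measured = total_rms(0.45, 0.40)   # ~0.60" RMS for these example values
print(round(target, 2), round(measured, 2), measured <= target)  # guiding falls short here
```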
  23. Assuming that you have stretched both the Ha and Red channels, then you first need to black-clip the Ha slightly; this will minimize the impact of a red cast and colour imbalance. In terms of blending, I normally use the PS Screen blending mode but I only blend a fraction of the Ha (eg 10 - 50%) depending on the target. For some targets, I also use the PS Lighten blending mode. When you've decided what percentage you want to combine at, I'd also suggest you check the composite blended image for any clipping, eg excessive red - just roll the information pointer around and look for anything close to 255. If you find something near this value I'd suggest reducing the Ha in this region via a mask. Alan
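If it helps to see the blend written out, here is a small numpy sketch of the screen/lighten maths and the clipping check described above. The black-clip point and the 30% opacity are just illustrative values, and this is a sketch of the standard blend-mode formulas rather than a reproduction of the exact PS workflow.

```python
import numpy as np

# All channels assumed already stretched and scaled to the 0..1 range.

def black_clip(ha: np.ndarray, point: float = 0.05) -> np.ndarray:
    """Slight black clip of the Ha before blending, as suggested in the post."""
    return np.clip((ha - point) / (1.0 - point), 0.0, 1.0)

def screen(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    return 1.0 - (1.0 - base) * (1.0 - blend)

def lighten(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    return np.maximum(base, blend)

def blend_ha_into_red(red: np.ndarray, ha: np.ndarray,
                      opacity: float = 0.3, mode=screen) -> np.ndarray:
    """Blend a fraction (opacity) of the screened/lightened Ha into red."""
    return red * (1.0 - opacity) + mode(red, ha) * opacity

def clipping_fraction(channel: np.ndarray, threshold: float = 250.0 / 255.0) -> float:
    """Rough stand-in for rolling the info pointer around looking for ~255 values."""
    return float(np.mean(channel >= threshold))

# Example with synthetic data:
rng = np.random.default_rng(0)
red = rng.random((100, 100)) * 0.8
ha = black_clip(rng.random((100, 100)))
new_red = blend_ha_into_red(red, ha, opacity=0.3)
print(clipping_fraction(new_red))
```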
  24. From my experience of the NEQ6, I found it was quite happy to guide at around 1.4 arc seconds/pix, producing subs of 30 min duration without issue. However, after spending many hours adjusting various guiding parameters, I got the impression that I was at the limit of the mount's performance capabilities. At 0.93 arc seconds/pix I think you will struggle to obtain consistent results due to the intrinsic tracking limitations of the mount, even if you are well inside its imaging weight limits. Alan
  25. Well, based on my very limited imaging of the Moon... but rather more with DSOs - I've found that the Moon will be in focus provided that you use a stellar source and you autofocus with care. What I've found is: 1. It is best to choose a focus star near the zenith - this is to reduce the impact of seeing. Seeing introduces uncertainty into the autofocus operation, so you need to try to minimize this; the larger the air mass, the more uncertainty you will create. 2. The focus star needs to be selected so that your camera exposes it on the linear part of its response - if you pick overexposed stars you can get misleading focus results. 3. I also found that some form of automated focus convergence is very useful for minimizing the effect of seeing. I found that the free version of FocusMax performs the focusing very well; however, I use ACP to find the focus star. Alan
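A loose sketch of how points 1 and 2 might be expressed in code. The 60% linearity fraction, the full-well value and the candidate star list are illustrative assumptions and have nothing to do with FocusMax or ACP themselves.

```python
import math

# Pick an autofocus star: prefer low airmass (near the zenith) and a peak
# signal that sits on the linear part of the camera response.

def airmass(altitude_deg: float) -> float:
    """Plane-parallel approximation; fine well away from the horizon."""
    return 1.0 / math.sin(math.radians(altitude_deg))

def pick_focus_star(candidates, full_well_adu=65535, linear_fraction=0.6):
    """candidates: iterable of (name, altitude_deg, peak_adu) tuples."""
    usable = [c for c in candidates if c[2] < linear_fraction * full_well_adu]
    # Among unsaturated stars, prefer the one with the lowest airmass.
    return min(usable, key=lambda c: airmass(c[1]), default=None)

stars = [("A", 80.0, 52000),   # near zenith but too bright (non-linear)
         ("B", 75.0, 30000),   # high altitude and well within the linear range
         ("C", 35.0, 25000)]   # unsaturated but at higher airmass
print(pick_focus_star(stars))  # -> ("B", 75.0, 30000)
```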