feilimb

Members
  • Posts

    162
  • Joined

  • Last visited

Reputation

71 Excellent

Contact Methods

  • Website URL
    http://ronnach.wordpress.com

Profile Information

  • Gender
    Male
  • Interests
    Angling, Software, Astronomy, Film
  • Location
    Cork, Ireland

Recent Profile Visitors

1,311 profile views
  1. Guiding is required if you wish to shoot 'long' light frame exposures and do not want either star trails or egg-shaped stars in the lights. Guiding refers to the use of a smaller telescope of short focal length (e.g. 100-200mm) mounted on the main telescope (although there are other ways to achieve it, such as an 'off-axis guider'). With such a setup, the main telescope sits on an equatorial mount which is 'tracking' the stars - but such tracking is never perfect, and it is the job of the guide scope (and associated software - which is not DSS) to 'nudge' the mount a little this way and a little that way as needed to keep a guide star in the same location on the image frame (a rough sketch of the correction arithmetic is included below). If you shoot very short exposures, it is sometimes possible to avoid the need for guiding altogether. I am assuming you are or will be using a telescope for imaging, but perhaps you are just using a camera with a lens of relatively short focal length (50-150mm)?
  2. Note: you do not need to take your flat exposures during the shoot (i.e. on the night of the light frame exposures). After shooting a sequence of light frames, at the end of the session you can simply carry in your telescope/lens with the camera still attached, making sure you do not move the camera from its orientation and do not change the focus of the telescope/lens. Then you can go to bed and return to shooting flat frames the following day (or even a few days later) when you have time.
  3. I captured about 3 hours of RGB data last Tuesday night, with an IDAS P2 LPS filter and no moon around. This was my first crack at combining RGB with the above data (H-alpha using a colour CMOS). I'm not fully happy with the process I used for combining, and I seem to have brought some (probably light pollution) reddish colour into the background sky (see especially the upper half of the image). Also, despite boosting the colour saturation in the RGB image, the colour in the final combination just seems a bit insipid and lacking any punch. I think I need to learn some more about masks too... Anyhow, here it is (about 3 hours of 5-minute subs in H-alpha, and 3 hours of 5-minute subs with a LP filter, combined):
  4. I am hoping to capture RGB this week (clear sky forecast for Tuesday) with a LP filter. This will be my first attempt at then combining this 'L' channel with RGB, so I have a lot to learn... When you say stretch the 'original image', are you referring to the H-alpha capture (stack), or a stack of RGB data?
  5. Thanks tooth, I haven't tried APP yet but I will try the trial version. In my case the G channel also had quite a bit of useful data as well as the R channel - not sure if it would allow for custom channel selection? Many thanks wimvb, this is the best version I've seen yet in terms of the detail pulled out. Any chance you could outline the high-level steps in your process? I had another 45 minutes of data from before a meridian flip that I incorporated last night, and did one more processing attempt with that. It was much the same as my previous effort, perhaps with just a little better SNR (borders added using the Crop process in PI):
  6. Thanks Nerf for having a crack there with a good chunk of detail. I have had another crack at it this morning, trying to follow @vlaiv's suggestions and render the final image in monochrome. I split each sub into R, G and B and threw away all the B's, which seemed to contribute nothing. I then split my previous master dark into R, G and B (also throwing away the 'B'). Then I stacked all the R and G component subs, along with the R and G component master dark, in DSS and chose 'Entropy Weighted Average' for the stacking algorithm (although I'm not sure if this is what vlaiv was suggesting with 'weighting each sub based on its SNR'). A rough sketch of this channel-splitting step is included below. The processing I followed was similar to my opening post, except without background neutralisation/color calibration, and performing the CurvesTransformation on 'RGB/K'. I also performed a light Deconvolution. I'm pretty happy with this 2nd result in comparison to my first effort. No noise reduction algorithm was applied.
  7. Nice one cheers Nerf, will be interesting to see what you can pull out. Thanks vlaiv for the extra detail, it does make some sense but I'm still trying to digest it all!
  8. Thanks vlaiv - when you say separate the colours, do you mean prior to stacking, to extract the RGB channels into 3 different images for each light frame, and then to stack each channel separately... or am I misunderstanding this?
  9. Thanks, yeah I actually have an LPS filter (IDAS LPS P2), but on the night I took these images there was a big moon out - and unless I am mistaken, I thought the moon would wash out the signal with an LPS filter. I would like to get some more data on the target with the LPS another time though, and try to figure out how I can combine it with the IR data.
  10. Sorry I should have said in the first post! It was a Skywatcher ED80 DS Pro, with 0.85x focal reducer - so 510mm @ F/6.37
  11. Hi all, I have relatively basic image processing skills with PixInsight and am wondering if this is why I am not getting more out of my data on the Veil Nebula, or whether there just is not a whole lot more achievable with the data. I captured 36 x 5min lights @ Gain 200, Bin 1x1 on an ASI294MC Pro, with a H-alpha filter in front of the sensor. I live in a Bortle 7/8 area and there was a big moon on the night as well. 30 darks were also used in the stack; I did capture flats, but they appeared to make the image worse so I left them out. Here is an attachment of the stack, performed with DSS, lights & darks only, as a TIFF file: Veil_LightsDarksOnly.TIF Here is what I got out of the data. The process I followed was roughly:
      1. Stacking in DSS (no alignment of RGB channels)
      2. Dynamic crop (PixInsight)
      3. RGB channel extraction, LinearFit on the R channel, then RGB re-combination (a rough sketch of this step is included below)
      4. Background neutralisation
      5. Color calibration
      6. SCNR green removal
      7. Automatic background extraction
      8. Histogram stretch
      9. Curves process used to increase saturation a bit
      The main thing I'm trying to get a grasp of here is whether my image processing or stacking process is missing some steps, or whether there can't be much more expected from the data that I have in this case?
  12. Hi all, I dusted off the ASI294MC Pro last week, and on a clear night with a 90%+ moon I captured 36 x 5min exposures of the Veil Nebula, with a Skywatcher ED80 and 0.85x focal reducer and an Astronomik HA filter (which may be slightly dodgy). I also captured 30 darks which were used in calibration (a rough sketch of this calibration step is included below). I tried capturing flats and dark flats, but I still cannot crack the dark arts of flats - when I use them in a stack I get a horrible ring around the centre of the image from some kind of over-correction. My image processing skills are not great, so perhaps someone else would like to have a try with the data too (stacked with DSS). Any pointers greatly appreciated; there is still a lot of noise in the stacked image, and I had hoped there would be less given the number of lights and darks obtained. Note #1: I extracted the RGB channels, then used LinearFit with the red channel as reference, and then recombined with LRGB combination for a basic colour calibration. Note #2: I'm pretty sure the halos around the stars are induced by the filter. I have seen the same halo effect when using the same filter on different targets (it is a filter I got for free from a kind fellow forum member just to get me started with HA captures, but he noted that there may have been some issue with the filter). Veil_LightsDarksOnly.TIF
  13. Could the amp glow on both sides be due to a meridian flip performed during the imaging session? By the way, I also have the ASI294MC Pro, and ever since I got it I have never managed satisfactory flats with it. When I stack images, the result with flats always appears to be worse than the result without any flats applied (a rough sketch of what the flat-correction step does is included below). Can I ask what your method is for capturing flats? I am using a DIY panel as per this thread: https://stargazerslounge.com/topic/325457-flatstub-a-rough-ready-diy-flats-solution/
  14. The 'flat calibration module' in Ekos has been buggy as hell for me and I would not recommend it. It often came back with a calculated ADU value that it reported as e.g. '5000.9999999999999999' (lots of 9's after the decimal point), and after that stage it would never recover (a rough sketch of the ADU check it is trying to perform is included below). Also note that if you take flats manually in AV mode, they will be CR2 files on the SD card - if you have captured lights and darks using Ekos as FITS files, then some stackers like Deep Sky Stacker will not allow you to stack a mixture of file types (i.e. it will not allow some frames in FITS and others in CR2). So, as suggested above, it is probably best to use AV mode just to find the exposure length, and then set that manually in Ekos to capture the actual flats.
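
Regarding the guiding explanation in the first post above: a minimal Python sketch of the basic idea behind a guiding correction - measure how far the guide star has drifted from its lock position and issue a proportional nudge to the mount. The guide scope focal length, pixel size and aggressiveness values are hypothetical, and this is not the actual algorithm used by any particular guiding software.

    # Hypothetical guide setup: a 200mm guide scope and a 3.75um-pixel guide camera.
    GUIDE_FOCAL_LENGTH_MM = 200
    PIXEL_SIZE_UM = 3.75
    AGGRESSIVENESS = 0.7   # correct only a fraction of the measured error each cycle

    def arcsec_per_pixel(focal_length_mm, pixel_size_um):
        # Image scale of the guide camera in arcseconds per pixel.
        return 206.265 * pixel_size_um / focal_length_mm

    def correction_arcsec(drift_pixels):
        # Convert a measured star drift (in pixels) into a mount correction (in arcsec).
        scale = arcsec_per_pixel(GUIDE_FOCAL_LENGTH_MM, PIXEL_SIZE_UM)
        return AGGRESSIVENESS * drift_pixels * scale

    # Example: the guide star has drifted 1.5 px in RA since the last guide frame.
    print(f"RA correction: {correction_arcsec(1.5):.2f} arcsec")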
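
Regarding the channel-splitting approach described in the post about rendering the Veil in monochrome: a minimal Python sketch of the idea - split each debayered colour sub into R, G and B, discard the blue plane and combine the rest into a mono frame. This is only a stand-in for the real DSS/PixInsight workflow (a plain average rather than an entropy-weighted stack, and no registration or dark subtraction), and the file layout (FITS subs with the colour axis first) and paths are assumptions.

    import glob
    import numpy as np
    from astropy.io import fits

    r_and_g_planes = []
    for path in sorted(glob.glob("lights/*.fits")):
        rgb = fits.getdata(path).astype(np.float32)  # assumed shape (3, H, W): R, G, B
        r, g, _b = rgb                               # blue carries no H-alpha signal, drop it
        r_and_g_planes.extend([r, g])

    # Plain average combine of all R and G planes into one mono image
    # (a crude stand-in for DSS's weighted stacking of registered subs).
    mono = np.mean(np.stack(r_and_g_planes), axis=0)
    fits.writeto("veil_mono_stack.fits", mono, overwrite=True)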
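
Regarding the LinearFit-and-recombine step mentioned in a couple of the posts above (extract the RGB channels, LinearFit with the red channel as reference, then recombine): a rough numpy stand-in for what that step does - rescale the G and B channels so their linear intensity distribution matches the R reference channel. This is a sketch of the idea only, not PixInsight's exact algorithm, and the array names are hypothetical.

    import numpy as np

    def linear_fit_to_reference(channel, reference):
        # Least-squares fit: find a, b such that a*channel + b best matches reference.
        a, b = np.polyfit(channel.ravel(), reference.ravel(), deg=1)
        return a * channel + b

    def match_channels_to_red(rgb):
        # rgb: float array of shape (3, H, W); R is used as the reference channel.
        r, g, b = rgb
        return np.stack([r,
                         linear_fit_to_reference(g, r),
                         linear_fit_to_reference(b, r)])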
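
Regarding the dark calibration mentioned in the Veil capture post: a minimal sketch of what that step does (in reality DSS handles it) - median-combine the 30 darks into a master dark and subtract it from each light, which is what removes the amp glow and dark current signal. Paths and file layout are assumptions.

    import glob
    import os
    import numpy as np
    from astropy.io import fits

    # Build a master dark: the per-pixel median across all dark frames is robust
    # to outliers (e.g. cosmic ray hits) in any single dark frame.
    darks = np.stack([fits.getdata(p).astype(np.float32)
                      for p in sorted(glob.glob("darks/*.fits"))])
    master_dark = np.median(darks, axis=0)

    os.makedirs("calibrated", exist_ok=True)
    for path in sorted(glob.glob("lights/*.fits")):
        light = fits.getdata(path).astype(np.float32)
        calibrated = light - master_dark   # remove amp glow and dark current
        out = os.path.join("calibrated", os.path.basename(path))
        fits.writeto(out, calibrated, overwrite=True)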
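
Regarding the flats question in the amp glow post: a minimal sketch of the flat-correction step itself (not DSS's implementation), which also shows where an over-correction ring can come from - each dark-subtracted light is divided by the master flat normalised to its mean, so any brightness pattern in the flat that does not match the actual light path gets printed into the result in reverse. File names are hypothetical, and the master flat is assumed to already be calibrated with dark flats or bias frames.

    import numpy as np
    from astropy.io import fits

    master_flat = fits.getdata("master_flat.fits").astype(np.float64)
    master_dark = fits.getdata("master_dark.fits").astype(np.float64)

    # Normalise the flat to unit mean so dividing by it preserves overall brightness.
    flat_norm = master_flat / np.mean(master_flat)

    light = fits.getdata("light_001.fits").astype(np.float64)
    # Division brightens the vignetted corners and dims the bright centre;
    # if the flat's pattern does not match the lights, this is what produces a ring.
    calibrated = (light - master_dark) / flat_norm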
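
Regarding the Ekos flat calibration module mentioned in the last post: a small sketch of the kind of ADU check that routine is trying to perform - measure the mean ADU of a trial flat and see whether it lands near a target level, adjusting the exposure until it does, after which that exposure can be set manually in Ekos. The target and tolerance values are hypothetical (a sensible target depends on the camera and its bit depth).

    import numpy as np
    from astropy.io import fits

    TARGET_ADU = 25000   # assumed target for a 16-bit image, roughly mid-histogram
    TOLERANCE = 0.10     # accept flats within +/- 10% of the target

    def check_flat(path):
        # Return the mean ADU of a trial flat and whether it is close enough to target.
        data = fits.getdata(path).astype(np.float64)
        mean_adu = float(data.mean())
        ok = abs(mean_adu - TARGET_ADU) / TARGET_ADU <= TOLERANCE
        return mean_adu, ok

    mean_adu, ok = check_flat("trial_flat.fits")
    print(f"mean ADU = {mean_adu:.0f} -> {'OK' if ok else 'adjust exposure and retry'}")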