
drb1976

Can anyone do anything with this?

Recommended Posts

I'll give it a shot. My processing skills aren't great, but the more you practice the better they become.

Share this post


Link to post
Share on other sites

I'm guessing you have a lot of light pollution; there is a dramatic gradient across the image - someone with PixInsight would probably be able to do something about it. This is a quick go in Photoshop (not that my skills in that are anything to write home about), and it reveals that there is quite a lot of data there. No doubt someone will be able to improve on this, but HTH.

Thanks.

long exposure 1.tif


I haven't got it yet - it costs quite a bit, and it comes with a 45-day free trial - so I want to build up a nice lot of data files that I can then use to test out all its possibilities before purchase.

That said, I gather from those who use it that the answer would be 'yes'.

There are a load of instruction videos here that will enable you to see some of its features and how powerful it is.

HTH



This is the best I could get it. If I had GradientXTerminator it would look a lot better.

Processing Practice.jpg



Managed to get this with PS. Removed the gradient with DBE in PixInsight LE. Basically did some curves, removed the gradient, cropped, added star colour with Noel's actions, boosted the overall colour with a soft-light action, added detail with the Camera Raw filter, and applied noise reduction. I personally like PS; it takes a while to get used to the masks etc., but as you build up your skills you can buy actions (e.g. Noel's or Annie's) and, as you find yourself doing the same things over and over, you write your own. And it's got a great history feature, so there is no mistake you make that can't be undone. I find it flexible and creative, but I know a lot of people like PixInsight - DBE in PixInsight is great for removing gradients.

long exposure.png
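For anyone curious what gradient removal (DBE-style) actually does under the hood: the idea is to model the sky background as a smooth, low-order surface fitted to background samples, then subtract it. A minimal illustrative sketch in Python/NumPy - the function name and parameters are invented here, and the real DBE does considerably more (sample rejection, spline models, etc.):

```python
import numpy as np

def remove_gradient(img, order=2, samples=32):
    """Fit a low-order 2D polynomial to a coarse grid of background
    samples and subtract it - the same basic idea as DBE.
    Illustrative sketch only, not PixInsight's actual algorithm."""
    h, w = img.shape
    # Sample the image on a coarse grid; use the median of each cell
    # so stars don't pull the background estimate upward.
    ys = np.linspace(0, h - 1, samples).astype(int)
    xs = np.linspace(0, w - 1, samples).astype(int)
    pts, vals = [], []
    for y in ys:
        for x in xs:
            cell = img[max(0, y - 8):y + 9, max(0, x - 8):x + 9]
            pts.append((x / w, y / h))
            vals.append(np.median(cell))
    pts = np.array(pts)
    # Design matrix with polynomial terms x^i * y^j for i + j <= order.
    terms = [(i, j) for i in range(order + 1)
             for j in range(order + 1) if i + j <= order]
    A = np.column_stack([pts[:, 0]**i * pts[:, 1]**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, np.array(vals), rcond=None)
    # Evaluate the fitted background over the whole frame and subtract,
    # adding the median level back so the image isn't driven negative.
    yy, xx = np.mgrid[0:h, 0:w]
    xxn, yyn = xx / w, yy / h
    bg = sum(c * xxn**i * yyn**j for c, (i, j) in zip(coef, terms))
    return img - bg + np.median(bg)
```

On a frame with a strong linear light-pollution gradient, subtracting the fitted surface flattens the background while leaving small-scale detail (stars, nebulosity) untouched.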

 



Wow, that is amazing! I did not realize I had captured that much data in there... I would love to see your workflow.

That is a phenomenal job.



  • Similar Content

    • By jjosefsen
      Hi,
      I'm working with some recent data from a new(ish) camera, and my Ha data is pretty good, but the OIII has these strange artifacts around the borders.
      If I use any kind of local normalization then it pretty much ruins most of the image, as you can see below..
      First the Ha:

       
      OIII with local normalization:

       
      OIII local normalization map - see how it fits with the artifacts:

       
      OIII without any local normalization:

       
      These edge artifacts - could they be related to a bad filter, or are they a processing artifact? Has anyone seen anything similar?
       
      I am using Astro Pixel Processor for calibration and stacking. I'm kind of stumped on this one, and I am going to yank out the OIII filter this weekend to have a looksie.
    • By Galaxyfaraway
      This is maybe my 3rd attempt at a galaxy and I am trying to figure out the best way of doing it since I live in a red zone of London suburbs.
      I took this over 2 nights (well, 1.5 really, as my guiding wasn't working and plate solver wouldn't comply after meridian flip...) with my 8" EdgeHD SCT with 0.7x reducer and Atik 490X. Around 20 Luminance subs at 10 mins each  (1x1) and around 15 RGB subs at 5 mins each, but binned 2x2. I use Astrodon filters but also have an LP filter permanently in my image train. 
      Question: should I only use RGB and create a synthetic L channel, given LP, or continue trying with the actual luminance? Gradients are horrible with luminance but RGB doesn't have as much detail (only red filter seems to be more sensitive).
      My stars are all over the place (colours pop out everywhere, in the wrong way), how can I control this better?
      Also, as I wrote, the red filter seems to have much more detail than the rest, and when I add all the channels in PS, the red just overpowers everything. (In general, how can I keep the star colours as they are and not have the red and blue go crazy? I am not sure of the name for it, but it looks like chromatic aberration on steroids.)
      Any other general tips would be great...
      Thanks in advance. GFA
      PS: I cheated with the core: just changed the temperature to make it look a bit more glowy; for some reason, I barely had any yellow colour from the data...I will post stacked images, if of interest.
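      On the synthetic-luminance question above: a synthetic L channel is just a weighted combination of the calibrated R, G and B stacks. A minimal sketch, assuming NumPy arrays; the function name and default weights are invented for illustration (in practice you would weight each channel by its SNR):

```python
import numpy as np

def synthetic_luminance(r, g, b, weights=(0.2, 0.7, 0.1)):
    """Build a synthetic L channel as a weighted sum of the R, G and B
    stacks. The weights are illustrative placeholders; weight by each
    channel's measured SNR for real data."""
    wr, wg, wb = weights
    lum = wr * r + wg * g + wb * b
    # Rescale to [0, 1] so the result stretches like a normal luminance.
    lum = lum - lum.min()
    peak = lum.max()
    return lum / peak if peak > 0 else lum
```

      A synthetic L built this way inherits whichever gradients the RGB stacks carry, so gradient removal per channel before combining still matters.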
       
       
    • By hornedreaper33
      Hey all,
      I took some really rough images of M42 the other night: alignment and focus were by eye, heavily light polluted, no calibration frames, and I have a dusty corrector plate. However, this was the first time I have shot a deep sky object, and for how rough it was I was pleased - see attached.
      I watched a lot of DSS tutorial videos last night and decided to stack the 16 images I have just to see what I get. Now, I am under no delusions as to the expected quality of the final image; however, stacking made all the stars vanish, and the overall quality of the image was less than that of a single frame. Any suggestions? I do not have an example image saved, sorry.
      Thank you.

    • By eshy76
      Hi everyone,

      I've been spending some time processing over a cloudy Christmas and realised the thing I find most daunting, difficult and annoying is creating star masks. So my question is: is there a way of creating star masks (in PixInsight preferably, but open to other ways!) which is (a) always accurate, (b) relatively quick, and (c) repeatable?

      I've worked through LVA tutorials and looked at David Ault's technique. I also have the Bracken book to go through. Main techniques seem to be:

      1. Stretch the extracted lightness, clip the low end, bring down the highlights, then use the Star Mask process - I get very inconsistent results with this approach
      2. Similar to the above, but use MMT/MLT to remove nebulosity to create a support image, then use Star Masks at different star scales to capture all the stars and PixelMath to put them all onto one image - very time-consuming, I find, with lots of noise-setting fiddling

      I am very interested to see how people go about this and whether there are any neat tips and tricks to help the process! Thanks!
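      For what it's worth, the core of a quick, repeatable star mask is a robust threshold plus a small dilation. A hypothetical sketch in Python/NumPy - this is not PixInsight's StarMask algorithm, just the underlying idea, and the function name and defaults are invented:

```python
import numpy as np

def star_mask(lightness, sigma=3.0):
    """Repeatable star mask: threshold the lightness image at
    median + sigma * (MAD-based noise estimate), then grow the mask by
    one pixel to cover star edges. A rough, illustrative stand-in for
    PixInsight's StarMask process, not a replica of it."""
    med = np.median(lightness)
    noise = 1.4826 * np.median(np.abs(lightness - med))  # robust sigma
    mask = lightness > med + sigma * noise
    # Dilate by one pixel in all eight directions. np.roll wraps at the
    # image edges, which is harmless for a mask like this.
    grown = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            grown |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return grown.astype(float)
```

      Because both the median and the MAD are robust statistics, the same image always yields the same mask - which addresses the repeatability part, if not the multi-scale capture that StarMask itself offers.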

       
    • By coatesg
      Not quite sure what to put in the title here! I've been thinking about my future possible approaches to deep-sky imaging, especially looking at Emil Kraaikamp's approach in taking thousands of 1 sec images. The datasets this will produce are obviously huge - a single frame from my QHY163M is ~31.2MB in size. Even at 5 sec subs, the volumes of data are 22+GB/hr.
      Now, that's fine in theory to process (though it'll take a *long* time to chew through them, even on an 8-core i7!) - sufficient diskspace is practical, and I'd scrub the unreduced files once happy anyhow. However, long-term storage is an issue: I couldn't keep all the subs here; even with poor UK weather, you're potentially looking at tens of terabytes of data in the relatively short term, worse when backing it all up.
      Is there a feasible approach to keeping only stacked, reduced data (which is obviously much smaller) whereby you can still add to the data at a later point in time? I was thinking along the lines of:
      Take x hrs - save reduced, stacked data as a single linear frame.
      Later on (possibly years later!), take another y hrs of images - somehow combine with the x hrs before?
      (and repeat as needed)
       
      Could this be achieved while still allowing relevant pixel rejection, weightings, noise reduction, etc.? Does anyone have a PI workflow that allows this? I guess the same approach applies to standard CCD data, though the data savings are orders of magnitude smaller...
      Thanks!
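      The "keep only the reduced stack" idea does work arithmetically if you store a running weight (total integration time or frame count) alongside each stack: a later session can then be folded in as a weighted average. A minimal sketch with invented names, assuming NumPy arrays - note the caveat in the comments about what is lost:

```python
import numpy as np

def combine_stacks(stack_a, weight_a, stack_b, weight_b):
    """Merge two already-reduced linear stacks into one, weighting by
    total integration (or frame count). Keeping the running weight
    alongside the stack lets you fold in new data years later without
    the original subs. The trade-off: pixel rejection can never act
    *across* sessions this way - each session's rejection only ever
    saw its own subs, so a satellite trail surviving one session's
    stack is baked in."""
    total = weight_a + weight_b
    merged = (weight_a * stack_a + weight_b * stack_b) / total
    return merged, total
```

      So the stack-plus-weight pair is all you need to keep per target, at the cost of cross-session rejection and any later re-weighting by measured sub quality.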