M33 - Reprocess using a couple of new PI tools


Back in October 2016, I did a short-ish M33 - 1 hour each of RGB, 2 hours of Lum and a paltry 80 mins of Ha.  The thread can be read here: https://stargazerslounge.com/topic/278949-m33-with-super-luminance/?tab=comments#comment-3054617

Whilst I was on holiday, a new version of PI was released.  This contained a new process called PhotometricColorCalibration.  The idea, as best as I can understand it, is: plate solve your image; compare it with a colour image from a 'reference' catalogue; and adjust the colour calibration to try to match that reference.  The reference catalogue used is the AAVSO Photometric All-Sky Survey, or APASS.  There has been some debate on the PI forum about whether or not it might be better to use a different catalogue.  However, I only skim-read this - I can only get a few posts into a typical PI forum thread without wanting to poke sharpened matchsticks into my tender areas.  Anyhow, I thought I'd look for an RGB image to try out this new process, and M33 came up.
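For what it's worth, the core idea - fitting a scale factor per channel so that measured star colours line up with catalogue values - can be sketched in a few lines of Python.  This is only a toy illustration of the principle, not PI's actual implementation; the function name is mine, and it assumes the stars have already been matched between the image and the catalogue (which is what the plate solve is for):

```python
import numpy as np

def photometric_white_balance(measured, catalog):
    """Fit a per-channel scale factor (least-squares slope through the
    origin) that maps measured star fluxes onto catalogue fluxes.
    `measured` and `catalog` are (n_stars, 3) arrays of R, G, B fluxes
    for stars already matched between the image and the catalogue."""
    measured = np.asarray(measured, dtype=float)
    catalog = np.asarray(catalog, dtype=float)
    # Best-fit slope through the origin for each channel independently
    scale = (catalog * measured).sum(axis=0) / (measured ** 2).sum(axis=0)
    return scale / scale[1]  # normalise so the green channel stays at 1.0
```

Multiplying each channel of the image by its factor would then give star colours that agree, on average, with the catalogue.  The real process does a great deal more than this, of course.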

I was also alerted, by the one and only Barry Wilson, to another process called 'LocalNormalization'.  I don't know if this is a brand new process in the latest release, or if it has been around for a while and I hadn't noticed it before.  Barry gave me a brief run-down on how to use it.  You choose your best sub as a reference and then run the process on all of your other subs, just prior to ImageIntegration (i.e. after StarAlignment) - or at least that is what I did.  LocalNorm files are produced, and you add these to the ImageIntegration file list (a bit like you add the Drizzle files).  Barry suggested that this process might reduce the demands on (or even the need for) DBE - and certainly the stacks did seem reasonably 'flat'.
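As I understand it, the underlying idea is to match the local background level and spread of each sub to the reference, region by region, which is why it can take some of the load off DBE.  Here is a deliberately crude Python sketch of that principle (simple tile-by-tile median-and-spread matching) - PI's LocalNormalization is far more sophisticated, and everything here is my own illustrative invention:

```python
import numpy as np

def local_normalize(target, reference, tile=64):
    """Very rough sketch of local normalisation: match the median and
    spread of each tile in `target` to the corresponding tile of
    `reference`, which flattens large-scale gradients relative to the
    reference sub.  Assumes 2-D (mono) arrays of the same shape."""
    out = np.empty_like(target, dtype=float)
    h, w = target.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            t = target[y:y+tile, x:x+tile].astype(float)
            r = reference[y:y+tile, x:x+tile].astype(float)
            ts = t.std() if t.std() > 0 else 1.0  # guard against flat tiles
            # rescale the tile's spread to the reference, then shift its median
            out[y:y+tile, x:x+tile] = (t - np.median(t)) * (r.std() / ts) + np.median(r)
    return out
```

A real implementation would blend the corrections smoothly across tile boundaries rather than applying them per tile, but the sketch shows why the stacks come out looking 'flat'.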

I should warn that I have only a vague idea of what I am doing with these processes, and it may well be that I am using them incorrectly.  Please consult with a PI Expert (or 'Sensei', as I believe they prefer to be called) before trying these things for yourselves. 

The new version does seem less 'muddy' to my eyes.  I have a vague recollection of seeing some M33 images with nice 'ruddy' cores, and in the earlier version I was probably chasing that.  I am still trying to aim for subtle ... largely in an attempt to compensate for my personality, which is anything but.

This is my Esprit 120 and QSI 690 on the Mesu 200: 6 x 600s each of RGB, 24 x 300s of Lum and 4 x 1200s of Ha.  If I were doing this again I would probably try to double down (at the very least) on everything:


Edited by gnomus


