

M33 - Reprocess using a couple of new PI tools



Back in October 2016, I did a short-ish M33 - 1 hour each of RGB, 2 hours of Lum and a paltry 80 mins of Ha.  The thread can be read here: https://stargazerslounge.com/topic/278949-m33-with-super-luminance/?tab=comments#comment-3054617

Whilst I was on holiday, a new version of PI was released.  This contains a new process called PhotometricColorCalibration.  The idea, as best as I can understand it, is: plate-solve your image; compare it with a colour image from a 'reference' catalogue; and adjust the colour calibration to try to match that reference.  The reference catalogue used is the AAVSO Photometric All-Sky Survey, or APASS.  There has been some debate on the PI forum about whether or not it might be better to use a different catalogue.  However, I only skim-read this - I can only get a few posts into a typical PI forum thread without wanting to poke sharpened matchsticks into my tender areas.  Anyhow, I thought I'd look for an RGB image to try out this new process, and M33 came up.
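For the curious, the underlying idea can be sketched in a few lines.  This is emphatically *not* PixInsight's implementation (consult your nearest Sensei for that) - just a toy Python illustration of the principle: for each channel, compare the instrumental magnitudes of matched stars with their catalogue magnitudes, and derive a scale factor that brings all three channels onto the same photometric zero point:

```python
import math

def instrumental_mag(flux):
    # Magnitude from a measured star flux (arbitrary zero point)
    return -2.5 * math.log10(flux)

def channel_zero_point(fluxes, catalog_mags):
    # Mean offset between catalogue and instrumental magnitudes
    # for the matched stars in one channel
    diffs = [m - instrumental_mag(f) for f, m in zip(fluxes, catalog_mags)]
    return sum(diffs) / len(diffs)

def white_balance_factors(channels, catalog):
    # channels: {"R": [star fluxes], ...}; catalog: the matching
    # catalogue magnitudes per channel.  Scale each channel so its
    # zero point lines up with green's.
    zps = {c: channel_zero_point(channels[c], catalog[c]) for c in channels}
    ref = zps["G"]
    return {c: 10 ** (0.4 * (ref - zps[c])) for c in zps}
```

So if the red channel consistently measures half the flux the catalogue predicts (relative to green), red gets multiplied by 2.  The real process does considerably more (plate solving, star matching, robust fitting), but that is the gist as I understand it.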

I was also alerted, by the one and only Barry Wilson, to another process called 'LocalNormalization'.  I don't know if this is brand new in the latest release, or if it has been around for a while and I hadn't noticed it before.  Barry gave me a brief rundown on how to use it.  You choose your best sub as a reference and then run the process on all of your other subs, just prior to ImageIntegration (i.e. after StarAlignment) - or at least that is what I did.  LocalNormalization files are produced and you add these to the ImageIntegration files (a bit like you add the Drizzle files).  Barry suggested that this process might reduce the demands on (or even the need for) DBE - and certainly the stacks did seem reasonably 'flat'.
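As a rough mental model of what LocalNormalization is doing - and again, this is a simplified sketch, not PixInsight's actual algorithm, which fits smooth scale-and-offset functions rather than hard tiles - imagine matching the background of each tile of a sub to the corresponding tile of the reference:

```python
def median(xs):
    # Median of a list of numbers
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def tile_median(img, ty, tx, tile):
    # Median of one tile-by-tile block of a 2-D image (list of rows)
    vals = [img[y][x]
            for y in range(ty, min(ty + tile, len(img)))
            for x in range(tx, min(tx + tile, len(img[0])))]
    return median(vals)

def normalize_to_reference(img, ref, tile):
    # Shift each tile of img so its local median matches the reference's,
    # flattening gradients that differ between subs
    out = [row[:] for row in img]
    for ty in range(0, len(img), tile):
        for tx in range(0, len(img[0]), tile):
            delta = tile_median(ref, ty, tx, tile) - tile_median(img, ty, tx, tile)
            for y in range(ty, min(ty + tile, len(img))):
                for x in range(tx, min(tx + tile, len(img[0]))):
                    out[y][x] += delta
    return out
```

You can see why this might take some of the load off DBE: each sub's local background is nudged towards the reference before integration, so differing gradients between subs get evened out rather than averaged in.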

I should warn that I have only a vague idea of what I am doing with these processes, and it may well be that I am using them incorrectly.  Please consult with a PI Expert (or 'Sensei', as I believe they prefer to be called) before trying these things for yourselves. 

The new version does seem less 'muddy' to my eyes.  I have a vague recollection of seeing some M33 images with nice 'ruddy' cores, and in the earlier version I was probably chasing that.  I am still trying to aim for subtle... largely in an attempt to compensate for my personality, which is anything but.

This is my Esprit 120 and QSI 690 on the Mesu 200: 6 x 600s each of RGB, 24 x 300s of Lum and 4 x 1200s of Ha.  If I were doing this again I would probably try to double down (at the very least) on everything:


Edited by gnomus

