PixInsight - newbie on trial licence

Leo Triplet dark lights2-1 test cs6.jpg

 

Dave, I have no worries with you using the data; I'm glad you got some practice from it.  The above image is what I got from stacking the same lights & calibration files in DSS and then tweaking in CS6.  Below is a re-tweak to address the gradient, but it's not much better.  You can still see the gradient, which I struggled to get rid of.

The PI result, though, is a long way off.  I just don't understand why my image in PI is different to yours, with you not getting the vignetting.  I used the same tutorial as you and followed it religiously, but I included my darks.

I hope someone can identify what is happening in the calibration stage in PI to affect the final image so much after just a crop & ABE.

Leo Triplet dark lights2 copy.jpg


Downloading the raw images takes too long, but I can have a look at the final stacked master frame if you want. If you post a link to the stacked, integrated file, I'll process it with PI and get back to you.


That is one giant image you managed to shoot.

If stars are not undersampled (undersampled stars look square), drizzle only increases the file size; it does not improve the image. You may also have found that processing such a large image takes ages.

Generally, cameras with large pixels on short focal length scopes can benefit from drizzle. DSLR images, which usually have much smaller pixels, rarely benefit.
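To put a number on "undersampled": the image scale is 206.265 × pixel size (µm) ÷ focal length (mm) arcsec/pixel, and drizzle only starts to pay off when that scale is coarse compared to the seeing. Here is a minimal Python sketch of the arithmetic (the camera and seeing numbers are illustrative, not from this thread):

```python
# Rough check of whether drizzle is worth using: compare image scale to seeing.

def image_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcseconds per pixel (206.265 arcsec per milliradian)."""
    return 206.265 * pixel_um / focal_mm

scale = image_scale(pixel_um=4.3, focal_mm=750)   # e.g. a DSLR on a small scope
seeing = 3.0                                      # arcsec, typical seeing
# Undersampled (square stars) roughly when the scale exceeds half the seeing.
print(f"{scale:.2f} arcsec/px -> drizzle useful: {scale > seeing / 2}")
```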

That being said, I did a quick and dirty process:

LVA_stackedwoflats.jpg

 

Here are the DBE settings I used. Since the gradient was mostly caused by vignetting, I used division as the correction method.

dbe_settings.png

And here's the complete process description. To increase colour saturation, I used a technique called LLRGB, where the luminance data is extracted and stretched, while the RGB image is blurred (MultiscaleLinearTransform) and strongly saturated (CurvesTransformation). After that, the luminance is added back to the RGB data.

process_container.png


12 hours ago, julian489289 said:

Dave, prior to looking at PI I did use DSS to stack and tweak in PS CS6.  I got a great result from DSS and not too bad a final image after PS.

However, I thought that I would try PI and hoped to get an even better result, but I am somewhat disappointed and reluctant to spend €280.

Can I just ask why you dropped out the darks?  Was it the darks that were causing the vignetting issue above?

I downloaded one dark and stretched it to have a look, and it was a perfectly dark, flat image with no gradient, so I decided not to bother with them. A number of people have said that you actually don't need darks for DSLR data, but I'm not sure if that's true.  I don't think the strange vignetting was coming from them.  I did go ahead and buy the licence for PI because of its amazing power.  It does have a steep learning curve, but the results others achieve are really excellent.  Having splashed the cash, I am going to keep practicing to improve my technique.


Love it Wim, that's a nice result; love the colours.  Thanks for taking the time to help me.  I take it that a full stretch with STF would blow the image out considerably, and that you would never stretch it that much in processing?

I see what they mean about the steep learning curve!  Regarding the processes you applied, are there any good tutorials I could watch to see what you are doing?

Regarding the drizzle function, yes, it did take a long time to process!  With my DSLR, then, I take it I shouldn't bother?

Dave,

I have looked through each individual sub but cannot see where the strange vignetting is coming from either.  I am working with old data on my PI trial, so I am still unsure whether to make such a purchase just yet.  I also have some data for Andromeda, so I may have a play with that before my trial expires.

Thanks again to you guys for your help. 


The tutorials from Harry Page (harrysastroshed.com, he's a member here) and Kayron Mercieca (Lightvortexastronomy.com) are a good place to start. I also liked Richard Bloch's video tutorials (google: pixinsight tutorial richard bloch). If/when you decide to get the full licence, you'll also want to buy Warren Keller's book Inside PixInsight and eventually dive into Alejandro Tombolini's tutorials; these are more advanced.

Cheers,


On 17/10/2016 at 20:18, julian489289 said:

so I may have a play with that before my trial expires

In case it helps, I downloaded and looked at all your raw data to try to see where the problems lay.

First, I noticed that the raw data was gathered over a long time frame: lights and flats from 21st Feb 2015, short darks from Oct 2014 and long darks from Nov 2015, which will make your post-processing with a DSLR more difficult. With a DSLR it is important that the ambient temperature of the camera is as close as possible for all the frames taken, so that sensor performance is well matched. You can take darks, flats and bias separately from the lights as long as the camera temperature is similar, and a closer date range gives a better match because it takes into account changes to the camera sensor over time.

The flats are overcompensating. They are a little overexposed, but the main problem seems to be with whatever light source you used to create them. If you used a laptop screen or iPad etc., these produce polarised light, which can introduce gradients of its own through interactions with the anti-aliasing filter on the DSLR sensor. This is easy to correct by adding a sheet or two of plain white printer paper in front of the screen: the paper depolarises the light, and it also lets you increase the exposure time, which helps reduce the chance of strobing effects creating dark bars in the flats. If you do use paper as an absorber, move it a little between each flat exposure to even out any density variations. If you used the white-tee-shirt-over-the-telescope method to create your flats, then possibly the light source was not very even; when using dawn or dusk sky flats, the telescope needs to be pointing at around 45 deg altitude in the opposite direction to the sun, while the sun is below the horizon.

In PixInsight the default setting for CR2 raws is to observe the camera-recorded orientation of the images and apply that orientation in pre-processing. The first step, then, is to disable this function and force PI to always read the CR2 files in sensor-pixel order and not auto-rotate the image. Go to the Format Explorer tab on the left side of the desktop, select the DSLR_RAW option, click on Edit Preferences and tick the box "No image flip". This means you can leave the camera set to "Auto rotate" for daytime terrestrial imaging and not have to remember to switch auto-rotate off for astro imaging.

 

Looking at your lights only, without any calibration, just aligned and combined, you can see what appears to be a rectangular shadow outline within the main image. This looks like a reflection of the DSLR sensor, reflected back from an optical surface in the light path, possibly a flattener. It might also be due to light leaking into the viewfinder window during your captures. The camera should either have been supplied with a clip-on mask to fit over the finder window, or it has a lever next to the finder window that drops a mask internally; either way, make sure the finder window is masked during captures to prevent stray light getting inside the camera.

 

You can see the same rectangular shadow (very faintly) in your flats:

 

 

After full calibration with flats and before DBE:

 

You can see the rectangular shadow is not fully corrected during calibration. These shadows make using DBE a little more complicated, because they are not a smooth gradient that can be removed with just a few sample points; the shadow is a structure that needs to be sampled out. In this case DBE was set up with 50 sample points per row, all the points overlapping real galaxy and star structure were carefully moved away, and a standard DBE subtraction was carried out. This leaves a smooth image background with most of the background sensor structure removed:

 

 

And this is the reject mask generated:

 

From this point on you can take the image through the various steps that Wim laid out above. Wim began with the "no flats" drizzle-integrated source image, whereas this work has been done with the "full" calibration set, even though it is compromised by the lack of coherent flats and dedicated bias. Hopefully it will give you an idea of how to proceed when you do have a really good source data set to work with.

A very simple noise reduction, stretch and colour saturation adjustment produced the image below. Wim's version used rather more post-DBE steps. I didn't push the colour saturation too far because the galaxy core was becoming a little too blue, but with suitable masking there is a lot more in your image to reveal:

 

HTH

William.

 


  • 2 weeks later...

William, thanks for the time and effort you have spent analysing and reworking my data.  I love the final result as well; I can see where experience excels in PI.

I have, over time, built a dark library, trying to save time on the night and maximise my light exposure time.  I suppose 6 months is just about the limit for library darks?  I have recently read, though, that darks are not really needed if I dither my light frames; is this correct?

For my flats, I mainly use a laptop screen running the program "Al's Virtual Light Box"; usually I set it at 75 (range 0-255) and take flats at 1/20 or 1/30, or else I use my iPad.  You advise that the flats are overexposed; what would you recommend as correctly exposed?  As it happens, I was out last night having a go at M33 and took flats at 1/20 or 1/30, so maybe those are a little overexposed too?

Screenshot (556).png  Screenshot (557).png

Should I try to achieve the histogram peak more towards 50%?  Will try the paper as well.

I am not sure about the reflection of the DSLR sensor from an optical surface in the light path, as I do not use a flattener and have nothing between the DSLR and OTA in the light train.  As for light possibly leaking into the viewfinder window during my captures, I usually use the camera strap clip over the viewfinder, unless on this occasion I forgot?  Though I did remember last night, he he!!

I will have a go with my new data from last night in PI whilst I still have my trial period.

Thanks again, your help and advice is greatly appreciated.

 


How to determine the correct setting in Al's Virtual Light Box for DSLR flats:

For the flats, first take a RAW dark frame using the same shutter speed you normally use for the flats (in your example 1/20th to 1/30th second). Open it in PI, debayer it if that is not done automatically, then move the cursor over the image and read the average pixel values for the R, G & B channels on the bottom info bar. This is the noise floor of the image, and you want the darkest unobscured areas in the actual flats to be at least 0.100 counts above it (1.000 being the normal maximum reading).

For my Nikon a typical dark frame background reading would be R:0.001, G:0.004, B:0.005.

Next take a flat frame with a long exposure, say 5 seconds, to totally saturate the sensor. Open the image in PI, debayer it and again measure the pixel values in each channel; you should see all channels read 1.000, or thereabouts. Note the value.

Now to the flats proper. Take a flat frame, with a sheet of paper over the screen to depolarise the light, using your usual setting in Al's Virtual Lightbox. Open and debayer the flat in PI, then use the STF tool, uncoupled, to better visualise the image. Move the cursor over the image and read out the pixel values. Check that, for each colour channel, the minimum pixel value in the darkest parts of the unobscured image is at least 0.100 counts above the background level you measured for the darks, and that the maximum value in the middle of the image is not close to saturation. For flats with my Nikon and the standard lens, using pre-dawn or after-sunset sky flats, I measure R:0.50, G:0.63 and B:0.70 in the centre, and R:0.15, G:0.20 and B:0.25 in the vignetted corners. Be careful not to measure any obscured parts of the flat, where the image circle does not cover the full size of the detector, for example.

Your values will be somewhat different, but you get the idea: the values in the flats must be above background in the darkest parts of the image and below saturation in the brightest.
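If you want to automate that check outside PI, here is a minimal Python sketch (assuming numpy and rawpy, a LibRaw wrapper, are installed; the file names are hypothetical):

```python
import numpy as np
import rawpy

def channel_stats(path: str):
    """Debayer a raw frame linearly; return per-channel (min, mean, max) in 0-1 units."""
    with rawpy.imread(path) as raw:
        # Linear output (gamma 1:1), no auto-brightening, 16-bit.
        rgb = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)
    rgb = rgb.astype(np.float64) / 65535.0
    # Crop to the illuminated area first if the image circle is smaller than
    # the sensor, so obscured corners don't skew the minimum.
    return {ch: (rgb[..., i].min(), rgb[..., i].mean(), rgb[..., i].max())
            for i, ch in enumerate("RGB")}

dark = channel_stats("flat_dark.cr2")   # noise floor at the flat shutter speed
flat = channel_stats("flat.cr2")

for ch in "RGB":
    floor = dark[ch][1]                 # mean dark level for this channel
    lo, _, hi = flat[ch]
    ok = lo >= floor + 0.100 and hi < 0.98   # above floor, below saturation
    print(f"{ch}: min {lo:.3f}, max {hi:.3f} -> {'OK' if ok else 'adjust exposure'}")
```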

If your flat does not meet this, take another flat and increase or decrease the value in Al's Virtual Lightbox until you get into that range. You can then use that same setting in the future (for the same optical setup).

When the flat is used in calibration, the software first normalizes it, dividing the flat by its own mean, and then divides each light frame by the result, so what matters is the pattern of relative brightness across the flat, not its overall level. If the darkest pixels read 0.2 and the brightest 0.5, the correction is the same as if they read 0.4 and 1.0: in both cases the corners are corrected relative to the centre by the same ratio. As you increase or decrease Al's Lightbox setting, or the exposure time, all the pixel values scale up or down together and those ratios stay constant, so the absolute "brightness" of the flat does not matter much; what does matter is that no part of the flat sits down at the noise floor or up at the saturation level.
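Here is a toy demonstration of why the flat's absolute level cancels out; a numpy sketch with synthetic data, not PI's actual calibration code:

```python
import numpy as np

# Synthetic "true" sky and a vignetting pattern (bright centre, darker corners).
true_sky = np.full((100, 100), 0.30)
yy, xx = np.mgrid[-1:1:100j, -1:1:100j]
vignette = 1.0 - 0.25 * (xx**2 + yy**2)      # 1.0 at centre, ~0.5 in corners

light = true_sky * vignette                  # what the camera records

# Two flats with the same pattern but very different exposure levels.
flat_dim = 0.25 * vignette
flat_bright = 0.80 * vignette

def flat_correct(light, flat):
    """Divide by the mean-normalized flat, as calibration software does."""
    return light / (flat / flat.mean())

a = flat_correct(light, flat_dim)
b = flat_correct(light, flat_bright)
print(np.allclose(a, b))                     # True: absolute level cancels
print(round(a.std() / a.mean(), 6))          # ~0: vignetting removed
```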

Here is a picture of a correctly exposed flat in PI from a Nikon, annotated with the cursor pixel values for each colour channel at the centre of the field and at the edge:

 

For the darks, you can dither instead; it is a better solution, but it needs a bad pixel map as well to get the best results. Otherwise you have to use fairly extreme sigma clip settings during combination to fully remove the hot pixels, and that risks throwing away some real data. Used together with a bad pixel map, dithering and a moderate sigma rejection setting will produce better corrected images from uncooled DSLRs than darks alone.
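For reference, the rejection step works roughly like this; a simplified numpy sketch, not the exact algorithm any particular stacker implements:

```python
import numpy as np

def sigma_clip_stack(frames: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Mean-combine aligned frames of shape (N, H, W), rejecting values more
    than `sigma` standard deviations from the per-pixel median. Because
    dithering shifts a hot pixel to a different sky position in every sub,
    it sticks out of its pixel stack and gets clipped."""
    med = np.median(frames, axis=0)
    std = frames.std(axis=0)
    keep = np.abs(frames - med) <= sigma * std
    counts = np.maximum(keep.sum(axis=0), 1)          # avoid divide-by-zero
    return np.where(keep, frames, 0.0).sum(axis=0) / counts
```

A lower sigma rejects more aggressively but starts clipping real signal, which is why pairing dithering with a bad pixel map lets you stay at a moderate setting.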

You will still need a dark for calibrating the flats, though, and for this use a super-bias master as the dark: take around a hundred bias frames and combine them using an average, not sigma clip, because you need all the rogue pixels to count; then save the image with a new name, Flat_Dark. When you calibrate the flats, use the Flat_Dark master as a dark frame only, and don't add extra bias frames, since the bias is already in the flats and the Flat_Dark will calibrate it out. It sounds complicated but is fairly straightforward in practice.
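In script form the idea is just this (a sketch assuming the frames are already loaded as float arrays; file handling is omitted):

```python
import numpy as np

def make_flat_dark(bias_frames: np.ndarray) -> np.ndarray:
    """Average-combine ~100 bias frames into a Flat_Dark master. A plain
    average, not sigma clip, so every rogue pixel contributes and stays in."""
    return bias_frames.mean(axis=0)

def calibrate_flat(flat: np.ndarray, flat_dark: np.ndarray) -> np.ndarray:
    """Subtract the Flat_Dark master from a flat. No separate bias
    subtraction: the bias is inside both frames and cancels here."""
    return flat - flat_dark
```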

I have never bothered too much with software light boxes etc. for flats. Usually I just open a blank white Word document, set it to full screen, put the brightness on max, put a sheet or two of paper on the laptop or iPad screen, and adjust the exposure time until I get into the right range, using PI to read the max/min pixel values. Then it is just a matter of taking as many flats as possible and moving the paper a little between each shot to average out any inconsistencies in the paper. The combined flat should show no structure, just a nice smooth gradient right across the frame, as in the above image, when an uncoupled STF is applied in PI.

The use of ADU values or histograms is often quoted as a panacea for good flats, and I have been guilty of that in the past myself, but unless you know the distribution of those average digital units in the flat it can lead you up the garden path. Once you measure and understand the distribution of pixel values in the flat, and can relate that to a particular ADU value or histogram shape, then you can use the ADU in future, or look at the histogram shape, to judge whether the flat exposure was good. But this will only be relevant to one particular telescope, camera, set of filters and flat light source; you can't say, for example, "22K ADU works for my 8" Newt and DSLR" and expect that to be true when imaging with a QHY9 and 100mm frac.

I have been imaging recently with a TS 100mm Quad and it has the most bizarre flat field I have ever seen: not a gentle rolling landscape like the South Downs, more like Mont Blanc. Nothing is wrong with the scope; it is just a result of the optical configuration of the quad lens assembly. But it took a lot of experimenting with different flat exposures before I was able to capture the dark vignetted edges with sufficient headroom above the noise floor and not clip the centre peak.

H.T.H.

I've reread this a good few times, don't think I missed anything.

William.

 


Thanks William, I will play around with my flats.

However, last night I got 1 hr of lights on M33 (about 45 minutes after discarding a few), with no darks, just bias & flat calibration frames, all processed in PI thanks to everyone's help and advice:

M33_final_image.fit

M33_final_image.jpg

Hopefully I'll get loads more data on this one, see how that changes the result, and see if I can get some colour in the stars etc.


One way to boost colour:

Create a luminance from your colour data. Process this for maximum detail, noise reduction, etc

Do a histogram stretch on the colour image, then blur by deleting the first 4 wavelet layers.

Create a luminance mask and apply to the colour image, protecting the background. Then use the colour saturation tool aggressively to reveal colour. Remove the mask

Finally, apply lrgb combination with the detailed lum image as L to the colour boosted rgb image.

This usually reveals colour if there is any.

This is a rough outline of a colour boosting workflow.

Good luck


Hello Wim.

Apologies, I am uncertain about your workflow and need some clarification, if that is OK please.

1.   Create a luminance from your colour data. Process this for maximum detail, noise reduction, etc.  -    IS THIS STEP ONE OF THE PROCESS? IF SO, HOW WOULD I CREATE A LUMINANCE TO PROCESS FOR MAX DETAIL?  IS THIS THE EXTRACT CIE "L" COMPONENT METHOD TO DO THE NR?

2.    Do a histogram stretch on the colour image, then blur by deleting the first 4 wavelet layers.   -  IS THIS STRETCH OF THE RESULT FROM NR AND I SELECT THE FULL RGB SETTING ON THE HISTOGRAM TO STRETCH?  WHERE DO I ACCESS THE WAVELET LAYERS TO DELETE?

3.     Create a luminance mask and apply to the colour image, protecting the background. Then use the colour saturation tool aggressively to reveal colour. Remove the mask.

4.     Finally, apply lrgb combination with the detailed lum image as L to the colour boosted rgb image.  -   IS THE LUM IMAGE THE RESULTING IMAGE FROM STEP 1 OR 2 & THE COLOUR BOOSTED IMAGE FROM STEP 3?    IS THERE A TUTORIAL FOR THIS?

 

Sorry to pester, but I want to fully understand the PI process, and I appreciate your time in assisting me.


1. After image integration (stacking) you would always start by cropping the image to remove stacking artefacts. After that, the next steps are background extraction (either ABE or DBE), BackgroundNeutralization and ColorCalibration. Once these four steps are done, set the RGBWorkingSpace (it's under ColorSpaces in the Process menu) luminance coefficients to 1. Then extract the luminance: there's a button for this on the toolbar, or use Process/ColorSpaces/ChannelExtraction, select CIE L*a*b*, and check "L" only.

You now have a mono image made from your colour data. Process this image in the normal way, i.e. sharpening, noise reduction, stretching, etc.
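Outside PI, what this step amounts to (with the working-space luminance coefficients all set to 1) is an equal-weight channel average pushed through the CIE L* curve. A numpy sketch of that idea, an approximation rather than PI's own code:

```python
import numpy as np

def cie_lightness(rgb: np.ndarray) -> np.ndarray:
    """Approximate 'extract CIE L*' with RGB working-space luminance
    coefficients set to 1: Y is an equal-weight channel average, then
    the standard CIE L* curve is applied. rgb: (H, W, 3) floats in 0-1."""
    y = rgb.mean(axis=2)                      # relative luminance Y
    eps, kappa = (6 / 29) ** 3, 903.3         # CIE constants
    lstar = np.where(y > eps, 116 * np.cbrt(y) - 16, kappa * y)
    return lstar / 100.0                      # rescale to 0-1
```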

2. & 3. After this, work on the original RGB image. Stretch it (masked stretch, adaptive stretch, histogram stretch, etc.). There is no need to sharpen details or do noise reduction, since detail and noise live mostly in the lower-numbered wavelet layers (i.e. single-pixel up to maybe 8-16 pixel scale), and you will discard these. After stretching, use MultiscaleLinearTransform and double-click on layers 1 - 4 to disable them, but leave the R (residual) layer with the green check mark; then apply it to your image. This will remove all fine detail, including the smallest stars. Next, extract a luminance image as before (CIE L*a*b*, "L" only); this will be used as a mask. You can make the mask work better by applying HistogramTransformation to it, darkening the background and lightening the target. Don't mind if you clip pixel values here; this is just a mask. Then apply this mask to your colour image. Next use the ColorSaturation tool (under Process/IntensityTransformations) or the CurvesTransformation tool (click the S button, for saturation) and increase the colour saturation drastically. After this, if you invert the mask, you can also decrease colour saturation in the background if necessary. Remove the mask when you're done.
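As an aside, "deleting the first four wavelet layers" has a simple interpretation: in an à trous (starlet) transform, removing detail layers 1-4 and keeping the residual is the same as keeping the image smoothed four times with a B3-spline kernel whose tap spacing doubles at each scale. A small scipy sketch of that idea, not PI's actual MultiscaleLinearTransform implementation:

```python
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16   # B3-spline smoothing kernel

def remove_small_scales(img: np.ndarray, layers: int = 4) -> np.ndarray:
    """À trous smoothing: separable convolution with the B3 kernel, doubling
    the tap spacing at each scale. Returning the coarse image after `layers`
    steps is equivalent to deleting wavelet layers 1..layers and keeping the
    residual -- a blur that wipes out fine detail and the smallest stars."""
    smooth = img.astype(float)
    for j in range(layers):
        k = np.zeros(4 * 2**j + 1)                 # kernel "with holes"
        k[:: 2**j] = B3
        smooth = convolve1d(smooth, k, axis=0, mode="reflect")
        smooth = convolve1d(smooth, k, axis=1, mode="reflect")
    return smooth
```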

4. You now have two images to continue with: the luminance image, which contains all the detailed intensity information, and the blurred colour image, which contains all the colour information of your original. Now you need to combine them.

Use LRGBCombination (Process/ColorSpaces). Leave the check mark before the L (luminance) and, in the text box, select the luminance image from step 1 (the one with all the detail). Uncheck the R, G and B checkboxes. Then apply the tool to the blurred colour image by dragging an instance of the process onto it (drag the small triangle in the lower left corner onto your blurred colour image).
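The core of what LRGBCombination does can be sketched like this (a simplification: rescale each pixel's RGB so its luminance matches the processed L, keeping the colour ratios; PI actually works in CIE L*a*b* and offers chrominance noise reduction on top):

```python
import numpy as np

def lrgb_combine(rgb_blur: np.ndarray, lum: np.ndarray) -> np.ndarray:
    """Impose the detailed luminance `lum` (H, W) on the blurred, saturated
    colour image `rgb_blur` (H, W, 3): scale each pixel so its equal-weight
    luminance matches lum, preserving the colour ratios (hue/saturation)."""
    old = rgb_blur.mean(axis=2, keepdims=True)
    scale = lum[..., None] / np.maximum(old, 1e-6)   # avoid divide-by-zero
    return np.clip(rgb_blur * scale, 0.0, 1.0)
```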

These are only the steps you need to take. Try them first, just to get a feel for the process. You can always undo steps, even further back in your history, redo steps, etc. You will need to experiment with your image to get the best out of it.

Here's the process applied to your image (very quick & dirty). The image seemed to be already stretched, so I just extracted the luminance and moved the black point in slightly. Then I applied MultiscaleMedianTransform with a positive bias on layers 2, 3 and 4 (0.04, 0.08 and 0.05 respectively) of the mono luminance image. I didn't bother with noise reduction, deconvolution or even careful stretching.

I used MultiscaleLinearTransform on the RGB image, deleting wavelet layers 1 - 4, then extracted the luminance from this image, increased its contrast and used it as a mask on the colour image. I then applied ColorSaturation twice to the masked colour image, which revealed colour in the outer arms and the core. I removed the mask and applied LRGBCombination, using the first, sharpened luminance image as L. Finally, I resampled and saved as JPEG.

Et voila! All in all, a 5-10 minute process (but, as I said, very Q&D).

M33_final_image_lrgb.jpg

Hope this clarifies things a bit.


Wim, William, thank you for your time and effort; it is really appreciated.

PI seems a very worthwhile expenditure; I see what people mean by the steep learning curve though!

I am looking to go ahead after the trial finishes and will keep practicing these methods.

Julian

:-)

 


William

Not sure what is really going on here.  I have taken darks at 1/20 & 1/30 at ISO 800 and the RGB values are as follows: R:0.0312, G:0.0315 & B:0.0316 - not close to your values of R:0.001, G:0.004, B:0.005.

I then took a saturated flat at 10 secs and my values are R:0.2336, G:0.2335 & B:0.2335, nowhere near 1.000?  These flats were taken at the VLB max value of 255.

Is there something fundamentally wrong with these values, and am I doing my flats wrong?

When I look at my flats in PI they just look grey; is this right?

Screenshot (560).png

 

When I use the STF function, this is what I get:

Screenshot (561).png

 

???

Thanks


I don't know Canon that well, but is there a function in the camera for automatic dark subtraction on long exposures, or automatic noise reduction, something like that?

I have read elsewhere about such a setting, and that it has to be turned off for astro imaging.

The pixel values for the dark frame might be right for your camera, but the saturated frames should read close to the maximum, near 1.000; it is almost as if there is an automatic gain control or something operating in the camera.

When stretched with STF uncoupled, a normal "flat" should not be a uniform grey.

I will post back if I can think of anything; in the meantime, hopefully another Canon user might have an idea?


In your posted image, the grey one with the values R:0.2338, G:0.2335, B:0.2335 - if that is saturated, then of course it will be flat and featureless, as all pixels are at maximum.

So the black level is around 0.03 and the saturated value 0.23, but I have to say that doesn't seem right to me.


In PI, on the left side of the desktop, open the "Format Explorer" tab, click on "DSLR RAW", click on the .cr2 file type, and at the bottom of the screen click the "Edit Preferences" button. Check that the settings are as in the attached image.

 

 


If you take one of your flats exposed with the histogram peak just to the right of the mid point, load it in PI and do an uncoupled STF stretch, how does that look, and what are the maximum and minimum pixel values across the image?

