
New to Planetary Imaging


bosun21


I just attempted to take my first ever image of Jupiter and I have the capture process pretty well understood (I think). I use SharpCap for the capture, then PIPP and AutoStakkert 3, and finally Registax 6. What is the best way to use the wavelets? Is it linear, linked wavelets, or some other setting? I seem to be losing the colour along the way. I am determined to stick at it and progress in this dark art. SW 150 Maksutov, ASI585MC.

 

[attached image: 20_17_57_jupiterLAST_lapl5_ap43FF.jpg]


This is how I use Registax (it varies each time, of course): tweak the wavelets, then increase the denoise when it gets noisy.

If you're just getting into this you might want to use AstroSurface, which seems to be quite popular and is still being updated. Registax is now pretty old, but I stick with it as I'm used to it.

https://astrosurface.com/pageuk.html

[screenshot: Registax wavelet settings]

 


Excellent first capture, Ian. I have moved to Astrosurface for most of my wavelets and denoise. You will pull out more detail. Later on you can throw Image Analyser into the mix. If this capture is from last night, then it will be hard to get anything more out of it as seeing was awful (our seeing was probably similar as we aren't too far away from each other). Don't worry about colour as you can boost the saturation later. I would increase the white point in AS!3 to 65-70%, and do a colour balance in Registax (somehow I don't like the colour balance in Astrosurface). Try to capture when Jupiter is high up; it will help. I am not sure how you focus, but try to do it on the planet.
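If you want to experiment with that later saturation boost outside the GUI tools, here is a minimal sketch using Pillow (the file names are hypothetical and the 1.3 factor is just a starting point to play with; the saturation sliders in Registax/Astrosurface do the same job):

    # Boost saturation as a final tweak; 1.0 leaves the image unchanged.
    from PIL import Image, ImageEnhance

    img = Image.open("jupiter_final.png").convert("RGB")   # hypothetical file name
    boosted = ImageEnhance.Color(img).enhance(1.3)          # >1 = more saturated
    boosted.save("jupiter_final_saturated.png")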


Most people don't understand the color processing part of the workflow.

One of the important steps is to properly encode the color information for display in the sRGB color space. The camera produces linear data, while the sRGB color space is not linear - it uses a gamma of 2.2, and hence a gamma of 2.2 needs to be applied to the data during processing.

This turns murky colors into nice looking ones:

[attached image: color comparison]
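As a minimal sketch of that final encoding step (assuming a 16-bit linear TIFF stack and the numpy/imageio libraries; the file names are hypothetical, and the single 2.2 exponent is only an approximation of the true sRGB curve):

    import numpy as np
    import imageio.v3 as iio

    img = iio.imread("jupiter_stack_linear.tif").astype(np.float64)
    img /= img.max()                          # normalise to 0..1, still linear
    encoded = np.power(img, 1.0 / 2.2)        # gamma-encode linear data for sRGB display
    iio.imwrite("jupiter_stack_srgb.tif", (encoded * 65535).astype(np.uint16))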

On wavelets - I use linear / Gaussian, and the actual slider positions will depend on sampling rate and noise levels, so you need to play around with those.

If you've oversampled considerably, try increasing the initial layer to 2.


6 hours ago, Kon said:


Thanks Kostas. I have actually just downloaded Astrosurface and Image Analyser after doing a bit of snooping on previous threads. Astrosurface looks very intuitive to use and I will load a few SER files tomorrow and play around with it after watching a couple of YouTube videos. Yes, they were taken last night and the seeing was pretty poor, as you say. My Maksutov is f/12 and 5 x 2.9 (pixel size) = 14.5. Does that mean I'm slightly undersampled? I do focus on the planet, using an electric focuser at a really slow speed, as I find any vibration very off-putting when focusing. Do you use Astrosurface from start to finish or just for the final processing? Thanks again.


5 hours ago, vlaiv said:


I think I'm slightly undersampled as the scope is f/12 and 5 x pixel size = 14.5. Or have I got the over/under the wrong way round? Regarding gamma, I have only set that at 0.5 as that's what they used in the tutorials I watched on YouTube. I will try the higher settings next time. Thanks.


8 hours ago, bosun21 said:


I'd say that is correct sampling. It is very hard to capture all the detail at the short end of the scale because of the atmosphere - it bends short wavelengths the most and seeing is worst at 400nm (the blue / violet side of the spectrum). For that reason, I advocate aiming for x4 pixel size - in your case that would be F/11.6 - so you are right about there. I would not worry too much about being "slightly undersampled".
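For anyone who wants the arithmetic behind those x4 / x5 rules of thumb, here is a minimal sketch; the Nyquist-style estimate of critical f-ratio ~ 2 x pixel size / wavelength is the usual reasoning, and which wavelength you design for is a judgement call:

    # Critical f-ratio for a given pixel size (Nyquist: 2 pixels per lambda*F resolution element).
    pixel_um = 2.9                               # ASI585MC pixel size in microns
    for wavelength_um in (0.5, 0.4):             # green vs blue/violet
        f_critical = 2 * pixel_um / wavelength_um
        print(f"lambda = {wavelength_um} um -> critical f-ratio ~ f/{f_critical:.1f}")
    # 0.5 um gives ~f/11.6 (the x4 rule); 0.4 um gives ~f/14.5 (the x5 rule),
    # so an f/12 Mak with 2.9 um pixels sits comfortably between the two.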

Gamma at capture and stacking time should be kept at 1.0 - or neutral.

Only as a final step of adjustment, after you do sharpening and all - you should get it to 2.2.

Data should be kept linear for most of the processing workflow (which means gamma 1.0) - especially during capture (so that YouTube tutorial is wrong in using 0.5 for capture).


Excellent first capture. As @Kon says, I'm increasingly relying on Astrosurface for wavelets, though I do still hit the initial stack out of AS3! with light wavelets and a colour balance in Registax before derotation in WinJupos. After that it's Astrosurface and Image Analyser (IA), though IA isn't really necessary until you've got a handle on Registax and/or Astrosurface.

I've used both linear and Gaussian in Registax in the past, but in the last couple of years I have settled on linear without any sharpen or denoise. These are fairly typical settings prior to WinJupos derotation.

[screenshot: Registax wavelet settings used before WinJupos derotation]

If I was just processing a single stack to conclusion, then I'd hit it a bit harder, say something like this...

[screenshot: stronger Registax wavelet settings for processing a single stack]

Every session requires something different and I find that I have better control with wavelets and denoise in Astrosurface after I've derotated and stacked the TIFFs from a few SERs. Something like this...

[screenshot: Astrosurface wavelet and denoise settings]

Good luck, keep trying and keep asking....


1 hour ago, geoflewis said:


Fantastic images. I have had my head buried in all the software, practicing with each of them. WinJupos is next on the list for me to learn; at what stage do you derotate? I initially thought that you had to do it before stacking, with a SER file, or am I way off the mark? Thanks.


1 hour ago, bosun21 said:

WinJupos is next on the list for me to learn; at what stage do you derotate? I initially thought that you had to do it before stacking, with a SER file, or am I way off the mark? Thanks.

You have a choice. You can derotate the SERs then grade and stack the best frames, or grade and stack the best frames in AS3! and then derotate the resulting TIFFs, which is my preferred method. The advantage of derotating the SERs is that you can capture much longer video files; however, they can become huge and in my experience will roughly double in size after derotation.
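To illustrate why the individual captures have to stay short without derotation, here is a back-of-the-envelope sketch; the ~9.9 h rotation period and ~45" disc are approximate assumptions, and the 1-pixel blur budget is an arbitrary choice:

    import math

    ROTATION_PERIOD_S = 9.9 * 3600     # Jupiter's rotation period, approx.
    DISC_DIAMETER_AS  = 45.0           # apparent diameter in arcsec (varies with date)

    def centre_drift_as_per_s():
        # A feature at the central meridian drifts at roughly pi * D / P arcsec per second.
        return math.pi * DISC_DIAMETER_AS / ROTATION_PERIOD_S

    def max_capture_s(pixel_scale_as, max_blur_px=1.0):
        return max_blur_px * pixel_scale_as / centre_drift_as_per_s()

    # Example: 150 mm f/12 Mak with 2.9 um pixels -> ~0.33 arcsec per pixel
    pixel_scale = 206.265 * 2.9 / (150 * 12)
    print(f"~{max_capture_s(pixel_scale):.0f} s before features smear by about a pixel")

That works out at roughly 80-90 s for this setup, which is why ~1 minute SERs (and WinJupos derotation for anything longer) are a common choice.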

My method goes like this.

  1. Capture a number of short (typically 1m) SER files, which, if using a mono camera, will be one for each RGB filter, or IR, CH4, etc. I'm now using a colour camera (ASI462MC), so I just capture 1m colour SERs, and since this camera is very sensitive in IR, I can also capture IR and CH4 data with it.
  2. Take each SER through AS3! and typically stack the best 15%-25%. Check the analysis graph for the percentage of frames above the 50% line. You want enough frames for a stack that isn't too noisy, without including poor quality frames. Export as TIFF.
  3. Open the TIFF from step 2 in Registax (you could use Astrosurface, but I still use Registax at this stage). Apply mild wavelets, just enough to reveal features without sharpening. The idea here is to give enough detail for WinJupos to lock onto features during the measurement stage. I also use the Registax RGB balance tool here, and if you've not RGB aligned in AS3!, then do that here too. I also check the histogram tool and lower the white point to brighten the image, but not too much, to avoid clipping. Export the file as a TIFF, renaming it so as not to overwrite the original - I usually add R6 and the wavelet settings, e.g. _R6(1-1-1-10-20-30). It's a long file name, I know, but I find it useful when reviewing later.
  4. Open the TIFFs from step 3 in WinJupos. Select the correct planet, then use the measurement tool to accurately align the reference frame to the image. Take time to get this accurate and, if available, use any moons in your image for alignment.
  5. Load the measurement files into the de-rotation of images (or of R/G/B frames if shooting mono). Adjust the LD (limb darkening) factor - I use between 65-85, but it is trial and error. Set the details for observer, file name, orientation (north/south up), file type (e.g. TIFF), etc., select the folder for the image to be saved to, then compile the image.
  6. Open the derotated and stacked TIFF from WinJupos in Astrosurface (or Registax if you prefer) and apply the final wavelets, denoise, etc.

Hope this helps.


57 minutes ago, geoflewis said:


Thanks for taking the time to explain your procedure to me. It's very much appreciated. Do I need to have the frames captured in the WinJupos time format during capture? I think I'll just carry on with my method, including the addition of Astrosurface, and as I become more proficient I'll then add WinJupos and Image Analyser. Otherwise I feel I will get slightly overwhelmed. A sincere thank you, Geof.


17 minutes ago, bosun21 said:

Do I need to have the frames captured in the WinJupos time format during capture?

It's not essential, but it is highly desirable as it makes image measurement in WinJupos so much easier.
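If your capture software can't write that naming for you, a small renaming helper along these lines might do. This is only a sketch: my understanding is that WinJupos reads a leading yyyy-mm-dd-hhmm_t (t = tenths of a minute at the capture mid-point, in UT), so please check the exact convention against the WinJupos documentation. The file name and timings below are hypothetical.

    from datetime import datetime, timedelta
    from pathlib import Path

    def winjupos_name(path: Path, start_utc: datetime, duration_s: float, suffix: str = "") -> Path:
        # Rename a stacked file to "yyyy-mm-dd-hhmm_t..." using the capture mid-point (UT).
        mid = start_utc + timedelta(seconds=duration_s / 2)
        tenths = mid.second // 6                  # tenths of a minute, 0-9
        stem = mid.strftime("%Y-%m-%d-%H%M") + f"_{tenths}{suffix}"
        return path.rename(path.with_name(stem + path.suffix))

    # Example: a 60 s SER stack started at 20:17:57 UT
    # winjupos_name(Path("jupiter_stack.tif"), datetime(2023, 9, 24, 20, 17, 57), 60, "-RGB")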

18 minutes ago, bosun21 said:

I think I'll just carry on with my method, including the addition of Astrosurface, and as I become more proficient I'll then add WinJupos and Image Analyser. Otherwise I feel I will get slightly overwhelmed.

In my experience, it's definitely worth becoming comfortable with each step / tool before plunging in, though some people get to grips with this stuff faster than others (I'm slow 🙂). WinJupos is a fairly straightforward, though somewhat tedious, process, but it can really help extract those last fine details when seeing has been good. Have a try once you're ready; we're here to help if you need it. Image Analyser is very new to me and really only adds very subtle benefits/improvements that most people won't detect, so I'd say it's last on your list of things to learn - others may disagree with me of course.

