Imaging Galaxies - get rid of 400/500 fl OTAs???


iapa


OK, it seems to me that the overall consensus is that a longer FL is the order of the day here.

I will sit down and work out resolutions for my cameras (ASI1600/183/294) with the 10" Quattro (1000mm) alongside the 8" SCT (2032mm) and Ritchey-Chrétien (1600mm) - the latter pair come to c. 1200mm with the relevant reducers, under Bortle 5 skies.

The other day I managed to get this (very heavily cropped) 120-minute stack with a 250mm RedCat 51 and ASI183MC

[attached image]


The point I poorly attempted to make earlier in the thread was to use a longer FL to cover as much of the sensor as you can. Yes, you will be oversampled, but you will have more pixels to work with and less "wasted real estate" on the sensor. Even though you are oversampled, you may be surprised at what is in your image when you start processing.

The old saying "nothing ventured, nothing gained" applies here.  (^8

My $.02

ASI-2600 at 420mm.jpg

ASI-2600 at 950mm.jpg

ASI-2600 at 2000mm.jpg

ASI-533 at 2000mm.jpg

Edited by CCD-Freak

14 minutes ago, CCD-Freak said:

Yes, you will be oversampled, but you will have more pixels to work with and less "wasted real estate" on the sensor. Even though you are oversampled, you may be surprised at what is in your image when you start processing.

You significantly lower SNR if you keep your data oversampled.

If you really want "more pixels to work with", why not simply sample at the proper rate so you don't waste SNR, and then enlarge the image while the data is still linear, before you start to process? The result will be the same - "more pixels" to work with and the same level of detail - except you won't lose SNR by oversampling.
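A minimal sketch of that workflow in Python, assuming the stacked frame is linear (unstretched) FITS data; the file names and the 2x enlargement factor are placeholders, not a recommendation:

# Sample at the proper rate, then enlarge while the data is still linear.
import numpy as np
from astropy.io import fits
from scipy.ndimage import zoom

# Load a properly sampled (e.g. ~2"/px) linear stack
data = fits.getdata("stack_2arcsec_px.fits").astype(np.float64)

# Enlarge 2x with a smooth interpolator before any stretching/processing.
# This adds pixels without the SNR penalty of capturing oversampled data.
enlarged = zoom(data, 2, order=3)  # cubic spline interpolation

fits.writeto("stack_enlarged_2x.fits", enlarged.astype(np.float32), overwrite=True)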


2 hours ago, CCD-Freak said:

The point I poorly attempted to make earlier in the thread was to use a longer FL to cover as much of the sensor as you can. Yes, you will be oversampled, but you will have more pixels to work with and less "wasted real estate" on the sensor. Even though you are oversampled, you may be surprised at what is in your image when you start processing.

The old saying "nothing ventured, nothing gained" applies here.  (^8

My $.02


You're assuming, here, that 'wasted real estate' applies only to sky outside the object which you will crop and discard. I think this is erroneous: it is also a waste of real estate to use four pixels to collect the same information as might be perfectly well collected on two. Worse, not only does it waste real estate but it adds noise.

Olly

Edited by ollypenrice

On 14/03/2022 at 18:00, iapa said:

As an imager with a preference for imaging galaxies and few results to date, I am reconsidering ownership of my 400/500 mm OTAs (RASA 8", Equinox 80ED Pro, Esprit ED80 Pro) and just using the 8" SCT & RC along with the 8" & 10" 1000mm reflectors.

Looking for advice as to whether this is a reasonable position to take.

A SW 200PDS and a decent CC; job's a good 'un.

Or get greedy like I did when you see a 300PDS on eBay and end up with a monster. But tbh, I doubt I get much benefit over a 200 most of the time - unless the goal is to ensure I have a telescope large enough to fit into at all times.

Not sure I'd bother with the SCT. I've got a C9.25 but tbh I've found my newts better for galaxies so far (750, 1000 and 1500mm).

I do agree with not getting too bogged down with the theory (sorry Vlaiv). Also, as I think Olly mentioned, consider shooting mono - make every pixel count rather than losing resolution to an OSC camera?

stu


32 minutes ago, CCD-Freak said:

So what FL would you suggest for an ASI-2600MC with 3.76µm pixels? Are you trying to get close to 2" per pixel?

 

Depends on what you are after.

If you want to do wider-field imaging, or use a small scope of up to say 80-100mm, there is nothing wrong with aiming at 2"/px. In fact, with such a small telescope (80mm for example) you'll be hard pressed to go finer than 2"/px.

Want to go to 1.5"/px? Then get a 6"-8" telescope.

I'd personally aim at, say, a 1000-1100mm scope with an OSC camera and skip interpolation debayering.

An 8" F/5 Newtonian is a good combination for 1.5"/px - just make sure you get a good coma corrector that does not introduce spherical aberration, as that will lower the optical resolution of the telescope and make it hard to actually achieve 1.5"/px.

Want to attempt imaging close to 1.2"/px? First, make sure your mount is up to it. Get one that guides below 0.5" RMS - preferably in the 0.2-0.3" RMS range. Also, make sure you get 1.5" FWHM seeing nights during the year. Get a 12" RC or 11" EdgeHD, skip interpolation debayering (super pixel mode) and bin the resulting data 2x2.

Want to attempt 1"/px? - Don't. :D
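To put numbers on those suggestions, here is a small Python sketch of the underlying arithmetic (image scale = pixel size / focal length × 206.265); the example values are mine, purely for illustration:

# Image scale in arcseconds per pixel, and the focal length needed for a target scale.
# Pixel sizes in microns, focal lengths in mm.
def image_scale(pixel_um, focal_mm, binning=1):
    return pixel_um * binning / focal_mm * 206.265

def focal_for_scale(pixel_um, target_arcsec_px, binning=1):
    return pixel_um * binning * 206.265 / target_arcsec_px

# ASI2600 (3.76 um pixels): focal length for 2"/px and 1.5"/px without binning
print(round(focal_for_scale(3.76, 2.0)))            # ~388 mm
print(round(focal_for_scale(3.76, 1.5)))            # ~517 mm
# 8" f/5 Newtonian (1000 mm) with the same camera, treated as 2x2 (super pixel):
print(round(image_scale(3.76, 1000, binning=2), 2)) # ~1.55 "/px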

 

 

 


2 hours ago, powerlord said:

Sorry if I'm being thick, but if you skip interpolation debayering surely it'll just be a coloured mess?

Sorry, my fault, I wasn't clear - this is what I meant:

- skip *interpolation debayering*

and not

- *skip interpolation* debayering

Or in plain English - don't debayer using an interpolation method, as that artificially raises the sampling rate. Use a debayering method that does not do this - like super pixel or, preferably, channel-split debayering.
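For anyone curious what channel-split debayering amounts to, here is a rough Python sketch (my own illustration, assuming an RGGB Bayer pattern - adjust the offsets for other patterns). Each colour plane comes out at half the resolution in X and Y, with no invented pixel values:

# Channel-split "debayering": slice a raw Bayer frame into its four CFA planes
# instead of interpolating the missing colours. Assumes an RGGB pattern.
import numpy as np

def split_cfa(raw):
    r  = raw[0::2, 0::2]   # red
    g1 = raw[0::2, 1::2]   # green 1
    g2 = raw[1::2, 0::2]   # green 2
    b  = raw[1::2, 1::2]   # blue
    return r, g1, g2, b

# Example with a dummy 8x8 Bayer frame
raw = np.arange(64, dtype=np.float32).reshape(8, 8)
r, g1, g2, b = split_cfa(raw)
print(r.shape)  # (4, 4) - pixels are only regrouped, never made up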


13 hours ago, powerlord said:

SW 200PDS and a decent CC; job's a good 'un.

+1 - and lest we forget, the OP has something even better: his 10" f4.

We're DSLR only. 1000mm is the maximum we've found usable on a regular basis, although at this time of year the simplicity of our 6" f8s - without the need for any corrector - makes them difficult to resist at 1200mm.

Throw as many seconds per pixel or resolving inches at us as you like - interesting though that may be on cloudy nights - our just-get-out-and-do-it approach suggests that the seeing is the biggest (only?) factor in determining how good an image you'll get.

What we certainly do not find is that enlarging an image of M101 taken with, say, a 130PDS gets anywhere near the quality of the same target with a 200PDS [1].

Cheers

[1] I've just been nudged. No, it's true, we don't use a 200PDS; we use a 203mm f5 from Taiwan. However, I don't think the difference between a 200mm and a 203mm mirror would make any difference.

Edited by alacant

16 hours ago, CCD-Freak said:

So what FL would you suggest for an ASI-2600MC with 3.76µm pixels? Are you trying to get close to 2" per pixel?

 

I find that images shot at 0.9"PP look fine at 100%. The ones I used to shoot at 0.6"PP did not. When I resampled the 0.6"PP images down to the size of the 0.9"PP ones they looked equivalent. I could find no consistent improvement in resolution between the two, though seeing during the runs was not always equivalent so I think occasions when one beat the other were due to that. I have never attempted to find out the image size at which I genuinely start to lose final resolution but it will be higher than 0.9"PP so I think even that is oversampled. (My Mesu mounts run reliably at about 0.3" RMS.)

If you're interested in this, find out the maximum screen size your potentially oversampled image will sustain. (That's the maximum size at which you feel it looks good. Yes, subjective, but our aim in the end is to look at our pictures!) If that turns out to be 66% then there was no point in sampling at your present scale. You'd get the same or better from a system sampling of 66% of that.

Amidst all the theory, there's the practice. I don't believe a 10-inch SCT will give stars as tight as a 5-inch refractor. The theory can say what it likes, but I find SCTs give bigger, softer stars.

Olly


2 hours ago, ollypenrice said:

Amidst all the theory, there's the practice. I don't believe a 10-inch SCT will give stars as tight as a 5-inch refractor. The theory can say what it likes, but I find SCTs give bigger, softer stars.

Olly

That's certainly been my experience. My SCT, a C9.25 used with an f/6.3 reducer, gives me about 1300mm. But the stars I get with my 200 (1000mm) and 300 (1500mm) are definitely sharper. Its only benefit is portability (well, and using it at native FL on planets and stuff).


7 hours ago, vlaiv said:

Sorry, my fault, I wasn't clear - this is what I meant:

- skip *interpolation debayering*

and not

- *skip interpolation* debayering

Or in plain English - don't debayer using an interpolation method, as that artificially raises the sampling rate. Use a debayering method that does not do this - like super pixel or, preferably, channel-split debayering.

Ah ok - I usually use Siril for debayering, which does not support super pixel (I assume that is drizzling?) or channel split, but I note they suggest that their RCD is pretty good and is the default:

https://siril.org/2020/09/whats-new-in-siril-0.99.4/


21 hours ago, CCD-Freak said:

So what FL would you suggest for an ASI-2600MC with 3.76µm pixels? Are you trying to get close to 2" per pixel?

 

 

"/pixel = (pixel_size_um × binning / combined_focal_length_mm) × 206.3

You can swap this around as necessary.

Combined focal length includes reducers.
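For example (my arithmetic, not the poster's), rearranged for focal length: combined_focal_length_mm = pixel_size_um × binning × 206.3 / ("/pixel). For the ASI-2600's 3.76 µm pixels at a 2"/px target, unbinned, that gives 3.76 × 206.3 / 2 ≈ 388 mm.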

 


37 minutes ago, powerlord said:

Ah ok - I usually use Siril for debayering, which does not support super pixel (I assume that is drizzling?) or channel split, but I note they suggest that their RCD is pretty good and is the default:

https://siril.org/2020/09/whats-new-in-siril-0.99.4/

You can use the "seqsplit_CFA" command to create split frames from the loaded sequence's subs. It only works if the channels are still intact and debayering has not taken place. For 100 subs you will get 400 subs, 100 with each of the prefixes CFA_0 to CFA_3. Two of these sets will be green, one blue and one red; you have to figure out which number is which yourself, but for me CFA_0 and CFA_3 are green, 1 is red and 2 is blue.

I have a script for Siril that calibrates my data and then outputs split monochrome frames. It takes a few minutes to run on a batch of a few hundred subs. Subsequent stacking is much faster since it's now mono data at a quarter of the megapixels per frame.

Edited by ONIKKINEN

2 minutes ago, powerlord said:

Thanks, good to know. Have you any examples that show a direct comparison with RCD - is it better and, if so, by how much?

stu

I did run some tests on SNR and star FWHM but got some results I can't explain. The split-stacked image had the best SNR, but worse star sizes compared to stacking with RCD interpolation and then binning the stack 2x2. Not sure why that is; maybe there's someone who could answer that, but it's not me. Hopefully I remember this right, but I'll have to check the data I got when I get home.

I'll post what I found in a few hours, if I ever get off work, that is.


7 minutes ago, powerlord said:

Thanks, good to know. Have you any examples that show a direct comparison with RCD - is it better and, if so, by how much?

stu

RCD is another interpolation technique.

With OSC sensors, every other pixel in X and Y is red; similarly, every other pixel is blue, and you can treat green as green1 and green2 (although they are the same colour) - in that regard they too are spaced a pixel apart.

[attached image: Bayer matrix diagram]

Interpolation debayering "makes up" pixel values - however "educated" the guess is, it is still a calculated value and can't replace genuine detail.

For example, in the first row the second pixel is green. In order to add red and blue to it, the two adjacent R pixels can be averaged, and since this is close to the edge, only one blue pixel (the one below) can be used to supply that colour.

In any case, however you slice it, interpolation debayering is like zooming in to 200% in software - the image gets bigger but no extra detail is resolved at that scale.

There are other options to consider:

1. Use super pixel mode or, as @ONIKKINEN mentioned, split the Bayer matrix into its colour components - which produces results as if one were using a "mono + filter" camera with half the resolution in X and Y (see the sketch below).

2. Use Bayer drizzle - a proper implementation of Bayer drizzle will work, but at the expense of SNR. The data needs to be dithered properly, and it will almost reconstruct the full resolution that a mono version of the sensor would have.
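A rough sketch of the super-pixel approach from point 1, again my own illustration assuming an RGGB pattern: each 2x2 quad becomes one RGB pixel, with the two greens averaged, so nothing is interpolated and the result is half resolution in X and Y.

# Super-pixel "debayering": each 2x2 RGGB quad becomes a single RGB pixel.
# The two greens are averaged; no pixel values are invented. Assumes RGGB.
import numpy as np

def super_pixel(raw):
    r = raw[0::2, 0::2]
    g = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])  # shape (H/2, W/2, 3)

rgb = super_pixel(np.random.rand(8, 8).astype(np.float32))
print(rgb.shape)  # (4, 4, 3)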

 


8 minutes ago, ONIKKINEN said:

I did run some tests on SNR and star FWHM but got some results I can't explain. The split-stacked image had the best SNR, but worse star sizes compared to stacking with RCD interpolation and then binning the stack 2x2. Not sure why that is; maybe there's someone who could answer that, but it's not me. Hopefully I remember this right, but I'll have to check the data I got when I get home.

I'll post what I found in a few hours, if I ever get off work, that is.

Worse star sizes in what respect - FWHM, or the visual size of stars after stretching?

When you split the Bayer matrix and then stack the resulting frames, the choice of alignment interpolation method is very important. Using, say, bilinear interpolation for sub alignment has less impact on an oversampled image, so a high-quality interpolation needs to be used for split data.

In any case, compare star sizes with a robust FWHM estimation method. Some FWHM algorithms have issues with the actual FWHM measurement because they rely on background noise estimation. I think AstroImageJ has a good FWHM measurement tool - maybe use that one to compare star sizes.


2 hours ago, vlaiv said:

Worse star sizes in what respect - FWHM, or the visual size of stars after stretching?

When you split the Bayer matrix and then stack the resulting frames, the choice of alignment interpolation method is very important. Using, say, bilinear interpolation for sub alignment has less impact on an oversampled image, so a high-quality interpolation needs to be used for split data.

In any case, compare star sizes with a robust FWHM estimation method. Some FWHM algorithms have issues with the actual FWHM measurement because they rely on background noise estimation. I think AstroImageJ has a good FWHM measurement tool - maybe use that one to compare star sizes.

I measured FWHM using the Siril photometry tool. I have not tested whether or not I can spot the difference visually, so I'm not sure if the difference matters; I'm also not sure how to do an apples-to-apples comparison in terms of stretching equally. Both images were from the same 120 calibrated subs, they just went through different routes to get to the final RGB image. I am pretty sure I used the default interpolation method for registering: Pixel area relation.

But here is what I found using the Siril photometry tool around a star and running the "noise estimation" tool in Siril (I think it's just standard deviation):

RCD interpolated, then binned 2x2 in ASTAP:

Full Width Half Maximum:
        FWHMx=3.87"
        FWHMy=3.68"

RMSE:
        RMSE=5.577e-04
17:06:50: Background noise value (channel: #0): 2.416 (3.687e-05)
17:06:50: Background noise value (channel: #1): 2.764 (4.217e-05)
17:06:50: Background noise value (channel: #2): 2.363 (3.605e-05)

--------------------------------------

Bayer split, then recomposited as RGB:

Full Width Half Maximum:
        FWHMx=4.89"
        FWHMy=4.65"

RMSE:
        RMSE=2.526e-04

19:24:44: Background noise value (channel: #0): 1.351 (2.062e-05)
19:24:44: Background noise value (channel: #1): 1.555 (2.372e-05)
19:24:44: Background noise value (channel: #2): 2.441 (3.725e-05)

 

Running the multi-aperture tool in AstroImageJ around the same star I used in Siril gives me:

RCD and bin: FWHM 2.64px

[attached image: seeing profile, RCD + 2x2 bin]

Bayer split: FWHM 2.91px

[attached image: seeing profile, Bayer split]

Again there is a difference in size, but this time the difference is less severe. There is also a difference in brightness, apparently. Wonder what went wrong 😬.

I was going to make a thread investigating different binning and resampling methods for OSC and how they differ, but I realised I don't understand the data I got from my tests. I understand some points, like star size and background noise, but I don't understand why they would be so different, so I just kind of forgot about it.


20 minutes ago, ONIKKINEN said:

I am pretty sure I used the default interpolation method for registering: Pixel area relation.

If you don't mind doing another go, try Lanczos interpolation for registration, just to see what sort of difference you get in star FWHM.

I suspect that pixel area relation functions pretty much like linear interpolation and will introduce significant blur and widen the FWHM - more so when the data is closer to proper sampling (with oversampling you already have blurring at the pixel level, so additional blurring does not make much of a difference, but if the data is sharp it will be noticed).
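The effect is easy to demonstrate on synthetic data. Here is a small Python sketch (my own illustration, not Siril's code path) that shifts a Gaussian "star" by half a pixel with a low-order versus a high-order spline interpolator - standing in for bilinear versus a Lanczos-like kernel - and compares the resulting widths:

# Effect of the registration interpolator on star FWHM.
import numpy as np
from scipy.ndimage import shift

def gaussian_star(size=65, sigma=1.5):
    y, x = np.mgrid[:size, :size] - size // 2
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

def fwhm(img):
    # Width from the second moment of the flux distribution (FWHM = 2.355 sigma for a Gaussian)
    total = img.sum()
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    cy, cx = (y * img).sum() / total, (x * img).sum() / total
    var = (((x - cx)**2 + (y - cy)**2) * img).sum() / total / 2
    return 2.355 * np.sqrt(var)

star = gaussian_star()
for order in (1, 5):  # 1 ~ bilinear, 5 ~ high-quality spline
    registered = shift(star, (0.5, 0.5), order=order)
    print(order, round(fwhm(registered), 3))
# The low-order shift comes out measurably broader than the high-order one.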


1 hour ago, vlaiv said:

If you don't mind doing another go, try Lanczos interpolation for registration, just to see what sort of difference you get in star FWHM.

I suspect that pixel area relation functions pretty much like linear interpolation and will introduce significant blur and widen the FWHM - more so when the data is closer to proper sampling (with oversampling you already have blurring at the pixel level, so additional blurring does not make much of a difference, but if the data is sharp it will be noticed).

Well, I did that, and now the FWHM is smaller in the split version compared to the RCD+BIN2 version. Curiously, noise as standard deviation is lower in the RCD+BIN version, but the RMSE (whatever that means) goes the other way. Interestingly, noise measurements of both kinds are much higher than with pixel area relation - so does this mean you are right, and some kind of blurring (acting as denoising) takes place where it's not wanted?

RCD debayer + BIN2:

Full Width Half Maximum:
        FWHMx=3.83"
        FWHMy=3.50"

RMSE:
        RMSE=5.721e-04
02:06:39: Background noise value (channel: #0): 2.896 (4.419e-05)
02:06:39: Background noise value (channel: #1): 3.270 (4.990e-05)
02:06:39: Background noise value (channel: #2): 2.845 (4.342e-05)

------------------------------------

Bayer split:

Full Width Half Maximum:
        FWHMx=3.71"
        FWHMy=3.50"

RMSE:
        RMSE=5.690e-04
01:57:23: Background noise value (channel: #0): 4.011 (6.120e-05)
01:57:23: Background noise value (channel: #1): 3.725 (5.683e-05)
01:57:23: Background noise value (channel: #2): 3.691 (5.633e-05)

-------------------------------------

This time AstroImageJ reports both as 2.60 pixels in FWHM, which is close enough to the Siril measurements.

I avoided Lanczos-4 before because it left some cold-pixel artifacts around some stars, but I'll just deal with that with cosmetic correction or something, since a drop of 1" in FWHM is pretty significant.


11 hours ago, ONIKKINEN said:

Interestingly, noise measurements of both kinds are much higher than with pixel area relation - so does this mean you are right, and some kind of blurring (acting as denoising) takes place where it's not wanted?

I think that pixel area relation (not sure exactly what it is, but by the sound of the name) is pretty much the same as linear interpolation as long as there is no significant rotation, and even with rotation it won't be as good as other interpolation methods.

The choice of interpolation method is very important, as it determines pixel-to-pixel correlation, which is a balance between sharpness and the denoising that comes from blur.

I personally prefer sharpness, as the denoising from that slight blur acts only in the high-frequency domain, so low-frequency noise remains unaltered.

In the end, I think this experiment shows something most people don't really think about - the resolution of an OSC camera is indeed half of what the pixel size suggests, and interpolation debayering is the same as taking a mono image and enlarging it to 200% in software: unnecessary and devoid of extra detail. This is why I always prefer the split / super pixel approach.


As an FYI: an AVX controlled by an ASIAir Pro, with my RedCat 51 and an ASI183, Slideguide focuser and ASI174 guide cam.

So, 1.98"/pixel (with about 2" guiding RMS) and a guide ratio of 5.1 - somewhat undersampled.

I'm currently looking at a guiding RMS of <1"

Not sure I believe that on an AVX.

If I can match, or better, that on the CGX-L with the Quattro CF 10" f4, I suppose I should get something reasonable.

I'll be searching through the quagmire known as my back garden for the pier in the next couple of days.

