
March 30, 2021: Jellyfish in HOO (WIP)



Managed to get roughly an hour of usable H-alpha and just over 2 hours of O-III on the Jellyfish, and combined this with the 2 hours 2 minutes of H-alpha from the 29th. Stacked and combined the lot in APP, and boy, is that O-III faint (not that the H-alpha is that bright). I really should get data from a darker site. Still, I am not totally unhappy with the result after some tweaks in Gimp.

[attached image: IC-443-HOO-image-2.jpg]

Much more data is needed. Questions for tonight (forecast to be clear again): should I try for more O-III, or beef up the H-alpha signal first? Is there any point in going for S-II?

 


What scope were you using, gain settings and what was your exposure length?

ASI183 has very small pixels, and that means you need quite a bit longer exposures in narrowband than is customary for CMOS sensors.


5 minutes ago, vlaiv said:

What scope were you using, gain settings and what was your exposure length?

ASI183 has very small pixels, and that means you need quite a bit longer exposures in narrowband than is customary for CMOS sensors.

I was working at F/4.8, using my APM 80 mm F/6 with a focal reducer. Given I haven't got the mount on speaking terms with the computer, I just use tracking, and this limits me to 60 s subs. I just stack loads of them (this is only 3 hours of H-alpha and 2 of O-III). I use a fairly high gain (300), which lowers readout noise but of course reduces dynamic range. I am thinking of using 2x2 binning on the O-III, as it is so faint. I hope to grab another 3 hours total tonight. My main question is how to use that time to optimally improve the image in the short term. Do I go all-out for one of the bands, or do I split the time between them? Would adding S-II help, or even just RGB (2x2 binned) for an hour each to get the star colours right? Of course I want to gather a wagonload of data in all bands, but the question is where to start.


43 minutes ago, michael.h.f.wilkinson said:

Given I haven't got the mount on speaking terms with the computer, I just use tracking, and this limits me to 60 s subs.

I think that you want to bump that up by a factor of x10. If you can, fix the link between the mount and the computer.

Your sampling rate is 1.29"/px - very fine sampling - and although each pixel only carries ~1.65 e (gain 300) of read noise, you are nowhere near swamping it with background sky noise in narrowband.

I ran some calculations, and these are the stats that I got. I used a 7 nm filter, 65% QE at 656 nm, and roughly SQM 19.2 (from the lightpollution.info map at your location - not sure if I have the correct info there).

[attached image: SNR table for 60 s, 300 s, 600 s and 900 s subs]

These are SNR values for a very faint target (around mag 24 per arc second squared) after two hours of total exposure, using 60 s, 300 s, 600 s and 900 s subs.

The transition between 600 s and 900 s shows minimal improvement (about 2-3%), but there is a 55% improvement in SNR between 60 s and 600 s. That is equivalent to stacking an additional 140% of subs (the square root of 2.4 is ~1.55).
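The same kind of calculation can be sketched in a few lines of Python. The sky and target rates below are made-up placeholders (not the SQM 19.2 figures above), so only the relative improvement between sub lengths is meaningful, not the absolute SNR values:

```python
# Sketch of the sub-length trade-off: total SNR of a fixed 2-hour session
# split into subs of length t. Rates are illustrative assumptions.
READ_NOISE = 1.65   # e, ASI183 at gain 300
SKY_RATE = 0.1      # e/px/s, assumed sky background
TARGET_RATE = 0.01  # e/px/s, assumed faint-target signal
TOTAL_TIME = 7200   # s (2 hours)

def stacked_snr(sub_length):
    """SNR of the stacked result when TOTAL_TIME is split into subs."""
    n_subs = TOTAL_TIME / sub_length
    signal = TARGET_RATE * TOTAL_TIME
    shot_var = (TARGET_RATE + SKY_RATE) * TOTAL_TIME  # photon noise variance
    read_var = n_subs * READ_NOISE ** 2               # read noise, once per sub
    return signal / (shot_var + read_var) ** 0.5

for t in (60, 300, 600, 900):
    print(f"{t:>4} s subs: SNR = {stacked_snr(t):.2f}")
```

With these placeholder rates the curve flattens out between 600 s and 900 s, which is the diminishing-returns behaviour described above.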

Do bin all of your data x2, since 1.29"/px is simply too fine for an 80 mm scope.

You can use a tri-band filter as luminance if you want to further improve things: use tri-band + Ha, OIII, SII as you would L + R, G, B - most of the time spent in tri-band and some time spent getting the colour. You can also spend a minimal amount of time on RGB for star colours - that makes a nice addition.

But I would say the primary thing is exposure length.


44 minutes ago, vlaiv said:

I think that you want to bump that up by a factor of x10. [...]

Sky background completely dominates the read noise and dark current here, even in narrowband (they are 12 nm filters, BTW), especially around full moon.


6 minutes ago, michael.h.f.wilkinson said:

Sky background completely dominates the read noise and dark current here, even in narrowband (they are 12 nm filters, BTW), especially around full moon.

Did you actually measure sky background in your subs?

Should be quite easy to do: take a calibrated sub, measure the background ADU, divide by 16, and then multiply by the e/ADU value for gain 300.

If you are to swamp the read noise, you should get a value of 68 e or higher. We can reverse things and go the other way: gain 300 should be 0.1259 e/ADU, so 68 e would be 540 ADU, and that x16 (as the ASI183 has a 12-bit ADC) gives 8642 ADU.
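The conversion described above can be sketched as two small helpers. The e/ADU figure is the one quoted for the ASI183 at gain 300; check your own FITS header for the exact value:

```python
# ADU <-> electron conversion for the ASI183 at gain 300.
E_PER_ADU = 0.1259  # e/ADU at gain 300 (verify against your FITS header)
BIT_SHIFT = 16      # 12-bit ADC data left-shifted into 16-bit FITS values

def adu16_to_electrons(adu16):
    """Convert a 16-bit file value to electrons."""
    return adu16 / BIT_SHIFT * E_PER_ADU

def electrons_to_adu16(electrons):
    """Convert electrons back to a 16-bit file value."""
    return electrons / E_PER_ADU * BIT_SHIFT

# Background level needed to swamp read noise (~68 e):
print(round(electrons_to_adu16(68)))
```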

Is your background level in calibrated sub 8642?


11 minutes ago, vlaiv said:

Did you actually measure sky background in your subs? [...]

Haven't done so yet; I may do so in the future. The key issue here is that the mount carrying the lighter scope is an EQ3-2 without a goto system (it does have an ST-4 guide port), for which I cannot find any ASCOM driver. The GP-DX mount has an RS232 interface, for which I have a USB-RS232 cable and installed drivers, but Windows 10 does not allow ASCOM to use the COM port, and the permission settings are nowhere to be found. Thus, for the time being, I am stuck with tracking. Sometimes I get the polar alignment close enough that I can go to 120 s, but beyond that is unknown territory for now.

 


I did some calculations @vlaiv, and I do not think you are right. I am looking strictly at the noise in terms of electrons, so before any scaling by the ADC. The amplification by the gain setting does not alter the ratios of the noise contributions; it is just a linear multiplier.

The read noise of 1.65 electrons corresponds to the noise caused by 2.7225 thermal electrons (using Poissonian noise statistics). The darkest sub, calibrated for dark current and flat-fielded, has a background of about 4100 ADU counts (similar without flats), or roughly 256 electrons (using your division by 16). The latter (which includes the read noise) has a Poissonian noise contribution of 16, or ten times the read noise. If we subtract the variance due to read noise from the total variance and take the square root of the result, the noise from photons and dark current (truly low) is 15.915. Thus read noise is negligible.

Let us stick with the 256 electron/min signal. If we take one 600 s sub, we have a total noise of sqrt(2560 + 2.7225) = 50.62, which means an S/N of 50.57, whereas if we take ten 60 s subs we have sqrt(2560 + 10 x 2.7225) = 50.86, yielding an S/N of 50.33.

Regarding the sampling, Nyquist suggests the 2.4 micron pixels are optimal at F/12, so this 80 mm is undersampling by a factor of 2 at F/6, and by 2.5 at F/4.8.


A very nice Jellyfish. I don't think I've ever seen concentric repeating halos around a bright star like that before - very interesting.

Considering the moon phase, I'd be tempted to just carry on with the Ha collection and come back to it for the O3 under more favourable conditions?


6 minutes ago, CraigT82 said:

A very nice Jellyfish.  I don't think I've ever seen concentric repeating halos around the bright star like that before, very interesting.

Considering the moon phase I'd be tempted to just carry on with the Ha collection and come back to it for O3 under more favourable conditions? 

Thanks! The concentric rings are the result of different halos through different filters, combined with a lot of stretching. I suppose I must remove the stars before stretching; I need to tackle those halos. What I did yesterday was first do a stint in H-alpha before it was perfectly dark, then switch to O-III while the moon had not yet risen, and the moment the moon was high enough to be a nuisance, switch back to H-alpha. I was thinking of something similar tonight, as the moon rises later. A bit of a moot point now, given that clouds have appeared and show no sign of buzzing off. Maybe I will get a chance tomorrow. We will see.


8 minutes ago, michael.h.f.wilkinson said:

The read noise of 1.65 electrons corresponds to the noise caused by 2.7225 thermal electrons (using Poissonian noise statistics).

Not sure why you bring thermal noise into this. Dark current should be negligible at the -20°C that I assume you are cooling to.

According to ZWO's published data, dark current at -15°C is 0.00298 e/px/s. In a 60 s exposure that amounts to 0.1788 e, or ~0.4228 e of dark current noise - about x4 less than the read noise. You would need subs of at least 16 minutes for dark current noise to match read noise, so you can't use dark current noise to swamp read noise.
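The dark-current arithmetic above can be checked in a few lines; the spec value is the ZWO figure quoted in the post, and the break-even sub length comes out around 15 minutes:

```python
# Check of the dark-current figures (ZWO spec for the ASI183 at -15°C).
DARK_CURRENT = 0.00298  # e/px/s
READ_NOISE = 1.65       # e at gain 300

dark_e = DARK_CURRENT * 60              # dark signal in a 60 s sub
dark_noise = dark_e ** 0.5              # Poisson noise of that signal
t_match = READ_NOISE ** 2 / DARK_CURRENT  # sub length where dark noise = read noise

print(f"dark signal {dark_e:.4f} e, noise {dark_noise:.4f} e, "
      f"match at {t_match / 60:.1f} min")
```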

10 minutes ago, michael.h.f.wilkinson said:

I am looking strictly at the noise in terms of electrons, so before any scaling by the ADC. The amplification by the gain setting does not alter the ratios of the noise contributions; it is just a linear multiplier.

You are right - but that means the electron counts are divided by the e/ADU value to get ADU counts, and if you want to get from ADU counts back to electrons you need to do the inverse: multiply by e/ADU.

11 minutes ago, michael.h.f.wilkinson said:

The darkest sub, calibrated for dark current and flat-fielded has a background of about 4100 ADU counts (also without flats), or roughly 256 electrons (using your division by 16). The latter (which includes the read noise) has a Poissonian noise contribution of 16, or ten times the read noise.

Here is where you are wrong. That would be correct if you had used unity gain, with an e/ADU value of one. However, you used a gain setting of 300, which has ~0.1259 e/ADU. Inspect your FITS header for the exact value; it should be reported.

From there the math is straightforward. If you have a 4100 ADU count - and these values are shifted by 4 bits because we are using a 12-bit ADC - the actual value is ~256 ADU, and when we multiply that by e/ADU we get ~32.26 e as the sky level in a single exposure. The associated Poisson noise is the square root of that, or ~5.68. That is about x3.44 the read noise at gain 300. That is actually not that bad - but I'd still go with a factor of x5, or twice as long an exposure.

A sky signal of 0.54 e/px/s equates to about mag 16.3 per arc second squared. That is a seriously poor sky - probably SQM 18 plus moon.


Perhaps you could experiment with blending some Ha into the O3 channel for the HOO. Not sure what you use, but APP is excellent for doing this quickly, with sliders for each channel to control the blending/combination.


20 minutes ago, michael.h.f.wilkinson said:

Regarding the sampling, Nyquist suggests the 2.4 micron pixels are optimal at F/12, so this 80 mm is undersampling by a factor 2 at F/6, and by 2.5 at F/4.8.

Nyquist for a 2.4 µm pixel and the Ha wavelength is F/7.32, but you can't use that, as you are not in outer space: there is the atmosphere, and there is mount tracking performance.

Measure the FWHM in your subs in arc seconds and divide that by 1.6 - that will give you your optimal sampling rate.


6 minutes ago, vlaiv said:

Not sure why you bring thermal noise into this. [...]

I see my error in forgetting the ADU/e correction on the 256. Note, however, that when I talked about the 1.65 e read noise I said it was equivalent to 2.7225 thermal electrons; I did not say the thermal noise was anywhere near that high (in fact, I stated that it is very low indeed). However, you should not compare standard deviations of noise sources linearly, as standard deviations do not add linearly; variances do. The variance in the signal is ten times that of the read noise, so the read noise makes up only about 10% of the variance in the signal. Using the value of 0.11447 e/ADU reported in the FITS header, I get a signal = variance = 29.3, or a standard deviation of 5.41, which includes read noise. Subtracting the variance of the read noise, we get a sky variance of 26.58, or a noise level of 5.16. So yes, there is a degradation, but not by a third. Going to the example of one 600 s sub vs ten 60 s subs, we get a noise of 16.39, or S/N of 16.22, in the first case, and a noise level of 17.12, or S/N of 15.53, in the latter - just over 4% worse S/N, requiring about 9% more data (not ideal, but doable, and nothing like the 140% more subs you mentioned). Doubling the exposure time doesn't gain much, and it increases issues like tracking errors, requiring me to dump a lot of subs, which has a much greater impact on the total amount of useful data captured.
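For the record, the 600 s vs 60 s comparison at the corrected sky level (26.58 e per 60 s sub, as measured above) can be replayed in a few lines:

```python
# One 600 s sub vs ten 60 s subs at the measured sky level.
SKY_PER_SUB = 26.58   # e of sky signal per 60 s sub (from the calibrated sub)
READ_VAR = 1.65 ** 2  # read noise variance, e^2

signal = SKY_PER_SUB * 10                   # 10 minutes of total sky signal
noise_600 = (signal + READ_VAR) ** 0.5      # one 600 s sub: read noise once
noise_60 = (signal + 10 * READ_VAR) ** 0.5  # ten 60 s subs: read noise ten times
snr_600 = signal / noise_600
snr_60 = signal / noise_60
print(f"S/N: {snr_600:.2f} (1 x 600 s) vs {snr_60:.2f} (10 x 60 s)")
```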

Regarding sky quality, the slightest haze immediately causes trouble by reflecting city lights. On really good nights I get nice skies, but humidity or dust in the air totally wrecks the views. I need to get to dark-sky sites, but the curfew doesn't allow it, and I need to get guiding sorted, but MS Windows doesn't allow access to the COM port. I have a third mount I am setting up drivers for, but I haven't had a long enough spell of decent nights to want to waste any on experiments. I just set up stuff that works reliably.

 

3 minutes ago, vlaiv said:

Nyquist for 2.4µm pixel size and Ha wavelength is F/7.32, but you can't use that as you are not in outer space. There is atmosphere and there is mount tracking performance.

Measure the FWHM in your subs in arc seconds and divide that by 1.6 - that will give you your optimal sampling rate.

I was thinking of the mid visual band, as used in lunar and planetary imaging, where anything between F/10 and F/12 is considered good for the ASI178 and ASI183. I do not want to vary the sampling too much per wavelength; it's impractical. I do intend to see if my 0.6x reducer could be used with this sensor, as F/3.6 would be better. I tried once and got slightly eggy star shapes at the corners, so maybe the distance to the sensor is off. The thing is, I don't want to waste rare clear nights faffing around experimenting with spacers. If I get a string of clear nights, I might try this out.


3 minutes ago, michael.h.f.wilkinson said:

I see the error in forgetting the ADU count/e correction of 256. [...]

Yes, indeed, this is correct. I mentioned x10 based on values that I just guessed. The fact that you have sky background values from your subs makes it possible to get exact numbers instead of guessing.

4 minutes ago, michael.h.f.wilkinson said:

I was thinking mid visual band, as used in lunar and planetary imaging, where anything between F/10 and F/12 is considered good for the ASI178 and ASI183. [...]

Getting the planetary sampling rate is straightforward. We just need to look at the frequency cut-off point for the diffraction-limited case, which is

[attached image: spatial cut-off frequency formula]

https://en.wikipedia.org/wiki/Spatial_cutoff_frequency

We need to sample at twice that frequency (Nyquist). If we substitute pixel size in there, it is straightforward:

F# = pixel_size * 2 / wavelength

(the factor of two is due to Nyquist - two pixels per maximum-frequency cycle)

In the case of 2.4 µm pixels and, say, visual light of 550 nm = 0.55 µm, we get F# = 2.4 * 2 / 0.55 = ~8.73.

F/12 corresponds to 400 nm light, as 2.4 * 2 / 0.4 = 12, so yes, if you want to get the maximum detail possible you should use F/12, but in practice one should not go that high, since blue suffers atmospheric effects the worst and the detail is probably not going to be there anyway.
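The formula above reduces to a one-liner, reproducing both worked examples:

```python
# F-ratio at which a given pixel size critically samples the diffraction
# cut-off (Nyquist: two pixels per maximum-frequency cycle).
def critical_fratio(pixel_um, wavelength_um):
    """F# = 2 * pixel_size / wavelength, both in micrometres."""
    return 2 * pixel_um / wavelength_um

print(critical_fratio(2.4, 0.55))  # 2.4 µm pixels in green light
print(critical_fratio(2.4, 0.40))  # 2.4 µm pixels in blue light
```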


9 hours ago, vlaiv said:

Yes, indeed, this is correct. I mentioned x10 based on values that I just guessed. [...]

Nyquist and Shannon sampling theorems assume point sampling. For square pixels, especially with a high fill factor, a correction of about 10-20% is needed, according to Van Vliet in his PhD thesis (early 90s, as I recall).


3 hours ago, michael.h.f.wilkinson said:

Nyquist and Shannon sampling theorems assume point sampling. For square pixels, especially with high fill factor, a correction of about 10-20% is needed according to Van Vliet in his PhD thesis (somewhere early 90s, as I recall) 

No need for correction.

The only difference between point sampling and relatively square-ish pixels (not really perfect squares in real life) is that the pixels convolve the original function with the shape of the pixel (perfect squares would convolve it with a 2D square), which acts as a low-pass filter but does not change the cut-off frequency.

A perfect square, for example, attenuates the signal to about 70% of the original at the cut-off frequency if the pixel is matched as the above calculations say.

Here is the FFT of a perfect square pixel:

[attached image: FFT of a perfect square pixel]

That represents the low-pass filter acting on the original signal.

[attached image: pixel low-pass filter with the diffraction cut-off frequency marked]

I have roughly marked the cut-off frequency of the signal due to diffraction at the scope aperture, and the area where the signal is contained. As you can see, this low-pass filter does not change the cut-off frequency - it just attenuates the frequencies below it a bit more.
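The square-pixel low-pass filter is a sinc in the frequency domain, so the attenuation at the matched cut-off (0.5 cycles per pixel) can be computed directly; it works out to 2/pi ≈ 0.64, in the ballpark of the ~70% figure quoted above:

```python
import math

def pixel_mtf(f_cycles_per_px):
    """MTF of an ideal square pixel aperture at spatial frequency f
    (cycles per pixel): the normalised sinc function."""
    x = math.pi * f_cycles_per_px
    return 1.0 if x == 0 else math.sin(x) / x

print(pixel_mtf(0.5))  # attenuation at the Nyquist-matched cut-off
```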


11 minutes ago, vlaiv said:

No need for correction. [...]

Well, I will dig up his thesis one of these days to check his results.


2 minutes ago, vlaiv said:

Check out this paper for example:

https://www.radioeng.cz/fulltexts/2004/04_04_27_34.pdf

I am fully up to speed with convolution (I teach it in my Computer Vision class). If you blur any PSF with a square, the FWHM goes up, so conversely the spread of frequencies in the Fourier domain goes down, and the higher frequencies get drowned in noise earlier. This is just basic mathematics. The experience of most planetary imagers is that slight oversampling gets out more detail.

