
Imaging Galaxies - get rid of 400/500 fl OTAs???


iapa

Recommended Posts

As an imager with a preference for galaxies and little to show for it to date, I am reconsidering ownership of my 400/500 mm focal length OTAs (RASA 8", Equinox 80ED Pro, Esprit ED80 Pro) and thinking of just using the 8" SCT and RC along with the 8" and 10" 1000 mm reflectors.

Looking for advice as to whether this is a reasonable position to take.



My experience says that you need more than 500mm and not more than 1000mm now that pixels are so small. You also need excellent resolution, so forget the RASA. I use one and like it very much but not for small galaxies. Provided collimation is good I don't see the 1000mm Newt doing a bad job. Personally I use a TEC 140 for 1000mm but that's an expensive choice and requires long exposure times.

Olly


Uneducated response with little experience, so take this with a pinch of salt. But I suppose if you just wanted a good all-in-one, after selling all that gear, you could get an EdgeHD with a Hyperstar and reducer. You've got the choice of F10, F7 and F2 then.

Might be wrong, someone correct me if I am.


7 minutes ago, Grant93 said:

Uneducated response with little experience, so take this with a pinch of salt. But I suppose if you just wanted a good all-in-one, after selling all that gear, you could get an EdgeHD with a Hyperstar and reducer. You've got the choice of F10, F7 and F2 then.

Might be wrong, someone correct me if I am.

I'm afraid this is wrong. The Hyperstar advertising is disgracefully misleading because it gives the impression that, in going from F10 to F2, you get the same image in a fraction of the time. You don't. You get a tiny image in a fraction of the time if you're shooting a small target. This is because the focal length is reduced in order to speed up the F ratio. To get the same image in a fraction of the time you would need to keep the focal length as it is and increase the aperture. This discussion comes under the famously controversial title of The F Ratio Myth. Go there at your peril!

Galaxy imaging: first you need to decide on an image scale in arcsecs per pixel. Say 1.2 or 1.5 or whatever. This is determined by FL and pixel size. Then, with both of those fixed, you can think about aperture. If you only think about F ratio you'll go up a gum tree!

:Dlly


Maybe best thing is not to think in terms of focal length, but instead to think in terms of achievable resolution?

Set yourself a simple goal - create an image at 1.5"/px that has 2.4" FWHM stars, for example (or any other resolution for that matter - it is not the 1.5"/px that counts but rather the 2.4" star FWHM).

When you manage that - think of going to a higher resolution setup.

By the way, you need only ~515mm of focal length with a 3.75um pixel camera to image at 1.5"/px.
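That relationship is simple enough to check yourself. A minimal sketch in Python (the constant 206.265 combines the radian-to-arcsecond conversion with the um-to-mm conversion):

```python
def image_scale(focal_length_mm, pixel_um):
    """Sampling rate in arcseconds per pixel from focal length and pixel size."""
    return 206.265 * pixel_um / focal_length_mm

# The figure quoted above: ~515 mm with 3.75 um pixels
print(round(image_scale(515, 3.75), 2))   # 1.5 "/px

# The 2000 mm / 3.76 um combination mentioned later in the thread
print(round(image_scale(2000, 3.76), 2))  # 0.39 "/px
```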

 


6 minutes ago, ollypenrice said:

I'm afraid this is wrong. The Hyperstar advertising is disgracefully misleading because it gives the impression that, in going from F10 to F2, you get the same image in a fraction of the time. You don't. You get a tiny image in a fraction of the time if you're shooting a small target. This is because the focal length is reduced in order to speed up the F ratio. To get the same image in a fraction of the time you would need to keep the focal length as it is and increase the aperture. This discussion comes under the famously controversial title of The F Ratio Myth. Go there at your peril!

Galaxy imaging: first you need to decide on an image scale in arcsecs per pixel. Say 1.2 or 1.5 or whatever. This is determined by FL and pixel size. Then, with both of those fixed, you can think about aperture. If you only think about F ratio you'll go up a gum tree!

:Dlly

Sorry, I should've mentioned: I understand that as you lower the F ratio of the EdgeHD via a reducer or Hyperstar, you also reduce the focal length, so the image you will be taking becomes wider-field. The impression I get from Hyperstar advertisements is that if you add it to an EdgeHD, it effectively becomes a RASA - widefield and super fast light collecting. Is this wrong?

My point was, if he got all 3, he would have a choice of whether he wanted up close at F10, or widefield at F2, or something in between. Not arguing - trying to clarify and make sure I understand it also :D :D


43 minutes ago, Grant93 said:

Sorry, I should've mentioned: I understand that as you lower the F ratio of the EdgeHD via a reducer or Hyperstar, you also reduce the focal length, so the image you will be taking becomes wider-field. The impression I get from Hyperstar advertisements is that if you add it to an EdgeHD, it effectively becomes a RASA - widefield and super fast light collecting. Is this wrong?

My point was, if he got all 3, he would have a choice of whether he wanted up close at F10, or widefield at F2, or something in between. Not arguing - trying to clarify and make sure I understand it also :D :D

Yes, it becomes like a RASA, though the RASA is purpose-designed from the off and performs better. The problem is that, arguably in this world of small pixels, the Edge has too much focal length, more than can be exploited on deep sky. You can bin down to a more workable resolution and gain speed but at a high cost in field of view.

58 minutes ago, vlaiv said:

 

By the way, you need only ~515mm of focal length with 3.75um pixel sized camera to image at 1.5"/px.

 

I agree with your main points, but if the 515mm is done with a small refractor then the optical resolution (Dawes limit) will be less than that from a larger refractor or Newt at 1000mm. The Tak 105/TEC 140 Dawes limits are 0.83/1.09 arcsecs respectively. What's your view on the relationship between real image resolution and optical resolution? Or, to put it another way, how much optical resolution do you think is needed to support, say, the 1.5 arcsecs you mention?

Olly


7 minutes ago, ollypenrice said:

I agree with your main points, but if the 515mm is done with a small refractor then the optical resolution (Dawes limit) will be less than that from a larger refractor or Newt at 1000mm. The Tak 105/TEC 140 Dawes limits are 0.83/1.09 arcsecs respectively. What's your view on the relationship between real image resolution and optical resolution? Or, to put it another way, how much optical resolution do you think is needed to support, say, the 1.5 arcsecs you mention?

Olly

Attainable resolution will depend on Airy disk size (for diffraction limited scopes) or spot diagram RMS, guiding RMS and seeing FWHM.

A 100mm diffraction limited scope (over the whole spectrum imaged) with 0.6" guide RMS and 1.5" FWHM seeing will produce a 2.4" FWHM result.

In any case, the math is as follows:

FWHM = 2.355 * sigma (RMS)

RMS total = square root ( RMS1^2 + RMS2^2 + RMS3^2 ....)

Guide RMS is used as-is.

The Airy disk converts to RMS as arcsin(0.42 * lambda / diameter) (this returns a value in radians; both the wavelength of light and the diameter need to be in the same units - be that mm or um or inches or whatever). Alternatively, you can use the spot diagram RMS if you have optics that are not diffraction limited.

Seeing RMS is seeing FWHM / 2.355.

I would recommend using at least 6" of aperture when aiming for 2.4" FWHM stars, as that leaves some room for guide RMS and seeing.

6" needs 1.6" FWHM seeing and 0.7" RMS guiding for 2.4" FWHM.

By the way, without seeing and guiding/tracking influence, 100mm of aperture can happily resolve at ~0.52"/px, although the Dawes limit is 1.16" for a 100mm scope (the Dawes limit is not a particularly useful measure for imaging).
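Putting the formulas above together, here is a minimal Python sketch of that FWHM budget. The 550 nm wavelength used for the Airy term is my assumption (roughly the middle of the visible band); everything else follows the quadrature sum as stated:

```python
import math

ARCSEC_PER_RAD = 206264.8

def airy_rms_arcsec(aperture_mm, wavelength_nm=550):
    """Airy disk contribution as an RMS blur: arcsin(0.42 * lambda / D), in arcseconds."""
    lambda_mm = wavelength_nm * 1e-6          # convert nm -> mm, same units as aperture
    return math.asin(0.42 * lambda_mm / aperture_mm) * ARCSEC_PER_RAD

def total_fwhm(aperture_mm, guide_rms_arcsec, seeing_fwhm_arcsec, wavelength_nm=550):
    """Combine aperture, guiding and seeing terms in quadrature; convert RMS to FWHM."""
    seeing_rms = seeing_fwhm_arcsec / 2.355
    total_rms = math.sqrt(airy_rms_arcsec(aperture_mm, wavelength_nm) ** 2
                          + guide_rms_arcsec ** 2
                          + seeing_rms ** 2)
    return 2.355 * total_rms

# The worked example above: 100 mm scope, 0.6" guide RMS, 1.5" seeing FWHM
print(round(total_fwhm(100, 0.6, 1.5), 2))   # 2.35 - i.e. the ~2.4" figure quoted
```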

 

 


To get "up close and personal" with galaxies requires longer focal lengths. With today's cameras you will be oversampled, and bad seeing is the thing that will bite you most often.

This image of NGC4565 was made with an ASI-533MCp and my 2000 mm FL AT10RC "Galaxy Cannon" at 0.39"/pixel.

Don't let all this talk stop you... just get out and do it.  (^8

 

[attached image: NGC4565]

[attached image: the 10" "Galaxy Cannon"]


23 minutes ago, CCD-Freak said:

This image of NGC4565 was made with an ASI-533MCp and my 2000 mm FL AT10RC "Galaxy Cannon" at 0.39"/pixel.

Interestingly enough, the image that you linked shows the galaxy as ~740px long.

The Needle is about 14.5' across - which is 870 arc seconds.

870" / 740px = ~1.17"/px

A far cry from 0.39"/px.

 


Good math.  (^8

The original 3008 x 3008 image was resampled down to 750 x 750 to make it a reasonable size for posting.

2000 mm FL with 3.76u pixels yields 0.39"/pixel.

 

 


9 hours ago, CCD-Freak said:

Good math.  (^8

The original 3008 x 3008 image was resampled down to 750 x 750 to make it a reasonable size for posting.

2000 mm FL with 3.76u pixels yields 0.39"/pixel.

 

 

In that case how about posting a crop at the full resolution? A downsampled image doesn't support your argument in favour of 0.39"PP over a much coarser sampling rate. If you really can resolve at that sampling rate I'll be the first to congratulate you (sincerely) but, when I imaged at about 0.62"PP, I found I could not present images at anywhere near full size. I was in a world of 'empty resolution,' if you like. I have since gone from a FL of 2.4 metres down to one of 1.0 metre, using a camera giving just below 1"PP, and got results I could crop and view at full size. Even then, though, Vlaiv provided a convincing demonstration that I was still over-sampled: he resampled the image downwards and then back up again. The result was the same as the original.

That's a very crisp image indeed, for sure, but the debate is about whether or not you really needed 0.39"PP to achieve it.
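That down-then-up resample check is easy to reproduce. The sketch below is a toy version in plain Python (2x2 averaging for the downsample, pixel replication for the upsample - cruder interpolation than a real resampler, so treat it as an illustration of the idea rather than the exact method used): a genuinely oversampled image survives the round trip almost unchanged, while a critically sampled one loses real detail.

```python
import math

def gaussian_image(n, sigma):
    """Synthetic star: a single 2D Gaussian blob on an n x n grid."""
    c = (n - 1) / 2
    return [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
             for x in range(n)] for y in range(n)]

def bin2(img):
    """Downsample by averaging 2x2 pixel blocks (software binning)."""
    m = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4
             for x in range(m)] for y in range(m)]

def up2(img):
    """Upsample back to the original size by pixel replication."""
    m = len(img) * 2
    return [[img[y // 2][x // 2] for x in range(m)] for y in range(m)]

def rms(img):
    n = len(img)
    return math.sqrt(sum(v * v for row in img for v in row) / n ** 2)

def round_trip_loss(img):
    """Relative RMS difference between an image and its bin-then-replicate copy."""
    back = up2(bin2(img))
    n = len(img)
    diff = math.sqrt(sum((img[y][x] - back[y][x]) ** 2
                         for y in range(n) for x in range(n)) / n ** 2)
    return diff / rms(img)

oversampled = gaussian_image(32, 6.0)  # fat, smooth star: little detail per pixel
critical = gaussian_image(32, 1.0)     # tight star: real detail at the pixel scale

print(round_trip_loss(oversampled))  # small: the round trip loses almost nothing
print(round_trip_loss(critical))     # much larger: real resolution is destroyed
```

If the round-trip loss on your own data is tiny, the extra sampling was empty resolution and binning would have cost nothing.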

Some galaxy images from a 1m focal length:

[attached image: M101, HST colour, 30 hr, crop]

 

[attached image]

Basically I'm not convinced that, with present-day amateur cameras, we still need very long focal lengths.

Olly


2 hours ago, ollypenrice said:

Basically I'm not convinced that, with present-day amateur cameras, we still need very long focal lengths.

I agree with this and would like to add the following:

While we don't need very long focal length scopes, there is nothing wrong with using them as long as you pay attention to a few things.

I often hear that it is much harder to image or guide at, say, 2m of focal length. It is not. Mount error and achieved resolution really don't care about the focal length you are using (the size of the telescope and its length will have some impact, but not the focal length). As long as you tie yourself to a sampling rate rather than a focal length, you will be fine.

There is no difference when imaging at 1.5"/px whether you are working with 500mm, 1000mm or 2m of focal length.

In the end, long focal length scopes allow for larger aperture and hence more light gathering power, while still being optically "easy" thanks to their slower F/ratio.


I have used my Esprit 100 @F5.5 to image galaxies at 1.4 arcseconds per pixel and I am not convinced I need a longer focal length. But I would say that maintaining the same kind of image scale with a bigger-pixel camera and larger aperture scope would have benefits in terms of image quality, if not resolution. Being able to image faster means you have more ability to be selective and use data from only the best nights of seeing. There are some locations where you will get better seeing than is available where I am in the UK, though, so I am not saying you should consider 1.4 arcseconds per pixel a hard deck; just that you need to be realistic.

Adam


A lot of sensible things have already been said. Resolution-wise, aim for the resolution per pixel that your local seeing affords. Within that resolution, aperture gets you more light, so it is always good. A lot of galaxies are easy to image with not too much exposure time to get a pretty picture; it takes a lot more exposure time to image things like stellar streams from galaxies.

A bit off topic, but one area where long focal lengths / oversampling can be rewarding is in the realm of lucky imaging. This is an intriguing line of imaging with a lot still to be discovered. As exposure times are really short (0.1 second or less), one needs a lot of light gathering capability and/or very bright objects to make this work. There are very impressive examples (e.g. exaxe on astrobin).

 


3 minutes ago, Annehouw said:

A bit off topic, but one area where long focal lengths / oversampling can be rewarding is in the realm of lucky imaging.

Oversampling is never rewarding :D

What you are really saying is that lucky imaging reduces the impact of the atmosphere and tracking errors, and therefore higher sampling rates will be optimal in comparison with long exposure.

 


17 minutes ago, Annehouw said:

Haha, you got me there. 🙂

Said otherwise: with lucky imaging you sample for the resolving power of the telescope instead of sampling for local seeing. 

Well - almost.

I don't think we can ever beat the seeing in lucky DSO imaging as we do with lucky planetary imaging.

There are two fundamental problems with lucky DSO imaging. First is exposure time.

To make stacking feasible with lucky DSO imaging, we need to use exposures on the order of a second - maybe one order of magnitude less than that, so 0.1s. That is still x20-x200 longer than the exposures needed to freeze the seeing - the ones used in planetary imaging - which are on the order of 5ms.

Second is the size of the FOV, and alignment issues.

Modern lucky planetary imaging software can choose to stack only part of the frame - the part that is sharp / undistorted enough. The larger the FOV, the more variance in seeing there will be across it. If you look at adaptive optics systems, they tend to operate on very small patches of the sky, because the wavefront deformation that adaptive optics corrects is present only over a small patch of the sky - a few arc seconds across.

With lucky DSO imaging, we only have bright stars to align against. The shorter the exposure, the fewer stars to choose from with sufficient SNR to give us good alignment. Even if we have enough alignment stars, we can only keep / reject subs based on the quality of their star images. That tells us nothing about the wavefront deformation between those stars, where the object of interest lies.

Bottom line: with planetary lucky imaging and very short exposures we can almost fully beat the seeing and exploit the full resolving power of the optics. With lucky DSO imaging, we simply lessen the impact of the atmosphere, but we can't really beat it in the same way we can with lucky planetary imaging. The effective resolving power of the approach lies somewhere between long exposure and lucky planetary imaging.


Lucky DSO imaging has a lot of challenges. That's why it is fun 🙂

Let me address a few of the ones you mentioned. In popular parlance, lucky imaging is sometimes used for exposure times below 5s or so. That does little to battle the turbulence, but it does do a bit in the way of lucky guiding. For beating the turbulence, we need to be somewhere between 0.1s and 0.01s. I'll link some interesting papers below.

- Alignment is not a problem in my experience. As with planetary images (where there is not a single star in sight to align on), modern software for this purpose does a fine job of quality selection and stacking. I used SIRIL in my own experiments and that works well.

- What is a problem with increasingly shorter exposures is read noise. For normal DSO imaging, read noise in modern cameras is now so low that it has little influence. But when stacking thousands of very short exposures, it becomes an issue, and in practice this limits the shortest usable exposure time. There are sensors like emCCDs with very low read noise, but these are prohibitively expensive. Having said that, technology marches on and we will see lower read noise cameras for mere mortals in the future.

One more thing: inherently in lucky imaging, you throw away data. And, depending on one's criterion for sharpness, this can be a lot. So this is not a technique for faint fuzzies.
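The read-noise point can be made concrete with a simple per-pixel SNR model. All the numbers below are illustrative assumptions, not measurements (a hypothetical 1 e/s target, 5 e/s sky, 1.5 e read noise): a stack of N subs pays the read-noise penalty once per sub, so the noise term N * RN^2 balloons as subs get shorter at fixed total integration time.

```python
import math

def stack_snr(total_time_s, sub_s, target_eps, sky_eps, read_noise_e):
    """Per-pixel SNR of a stack: shot noise from target + sky, plus read noise once per sub."""
    n_subs = total_time_s / sub_s
    signal = target_eps * total_time_s
    noise = math.sqrt(signal + sky_eps * total_time_s + n_subs * read_noise_e ** 2)
    return signal / noise

# One hour total, hypothetical 1 e/s target, 5 e/s sky, 1.5 e read noise
print(round(stack_snr(3600, 10.0, 1.0, 5.0, 1.5), 1))  # ~24: read noise negligible at 10 s subs
print(round(stack_snr(3600, 0.1, 1.0, 5.0, 1.5), 1))   # ~11: read noise dominates at 0.1 s subs
```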

 

Now some links for interesting reading:

Thesis: Lucky Imaging - beyond binary stars. This is from 2013 and a bit dated by now, but it is a nice read.

Lucky Imaging: High Angular Resolution Imaging in the Visible from the Ground. This one is more recent, and a lot shorter than the thesis above. They report their findings at 12 Hz.


3 hours ago, Annehouw said:

For beating the turbulence, we need to be somewhere between 0.1s and 0.01s. I'll link some interesting papers below.

Quite right - and an even shorter time is needed for the visible part of the spectrum. 10ms might be enough at 900nm in an I filter, but it's not going to be enough at 500nm in a V filter (or other visible filters for that matter).

3 hours ago, Annehouw said:

Alignment is not a problem in my experience. As with planetary images (where there is not a single star in sight to align on), modern software for this purpose does a fine job of quality selection and stacking. I used SIRIL in my own experiments and that works well.

Care to share some of your high resolution work?

In the second paper you linked, they managed to get 0.26" FWHM, which equates to something like 0.1625"/px on a 2.5 meter telescope that has a diffraction limited resolution of ~0.0372"/px at 900nm (the maximum wavelength of the I band) - a sampling rate about x4.376 coarser than the diffraction limit.

They themselves conclude that the effective Strehl of such a system is in the range of 0.15-0.2 and that they image an area of about 20" x 20".

That is with a telescope that gathers x100 more light than the 10" amateurs consider a large telescope :D
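For reference, the two sampling figures in that comparison follow from simple formulas: the diffraction-limited rate is roughly lambda / (2D), and the measured 0.26" FWHM converts to a maximum useful sampling rate via the FWHM/1.6 divisor implied by the 0.26" -> 0.1625"/px step above. A quick check:

```python
ARCSEC_PER_RAD = 206264.8

def diffraction_limited_scale(aperture_m, wavelength_nm):
    """Critical sampling rate ("/px) for a diffraction-limited aperture: ~lambda / (2 D)."""
    return ARCSEC_PER_RAD * wavelength_nm * 1e-9 / (2 * aperture_m)

print(round(diffraction_limited_scale(2.5, 900), 4))  # 0.0371 "/px for the 2.5 m telescope
print(0.26 / 1.6)                                     # 0.1625 "/px from the 0.26" FWHM
```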


Hi @vlaiv

"Care to share some of your high resolution work?" I am afraid that will have to wait for a while. I am still in the early experimentation stage with no results to share for now. 

As an alternative, I can direct you to a person who I see as the current Grand Master in this genre: Stephane Gonzales aka exaxe on astrobin: Exaxe: NGC 7027 

There is a lot to his skills. From the description section of this image you can see that he used an imaging scale of 0.17 arcseconds per pixel - the original reason I joined this conversation. Also note that his shortest exposures are 0.15s, so in terms of freezing the turbulence there is still room for growth. Even so, his results are spectacular (in my eyes at least). He has been perfecting his techniques for years and this image, as you can see from the description, was captured in quite an elaborate way.

My takeaway from this is that DSO Lucky Imaging has a lot of yet to be explored potential for amateurs as well. 

P.S. Stephane has a dedicated website with a bit more on his techniques: Astrophotographie avec la technique du lucky imaging  Enjoy!

 

Sorry to @iapa for going off-topic on this thread.

 

C.S. Anne


@Annehouw

It is indeed a very nice image, but I have to say that I'm rather confused by the details in the description. It mentions an IMX290-based sensor, which has 1920x1080 resolution, while the image itself is 3373x1658, so I'm guessing it is a mosaic?

Furthermore, a 300mm F/4 scope with a x3 barlow would give 3600mm of FL, and with a 2.9um pixel size that is, as you mentioned, 0.166"/px.

Yet the image is 0.093"/px when measured on the image itself. This roughly corresponds to the enlargement given by the ratio of the widths of the sensor and the image. So it's not a mosaic after all?

3373 / 1920 = ~1.75 and 0.17 / 0.093 = ~1.78

Why would the image be enlarged by x1.75 when F/12 is critical sampling for a 2.9um pixel size in green? There is no sense in doing that.

Maybe we should examine whether the image indeed resolves things.

[attached image: crop of the NGC 7027 image with a section of the shell outlined]

Let's concentrate on that shell bit I outlined. In the image it looks like a single prong with a small bit to the right.

How about a fully resolved image at this resolution?

[attached image: fully resolved reference image of the same region at the same scale]

A world of difference ...

Now, don't get me wrong - it is an excellent image, and far better than can be obtained with long exposure - but it is a far cry from resolving at the sampling rate the telescope aperture alone would suggest.

In my view, this is the scale at which the image starts to be close to fully resolved:

[attached image: side-by-side comparison at ~0.7"/px - original on the left, reference on the right]

Left is the original image and right is the reference. Even at this scale, the reference looks a bit sharper than the original, and this is about 0.7"/px.

Extraordinary resolution, by the way - better than anything possible with long exposure and amateur equipment, but again, not near what a 12" scope can resolve without the influence of the atmosphere.


@vlaiv

"Extraordinary resolution, by the way - better than anything possible with long exposure and amateur equipment, but again, not near what a 12" scope can resolve without the influence of the atmosphere."

Agreed. 

There is room for growth in this technique, and that is the point I am trying to make: with current and upcoming sensors with small pixels, high sensitivity and low read noise, there is a new frontier in high resolution imaging beyond the air turbulence barrier. How far beyond that barrier? I do not know, but, as in the old days, not being afraid to fall off the flat earth opens up whole new worlds. 😉

And for those still with us in this thread, a nice write-up on the post-processing side of this technique: Traitement des images en mode courtes poses ("Processing short-exposure images"). It is in French, but with a lot of pictures and "astro-franglais" words.

 

P.S. As to your questions on image scale and mosaic vs. non-mosaic... I do not know. There is a bit more behind the scenes here: http://astrophoto17.eklablog.com/recent/5 but not enough to answer your questions.

 

CS Anne

