Next step up from stock mirrorless camera



Hi guys

I'm after some advice on what I should be looking for when I decide to get something a bit more dedicated to astrophotography, please.

I currently shoot with an A7R II, which has given me some decent results so far - not amazing, but pleasing to my eyes. Typically I'll be taking pictures of DSOs; I have plans for things like the Whirlpool Galaxy, the Crab Nebula, the Rosette Nebula, etc. I'm using the Skywatcher Evostar 80ED on the EQ3 mount with a flattener/reducer.

How important are pixel count and sensor size? Currently I'm shooting at 42MP full frame, which lets me crop in a fair bit, but the pixels aren't the biggest. I've seen a used QHY8L on eBay, which is 9MP (I think) and APS-C - so much bigger pixels but much lower resolution. I understand I'd gain better Ha sensitivity, and better sensitivity in general with it being actively cooled. Would this be a decent next step?

Thank you for any answers

Will


Both sensor size and pixel size are very important considerations - as long as you match them to the scope.

For example - the ED80 will not cover a full frame sensor, and stars in the corners will not look good - as you no doubt already know.

I think it is best to think in the following terms when deciding on what sort of camera + scope combination you want to use.

First, determine what your working resolution / sampling rate will be. For example - if you want to go wide field, choose something in the range 2.5"/px to 3"/px. If you want medium to wide images, choose 2"/px. If you want a general/medium working resolution, that would be 1.5"/px. You should not sample finer than that unless you have a very good mount, a larger scope (6" or larger) and can guide very well.

The next step would be to see how you can achieve that resolution and what your field of view would be. Here sensor size and pixel size play a major part, along with the choice of telescope.

Larger sensors have an advantage, as they will be faster for a given sampling rate. This is because you can use a larger telescope and bin your pixels to make them effectively larger. A larger telescope means more aperture - more photons gathered at the wanted sampling rate.

Pixel size gives you the "grain" in which you can change resolution. Large pixels mean you'll have to use them as they are, or possibly bin x2. Small pixels give you more flexibility, as you can use them natively or bin x2, x3, x4, etc.

For example - if you have a 6.45µm pixel size, then you have that, or possibly 12.9µm if you bin x2. If, on the other hand, you have something like a 2.9µm pixel size, then you have 2.9µm, 5.8µm, 8.7µm and 11.6µm - native, bin x2, bin x3 and bin x4. This means flexibility in choosing the imaging scope.

The QHY8L has 7.8µm pixels, and with the ED80 + FF/FR it will sample at 3.15"/px - so you are in wide field mode and there is no way to change that.

But the same camera on a scope with 2000mm of focal length samples at about 0.8"/px natively, and offers 1.61"/px, 2.41"/px and 3.22"/px at bin x2, x3 and x4 - so all three working resolutions, depending on how you choose to bin - except the FOV will be much smaller than with the ED80.
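The numbers in these examples can be reproduced with a small calculator. A minimal sketch in Python (the 206.265 constant converts µm and mm into arcseconds; the focal lengths are just the examples from this post - ~510mm for the ED80 with the 0.85x FF/FR, and a hypothetical 2000mm scope):

```python
def sampling_rate(pixel_size_um, focal_length_mm, bin_factor=1):
    """Sampling rate in arcseconds per pixel.

    Binning by N multiplies the effective pixel size by N,
    so the sampling rate scales with the bin factor.
    """
    return pixel_size_um * bin_factor / focal_length_mm * 206.265

# QHY8L (7.8um) on ED80 + FF/FR (~510mm effective focal length):
print(round(sampling_rate(7.8, 510), 2))          # ~3.15 "/px

# Same camera on a 2000mm scope, bin x2 / x3 / x4:
for b in (2, 3, 4):
    print(round(sampling_rate(7.8, 2000, b), 2))  # ~1.61, 2.41, 3.22 "/px
```

The same function gives the "grain" of available resolutions for any pixel size, which makes it easy to compare camera + scope combinations before buying.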

In the end, it is a balance, and it will depend on what you choose as your working resolution. Do bear in mind that the EQ3 mount is inadequate for anything but very low res / wide field work.

As for active cooling - cooling is important, but set point temperature is more important, as it enables you to do proper calibration with precise calibration files. That matters much more than the actual "sub zero temperature". So yes - go for set point cooling if you can - just keep in mind that dedicated astronomy cameras with cooled sensors can be quite expensive.

If you like QHY8L, then have a look at these:

https://www.firstlightoptics.com/zwo-cameras/zwo-asi071mc-pro-usb-30-cooled-colour-camera.html

or

https://www.firstlightoptics.com/zwo-cameras/zwo-asi-2600mc-pro-usb-30-cooled-colour-camera.html

Just a question - why not continue to work with that full frame mirrorless, look into binning, and maybe put the funds towards a better mount instead?

This looks like a very decent replacement mount:

https://www.firstlightoptics.com/ioptron-mounts/ioptron-cem26-center-balanced-equatorial-goto-mount.html

 


Thank you so much for the really detailed reply, you've definitely been a great help. 

Yeah, I've definitely noticed the lack of coverage on full frame, although since getting an M48 adapter it's not actually too bad with the flattener - I get a little vignetting in the corners, which the flats and a bit of cropping fix.

I'm thinking more general, rather than wide field, in terms of what I'm looking at imaging. The 80ED is a bit of a compromise on price and the space I have available to store it.

I'm a bit tired so I'm having a bit of trouble understanding binning, sorry. Am I understanding right that if I bin the pixels, I'm trading resolution (say 2" to 3") for better light gathering? I didn't know I could do binning with something like my A7R II - I'm guessing I would need a program for that? I've had a look and it seems it has a 4.5µm pixel size; how would I calculate the effective working range for that?

Yeah, the EQ3 isn't fantastic, but it's done me OK so far - I've managed 2' exposures with no real noticeable drift, and captured a few frames of the Crab the other night before the clouds came over. It was a bit sticky and stiff, but since taking it apart and tuning it, it feels much better, and I'm hoping to try it out tomorrow and Thursday, fingers crossed. A better mount is definitely on the cards at some point though.

I think I'll take your advice and keep going with my Sony for now, and have a look into binning. Thank you again for your detailed reply.

Will

 


Yes, binning is effectively like having larger pixels. You take all the electrons captured by four adjacent pixels - a group of 2x2 pixels - and sum them.

Resolution / sampling rate goes down by a factor of 2, but SNR improves by a factor of 2. It is a trade-off between resolution and image quality. In fact, if you are over sampled - pixels too small for your scope, mount and skies - you don't lose any detail by binning.

CCD sensors can bin in hardware, and both CMOS and CCD sensors can bin in software. The difference is in how much read noise ends up in the image. With CCDs and hardware binning, the summing is done in silicon and only one dose of read noise is applied. With software binning, the summing is done after read-out and each pixel carries its own read noise, so the overall read noise rises by a factor of 2 (or rather, by the bin factor - if you bin 3x3, you get x3 read noise for the resulting large pixels).

CMOS sensors have much lower read noise than CCDs, so that is not a big deal. An average CMOS has about 2e of read noise, while a CCD has about 8e - so you can easily bin x2, x3 or even x4 and still have the same or less read noise than an average CCD.
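Software binning as described above is just a sum over 2x2 blocks, which is easy to sketch with NumPy (a minimal illustration, not any particular package's implementation; the 4x4 array is made-up data and even dimensions are assumed):

```python
import numpy as np

def bin2x2(img):
    """Sum-bin a 2D image 2x2 in software (even dimensions assumed)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

img = np.arange(16, dtype=np.float64).reshape(4, 4)
binned = bin2x2(img)
print(binned.shape)  # (2, 2) - half the resolution in each axis

# Each output pixel sums 4 reads. Independent read noise adds in
# quadrature, so the summed pixel carries sqrt(4) = 2x the read noise
# (and in general, bin NxN in software -> Nx read noise per big pixel).
```

This is the "after read-out" case from the post; hardware CCD binning avoids the extra read noise because the charge is summed before it is ever read.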

For binning, you can use any software capable of doing it - PixInsight has it as IntegerResample, StarTools has an option to bin data in the processing stage after stacking, ImageJ can bin subs before you stack them while they are still raw, and so on...

Sampling rate is calculated by this formula:

sampling rate ("/px) = pixel size (µm) / focal length (mm) × 206.265

and you can find a handy calculator for that here:

http://www.wilmslowastro.com/software/formulae.htm#ARCSEC_PIXEL

So your 80ED with a 4.5µm pixel size is sampling at 1.82"/px - which is a rather good general purpose resolution.

What do your images look like when viewed at 100% zoom level (1:1 - one image pixel to one screen pixel)? If you still have nice round stars, then your mount is doing a good job.


Ah right, I understand. Thank you for the detailed explanation. I'll have a look into some programs for binning and see where it takes me - maybe there's a way to do it in Affinity Photo, as that is what I have been using so far for stacking and processing. I know narrowband filters on a standard DSLR are pretty much a no go, but could I use binning with something like an Ha filter or a dual/tri/quad band filter to increase the sensitivity, and then combine that with the RGB image? I've heard about HaRGB composites and wonder if binning could offset the loss of photons when using narrowband.

I've included a screenshot of one of my subs at 100%, and looking at it that zoomed in, it doesn't actually look all that great. Typically though I don't push the mount much past 60" exposures - 120" was pushing it a bit too far. Also, I did my polar alignment before attaching the scope because I was rushing due to the impending clouds, so that probably didn't help things. The second image is a more typical 60" exposure. These are all aligned by eye and tracked using the single Skywatcher RA motor.

Hopefully with your insight I may be able to get some better results next time I'm imaging. 

 

[attached: two screenshots - a sub viewed at 100%, and a typical 60" exposure]


From a quick search, Affinity Photo does not seem to have pixel binning.

You can use an Ha filter with a color sensor, but you need to be aware of a few things. It will have only 1/4 sensitivity, as only the red pixels of the CFA (Bayer matrix) are sensitive at Ha wavelengths. The proper way to utilize Ha would be to shoot with that filter and, after stacking, extract just the red channel and throw away blue and green.

Also, make sure you use raw data that has not been color processed in any way - the camera and software do color balancing, and you don't want any color balancing here. DCRaw can extract pure raw data from raw files; that is what you want for the Ha image. Once you have it, you can bin the image like a regular image - and in fact you should, in order to make it the same resolution as the RGB data so you can properly align them.
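Once you have the true raw mosaic, pulling out the red photosites is just array slicing. A minimal sketch in NumPy - the 6x6 array is made-up data, and an RGGB layout is assumed here (the actual CFA pattern depends on the camera; dcraw reports it):

```python
import numpy as np

# Hypothetical raw Bayer mosaic (stand-in for DCRaw output).
# RGGB layout assumed: red photosites sit at even rows / even columns.
raw = np.arange(36, dtype=np.uint16).reshape(6, 6)

red = raw[0::2, 0::2]
print(red.shape)  # (3, 3) - half resolution in each axis

# The extracted red frame is half-size, so bin or resample the RGB
# stack to the same scale before trying to align the two.
```

For an RGGB sensor this keeps only the pixels that actually carry Ha signal, which is exactly the "extract red, throw away blue and green" step described above.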

An alternative would be a UHC filter instead - a cheap version of a quad band filter. It will pass Ha, Hb, OIII and SII wavelengths and a bit more. It is not as effective as regular narrowband filters, but it does help against light pollution on emission type targets.

Do consider upgrading mount and adding guiding.


Yeah I couldn't find anything either. I'll have a look into some of the other programs though.

That was something I had come across when I had been looking at some Ha filters - that I would only have ¼ of the sensitivity due to the Bayer matrix and the wavelength of Ha. And am I right in understanding that the IR cut filter in the camera reduces it further?

So if I were to go that route, I'd extract that colour data before stacking? I think I can do that in the Affinity Photo developer; I might even be able to make a macro to do a whole batch in one go, although I may have to export to 16-bit TIFF. I think Affinity now has star alignment, which may help.

I've looked at things like UHC filters and considered them - namely the Baader UHC, the Astronomik UHC and the Optolong L-eNhance, which all have varying passbands. I currently have the Baader Neodymium Moon & Skyglow filter, as it was something I had experience with when I was shooting with my lenses.

A mount upgrade is definitely on the cards, and I've considered guiding - I'll have to get saving.

Thanks for all your help and advice, and for taking the time to reply. Is there any recommended reading so I can go away and research things a bit further?


46 minutes ago, Chimer4 said:

That was something I had come across when I had been looking at some Ha filters - that I would only have ¼ of the sensitivity due to the Bayer matrix and the wavelength of Ha. And am I right in understanding that the IR cut filter in the camera reduces it further?

Yes. It actually depends on the IR cut filter - but most cameras have a sloped IR cut filter rather than a sharp edged one, as this provides better color balance.

This image sums it nicely:

[image: Nikon D5100 sensor transmission profile]

It shows that the IR cut filter reduces Ha sensitivity to about 20% of what it would otherwise be for that particular sensor.

48 minutes ago, Chimer4 said:

So if I were to go that route, I'd extract that colour data before stacking? I think I can do that in the Affinity Photo developer; I might even be able to make a macro to do a whole batch in one go, although I may have to export to 16-bit TIFF. I think Affinity now has star alignment, which may help.

If you can, try the DCRaw software. It is a command line utility and can extract completely raw sensor data, which is what you need. Mind you, such raw data will look very strange - all green or similar - as no color correction has been applied to it.

The thing with color correction is that it uses a matrix multiplication, so the final red contains all three raw components - raw blue, raw green and raw red. The problem is that under an Ha filter the green and blue raw components really contain only noise and no signal, so you end up injecting that noise into the Ha signal for no good reason if you let the software treat the data like a regular photo and do color processing.

In any case, use DCRaw to get proper raw data from your camera and then compare it to what you get in Affinity Photo when you tell it to leave the image as raw data. If you get the same thing, great. If not, I suggest you use software that can extract the real sensor raw data from your images for Ha.

