
12-bit vs 14-bit DSLRs



Hi All,

I was wondering if anyone has any experience comparing DSLRs with 12-bit pixel depth, like the Canon 700D, vs 14-bit ones like the Canon 800D. Might it be worth upgrading your existing 12-bit model for a 14-bit model? Also, what about the question of a cooled 12-bit camera like the ZWO ASI1600 compared to a 14-bit DSLR?

Cheers!

 


When I went from a Canon 10D to a 450D, which is 12-bit to 14-bit, the difference was huge. As the 450D was less noisy than the 10D, it was basically like getting four times the data in the same time.

It's a bit different with cooled CCDs, as they are very low noise, and the camera may be mono, making a direct comparison difficult.


I have used a Canon 1000D and a modded 550D, both 12-bit. Currently I'm using a Canon 6D and an 80D, both 14-bit. It is pretty difficult to make a comparison based on bit depth, because CMOS chip technology is constantly evolving, with increasing megapixels, better sensitivity and lower noise, before you even bring bit depth into the equation.

When I say I've used these cameras, I mean for widefield landscape night-time photography. In this situation the images aren't majorly underexposed. I've found that my 12-bit DSLR images would cope pretty well when stretched from around 3 stops underexposed. However, with deep sky, much more image stretching is often needed, and you would expect 14-bit to offer a significant advantage. The latest CMOS cameras seem to be breaking the rules though: they have very low read noise, which means you don't need such long exposures to overcome it. Large numbers of short sub-exposures appear to be the order of the day, with the use of gain (equivalent to high ISO) meaning the subs don't need as much stretching. In this situation bit depth may not be too critical.

There are lots of great images on SGL from ASI1600 users and these will be far more eloquent than anything I can say!!
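The read-noise trade-off described above can be sketched with a simple noise budget. A minimal sketch; the sky rate and read-noise values are made-up illustrative numbers, not measurements of any particular camera:

```python
import math

# Total noise for one long sub vs many short subs of the same total time.
# sky_rate and the read-noise values are made-up illustrative numbers.

sky_rate = 2.0      # sky background, electrons per second per pixel
total_time = 600    # total integration, seconds

def stack_noise(n_subs, read_noise):
    """Noise (electrons) of n equal subs summing to total_time."""
    sky_electrons = sky_rate * total_time
    return math.sqrt(sky_electrons + n_subs * read_noise ** 2)

# Old high-read-noise sensor (9 e-): splitting into 40 subs hurts a lot.
print(round(stack_noise(1, 9.0), 1), round(stack_noise(40, 9.0), 1))   # 35.8 66.6
# Low-read-noise CMOS (1.5 e-): 40 short subs cost almost nothing.
print(round(stack_noise(1, 1.5), 1), round(stack_noise(40, 1.5), 1))   # 34.7 35.9
```

With low read noise the 40-sub stack is nearly as clean as the single long exposure, which is why per-sub bit depth matters less on these cameras.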

4 hours ago, MartinB said:

I have used a Canon 1000D and a modded 550D, both 12-bit. Currently I'm using a Canon 6D and an 80D, both 14-bit. It is pretty difficult to make a comparison based on bit depth, because CMOS chip technology is constantly evolving, with increasing megapixels, better sensitivity and lower noise, before you even bring bit depth into the equation.

When I say I've used these cameras, I mean for widefield landscape night-time photography. In this situation the images aren't majorly underexposed. I've found that my 12-bit DSLR images would cope pretty well when stretched from around 3 stops underexposed. However, with deep sky, much more image stretching is often needed, and you would expect 14-bit to offer a significant advantage. The latest CMOS cameras seem to be breaking the rules though: they have very low read noise, which means you don't need such long exposures to overcome it. Large numbers of short sub-exposures appear to be the order of the day, with the use of gain (equivalent to high ISO) meaning the subs don't need as much stretching. In this situation bit depth may not be too critical.

There are lots of great images on SGL from ASI1600 users and these will be far more eloquent than anything I can say!!

I am 100% certain that the 550D is 14-bit.

5 hours ago, beka said:

Hi All,

I was wondering if anyone has any experience comparing DSLRs with 12-bit pixel depth, like the Canon 700D, vs 14-bit ones like the Canon 800D. Might it be worth upgrading your existing 12-bit model for a 14-bit model? Also, what about the question of a cooled 12-bit camera like the ZWO ASI1600 compared to a 14-bit DSLR?

Cheers!

 

For that matter, the 700D is also 14-bit. You will notice a difference in processing, as you can stretch the image further before the histogram starts to fragment, if that is a good way of describing it. But that is mainly apparent when you only have low numbers of subs. With the ASI1600 you gain resolution back in stacking (to an extent) due to the large number of subs used. That's not usually the case with a DSLR. But it's not a problem, as all Canon DSLRs sold since the 1000D have been 14-bit.
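A rough way to quantify that histogram fragmentation: a linear stretch of N-bit data to a 16-bit display range can only land on every 2^(16-N)-th output level, so the histogram shows gaps ("combing"). A small sketch of just that arithmetic:

```python
# Sketch of the histogram "fragmentation" (combing) from stretching:
# linearly stretching N-bit data to fill a 16-bit range leaves gaps
# between the achievable output levels.

def histogram_gap(bit_depth, display_bits=16):
    """Spacing between occupied histogram bins after a full linear stretch."""
    return 2 ** display_bits // 2 ** bit_depth

print(histogram_gap(12))  # 16 - only every 16th bin can be occupied
print(histogram_gap(14))  # 4  - four times finer, so combing shows up later
```

Stacking many subs fills those gaps back in, which is the point about the ASI1600 above.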

Edited by Adam J
8 minutes ago, tooth_dr said:

Just out of curiosity as you are quite knowledgable, am I better using my modded 1000d or the 40d I bought a few months ago?

I would not expect them to be very different; they both use the exact same 12-bit, 10.1 MP sensor and DIGIC III processor. I would personally be tempted to use the 40D though, as it should run slightly cooler (larger body) during longer imaging sessions. The reason to use the 1000D would be if you wanted to save weight. It's also much easier to self IR-modify the 1000D.

I don't know about knowledgeable, but I do tend to retain lots of useless information, lol... but thanks.

Edited by Adam J
2 minutes ago, Adam J said:

I would not expect them to be very different; they both use the exact same 12-bit, 10.1 MP sensor and DIGIC III processor. I would personally be tempted to use the 40D though, as it should run slightly cooler (larger body) during longer imaging sessions. The reason to use the 1000D would be if you wanted to save weight. It's also much easier to self IR-modify the 1000D.

I don't know about knowledgeable, but I do tend to retain lots of useless information, lol... but thanks.

I self-modded the 1000D when I bought it many moons ago; the 40D is stock.

9 hours ago, beka said:

Hi All,

I was wondering if anyone has any experience comparing DSLRs with 12-bit pixel depth, like the Canon 700D, vs 14-bit ones like the Canon 800D. Might it be worth upgrading your existing 12-bit model for a 14-bit model? Also, what about the question of a cooled 12-bit camera like the ZWO ASI1600 compared to a 14-bit DSLR?

Cheers!

 

The Canon 700D is also 14-bit: https://www.dxomark.com/Cameras/Canon/EOS-700D---Specifications

Mark


Hi everyone,

To those of you who pointed out that the Canon 700D is 14-bit: sorry, I stand corrected. But the issue might still be relevant to those who do have 12-bit cameras. Thanks for all the input.

Best


What is the real-life implication of having extra 'bits'? I know that for every extra bit you get twice as many tonal levels, but do you end up benefiting from this in astrophotography? I can see how this would help in landscape and portrait photography, where the tonal range needed is far greater, but what about here? The ASI1600M, which seems to be a popular CMOS camera at the moment, only has 12 bits.

2 hours ago, Rico said:

What is the real-life implication of having extra 'bits'? I know that for every extra bit you get twice as many tonal levels, but do you end up benefiting from this in astrophotography? I can see how this would help in landscape and portrait photography, where the tonal range needed is far greater, but what about here? The ASI1600M, which seems to be a popular CMOS camera at the moment, only has 12 bits.

There's a greater benefit in astrophotography, where dynamic range is really important and you may wish to stretch a very short part of the overall tone curve much more dramatically than you would in 'normal' photography.

Although the difference is mitigated by stacking: four 12-bit images = one 14-bit, and the low noise of the ASI1600 lets you exploit the faint signals more.
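As a toy illustration of that rule of thumb, treat each sub as an ideal photon count clipped at the ADC ceiling (this ignores noise, gain and read-out effects, which later posts in the thread cover; the photon rate is made up):

```python
# Four short 12-bit subs, summed, span the same numeric range as one
# long 14-bit exposure - the arithmetic behind "4 x 12-bit = 14-bit".

MAX_12BIT = 2 ** 12 - 1   # 4095
MAX_14BIT = 2 ** 14 - 1   # 16383

def expose(photon_rate, seconds, adc_max):
    """Idealised sub: photon count clipped at the ADC ceiling."""
    return min(round(photon_rate * seconds), adc_max)

rate = 250.0  # photons per second on one pixel (illustrative)

single_14bit = expose(rate, 60, MAX_14BIT)                          # one 60 s sub
stacked_12bit = sum(expose(rate, 15, MAX_12BIT) for _ in range(4))  # 4 x 15 s

print(single_14bit, stacked_12bit)  # 15000 15000
```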

 

Edited by Stub Mandrel
23 hours ago, tooth_dr said:

I thought the 40D was 14-bit though?

That is actually quite interesting, as it's listed in a couple of places as 12-bit and in other places as 14-bit. I suspect it is probably 14-bit... but unless it's modified, the 1000D is still the way to go.

2 minutes ago, Adam J said:

That is actually quite interesting, as it's listed in a couple of places as 12-bit and in other places as 14-bit. I suspect it is probably 14-bit... but unless it's modified, the 1000D is still the way to go.

Thanks again Adam. :hello2: So the big Q: should I modify it, or just sell it and put the money towards a cooled CMOS?

Edited by tooth_dr
14 hours ago, Adam J said:

That is actually quite interesting, as it's listed in a couple of places as 12-bit and in other places as 14-bit. I suspect it is probably 14-bit... but unless it's modified, the 1000D is still the way to go.

Why waste time speculating when the info is only one quick search away?
http://web.canon.jp/imaging/eosd/eos40d/specifications.html

I think much of the online confusion stems from the 400D having a 12-bit raw format.

Edited by glappkaeft
21 hours ago, Stub Mandrel said:

Although the difference is mitigated by stacking: four 12-bit images = one 14-bit...

 

I've read this and similar claims a number of times now, but what is the technical justification? Has someone explained this elsewhere?

At the end of the day, a 14-bit image has got to be better than a 12-bit image, all other things being equal of course. Whether or not this is noticeable to the eye is another matter.

2 hours ago, fireballxl5 said:

I've read this and similar claims a number of times now, but what is the technical justification? Has someone explained this elsewhere?

At the end of the day, a 14-bit image has got to be better than a 12-bit image, all other things being equal of course. Whether or not this is noticeable to the eye is another matter.

Not necessarily. When it comes to proper astro cameras whose RAW formats are actually raw, the pixel value is simply a count of the electrons in the pixel's electron well (1 electron per photon detected). If the electron well cannot hold more than 4096 (2^12) electrons, then there is no difference between 12- and 14-bit data. When it comes to DSLRs, the camera sensor specs, like the size of the electron well, are not public, and the camera does a lot of processing even on RAW images that depends on the camera settings (ISO, etc.). In that case it is tricky, without a good test setup, to tell whether a specific camera gets any advantage from a higher bit depth analog-to-digital converter (ADC) and RAW format. The smart money is probably that most of them do benefit, based on the data available from CCDs with similar pixel sizes, but it's hard to know for sure.

Fun fact: many astro cameras that output 16-bit FITS or similar don't use the entire 16-bit range. For instance, the popular KAF-8300 CCD has a well depth of 25,500 electrons, which can be stored using just 15 bits, and the small-pixel Sony chips are in the 9-22 thousand electron range (14-15 bits).
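A quick check of those figures, assuming unity gain (1 electron = 1 ADU):

```python
import math

def bits_needed(full_well_electrons):
    """Smallest ADC bit depth that can represent every count from
    0 to full_well at unity gain (1 electron = 1 ADU)."""
    return math.ceil(math.log2(full_well_electrons + 1))

print(bits_needed(25500))  # 15 - the KAF-8300's 25,500 e- well
print(bits_needed(9000))   # 14 - small-pixel Sony chip, low end
print(bits_needed(22000))  # 15 - small-pixel Sony chip, high end
```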

 

Edited by glappkaeft

Here's another way to look at the problem. I use 12 bits all the time for astro imaging on my Sony A7S. For my long exposures, the sky background is the main source of noise in the image. Since this noise is also sufficient to dither the step size of the 12-bit quantisation, I would gain absolutely nothing by using 14 bits instead.
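The dithering point can be simulated: when the per-frame noise is a few times larger than one ADC step, stacking recovers the underlying level just as well with the coarse 12-bit step as with a 14-bit one. The signal level, noise sigma and frame count below are made up for illustration:

```python
import random
import statistics

random.seed(1)

true_level = 1000.37   # underlying "analog" level, in 12-bit ADU
noise_sigma = 3.0      # per-frame noise, in 12-bit ADU (>> one step)
frames = 4000

def stacked_mean(step):
    """Mean of many noisy frames quantised with the given step size."""
    return statistics.mean(
        round(random.gauss(true_level, noise_sigma) / step) * step
        for _ in range(frames)
    )

err_12bit = abs(stacked_mean(1.0) - true_level)    # 12-bit step
err_14bit = abs(stacked_mean(0.25) - true_level)   # 14-bit step (4x finer)

# Both errors come out tiny compared with one 12-bit ADU: the noise
# already dithers the coarser quantisation, so the extra bits buy nothing.
print(round(err_12bit, 3), round(err_14bit, 3))
```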

Mark

Edited by sharkmelley
6 hours ago, fireballxl5 said:

I've read this and similar claims a number of times now, but what is the technical justification? Has someone explained this elsewhere?

Sorry, I just noticed that I missed answering your main question. The 'four 12-bit images equal one 14-bit image' claim is not technically true, although it is true-ish in many circumstances. It's one of those common but potentially dangerous mental models people use for stacking. The right way to do this is to look at the actual signal and signal-to-noise ratio (SNR), and it is not as scary as it sounds.

In technical terms, the signal is just the number of detected photons. The SNR is, as the name implies, the ratio between the signal and the noise, and it is the value that isn't good enough when you look at an image and think it looks noisy. The noise in turn is equal to the square root of the signal (the actual nature of noise, and why the noise of a signal that behaves like a Poisson-distributed statistical process equals the square root of the signal, is a bit overkill for this post). This turns out to be pretty nice for doing calculations, since it also means that the SNR is equal to the square root of the signal (SNR = signal/noise = signal/sqrt(signal) = sqrt(signal)).
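That last identity is the whole toolkit in two lines of code:

```python
import math

def shot_noise_snr(photons):
    """SNR of a pure photon count: signal / sqrt(signal) = sqrt(signal)."""
    return photons / math.sqrt(photons)

# Quadrupling the collected photons doubles the SNR:
print(round(shot_noise_snr(9000)))   # 95
print(round(shot_noise_snr(36000)))  # 190
```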

Here is one case where the 4 x 12-bit = 14-bit model would be close to true. Take two cameras identical in every way (pixel size, quantum efficiency, etc.) except that one (Camera A) has pixel wells that can hold 36,000 electrons and the other (Camera B) has pixel wells of 9,000 electrons, and test them in the same telescope. If we take a 1-minute exposure with Camera A in which one specific pixel is just saturated (i.e. has converted 36,000 photons to electrons and filled the well to capacity), the resulting value will fit into 14 bits with a little room to spare (assuming a gain of a bit over 2 electrons per ADU). Since Camera B has smaller electron wells, the same pixel would become saturated after detecting 9,000 photons, which would take only 15 seconds. Camera B's image will fit into 12 bits at the same gain.

If we process these images the same way but compensate for the lower values in the image from Camera B (i.e. multiply its pixels by 4), we will get very similar-looking images, except the image from Camera A will look less noisy, since it has a higher signal (36,000 photons vs 9,000 for the nearly saturated pixel) and thus also a better SNR (sqrt(36000) vs sqrt(9000), or approx. 190 vs 95). However, if we grab four 15-second exposures with Camera B and add them, the nearly saturated pixel in the stacked image will have a photon signal of 9,000 + 9,000 + 9,000 + 9,000 = 36,000 (fits into 14 bits), just like the 1-minute exposure from Camera A. If we ignore the fact that reading four images from the sensor instead of one introduces more of the unwanted readout and offset signals (pretty small sources for longer exposures), the two images would be identical.

Edited by glappkaeft
added some missing words

Out of curiosity, I've looked up figures for my 10D and 450D.

Figures with a dash are from http://www.sensorgen.info/

Figures with an equals sign are my calculations. Dynamic range is calculated from saturation capacity (well depth), NOT from bit count.

10D

QE - 22%

Min. read noise - 8.6 electrons

Saturation capacity - 38,537 electrons

Bit depth 12 - 4096

Electrons per bit = 9.40

Photons per bit @ 22% QE = 43

Photons to saturate = 1,657,091

 

450D

QE - 33%

Min. read noise - 3.6 electrons

Saturation capacity - 26,614 electrons (rather fewer!)

Bit depth 14 - 16,384

Electrons per bit = 1.62

Photons per bit @ 33% QE = 5

Photons to saturate = 878,262

 

So although the 450D has a smaller well depth, it makes more efficient use of its electrons and vastly more efficient use of its photons. This is partly offset by the read noise, but the difference is less than it appears because the 450D has 50% better QE.

So if the 450D gets 1000 photons,  the signal will be:

1000/5 = 200 bits +  2 bits read noise.

For the 10D getting 1000 photons the signal will be

1000/43 = 23 bits + 1 bit read noise
 

Clearly the higher read noise of the 14-bit camera is totally irrelevant in the context of the far greater resolution.

Also, although the 10D has lower QE and greater well depth (saturation capacity) it can only take twice as many photons.

Taking the ratio between  1 photon count and a full well

The 450D has a dynamic range of 52dB

The 10D has a dynamic range of 35dB

 

Even allowing for the better QE of the 14 bit camera, I think this explains why my 14-bit 450D knocks the 10D into a cocked hat.

It's worth remembering that cameras don't actually use the full bit count available, so the difference is probably even more marked.
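For comparison, here is the same data run through a common engineering definition of dynamic range, full well divided by read noise, expressed in stops. Note this is a different definition from the ratio-to-one-photon figure above, which is one reason published DR numbers for the same sensor disagree; the well-depth and read-noise inputs are the sensorgen values quoted earlier:

```python
import math

# Dynamic range as full well / read noise, in stops (powers of two).
# Well-depth and read-noise figures are the sensorgen values quoted above.

def e_per_adu(full_well, adc_bits):
    """Electrons per ADU step at base ISO."""
    return full_well / 2 ** adc_bits

def dr_stops(full_well, read_noise):
    """Dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well / read_noise)

# 10D: 38,537 e- well, 8.6 e- read noise, 12-bit ADC
print(round(e_per_adu(38537, 12), 1), round(dr_stops(38537, 8.6), 1))  # 9.4 12.1
# 450D: 26,614 e- well, 3.6 e- read noise, 14-bit ADC
print(round(e_per_adu(26614, 14), 1), round(dr_stops(26614, 3.6), 1))  # 1.6 12.9
```

By this definition the 450D still comes out slightly ahead; sensorgen's own stop counts presumably use read noise measured at a different ISO, hence the different ordering.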

23 minutes ago, Stub Mandrel said:

 

Taking the ratio between  1 photon count and a full well

The 450D has a dynamic range of 52dB

The 10D has a dynamic range of 35dB

 

Sensorgen actually gives the 14-bit 450D a lower DR than the 12-bit 10D:

  • 10D:  DR = 11.0 stops
  • 450D:  DR = 10.4 stops

Do they use a different definition of DR?

Mark

Edited by sharkmelley
1 hour ago, Stub Mandrel said:

So although the 450D has a smaller well depth, it makes more efficient use of its electrons and vastly more efficient use of its photons. This is partly offset by the read noise, but the difference is less than it appears because the 450D has 50% better QE.

So if the 450D gets 1000 photons,  the signal will be:

1000/5 = 200 bits +  2 bits read noise.

For the 10D getting 1000 photons the signal will be

1000/43 = 23 bits + 1 bit read noise

Firstly, when you say bits, do you mean ADU ("Analog-to-Digital converter Unit", AKA pixel value)? Secondly, you are not calculating the signal; you're calculating the pixel values, which is something we generally already know.

Signal is simply the number of detected photons. So either 1000 for both cameras (if you mean that the photons were all detected) or 330 vs 220 (if the photons hit the pixel and we then take QE into account). Note that the lower resolution of the 10D sensor means that its pixels are significantly larger, so they will be hit by almost exactly twice as many photons, everything else being equal, so the 10D would come out ahead per pixel even with its lower QE. The larger electron well is also due to the larger pixel size. These factors explain why older cameras like the 10D can do surprisingly well in practice.

The main problem for the 10D is that the electron well is so much larger than what the 12-bit RAW format can handle. This could lead to issues like banding if you stretch the data enough, and other quantisation problems.

What you really want to do is get the signal (the property that can be compared across different equipment) from the pixel value. If you know the processing done in the camera, this can be very easy. For astro cameras it is often trivial: there is no hidden processing and the gain is usually 1 (unity gain, 1 electron = 1 ADU), so in most cases the pixel value is the same as the number of photons. In the other cases you just look up your sensor's gain (if the documentation or camera driver doesn't provide this information, you should complain) and multiply or divide by it. Serious imaging software will read the needed data from the camera driver or let you input it.

With DSLRs it gets tricky: the gain (electrons per ADU, the thing you assumed was 9.4 for the 10D) will vary depending on the ISO setting. Sensorgen says the saturation level of the 10D is 2,308 electrons at ISO 1600 vs 38,537 at ISO 100, a clear sign that the gain is very variable. No DSLR maker makes this information public, and there will also be other processing done, some of it non-linear. For example, there is undoubtedly some sort of dark current and/or bad-pixel-map processing going on. In these cases it's probably best to give up if you want an accurate value usable for comparisons.

Pixel values can be used as a proxy for signal for a specific setup; they're just not comparable between different equipment and settings.
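The conversion itself is a single multiply once you actually know the gain. A minimal sketch; both gain values below are illustrative, not measured for any real camera:

```python
def adu_to_electrons(adu, gain_e_per_adu):
    """Recover the comparable signal (electrons) from a raw pixel value."""
    return adu * gain_e_per_adu

# Unity-gain astro camera: the pixel value already is the electron count.
print(round(adu_to_electrons(1200, 1.0)))  # 1200
# Hypothetical DSLR at base ISO with a gain of ~9.4 e-/ADU:
print(round(adu_to_electrons(1200, 9.4)))  # 11280
```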



