Everything posted by vlaiv

  1. Indeed - when I say there are different methods of resizing, I mean interpolation. Binning is a sort of interpolation algorithm, but it works only for integer downsampling factors. In fact, 2x2 binning is mathematically equivalent to a 50% size reduction plus a (0.5px, 0.5px) translation using bilinear interpolation (the average of 4 adjacent pixels is the same as linearly interpolating at the point where the corners of all 4 pixels meet - if pixels are considered to be squares) - see the small sketch below. I mentioned one advantage of binning - no correlation introduced. Don't know how much you know about interpolation, but different interpolation algorithms use more than just the surrounding pixels to calculate a value when resampling - bilinear uses the 4 nearest pixels, but bicubic uses 16 pixels around the wanted point (a 4x4 matrix - you need 4 values to fix a cubic function, 3 values to fix a quadratic function and only 2 to fix a linear function). There are of course other interpolation methods and each of them will have different properties with respect to detail in the image and impact on noise.
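Here is a minimal numpy sketch of the point above (variable names are mine, just for illustration) - 2x2 binning is simply averaging each block of 4 adjacent pixels, which is exactly bilinear interpolation evaluated at the corner shared by those 4 pixels:

```python
# Minimal sketch: 2x2 binning as averaging non-overlapping blocks of 4 adjacent pixels.
import numpy as np

img = np.random.rand(4, 6)  # toy "image" with dimensions divisible by 2

# Bin 2x2: group pixels into 2x2 blocks and average each block
binned = img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

# The first binned pixel is just the mean of the 4 pixels meeting at that corner,
# i.e. bilinear interpolation evaluated exactly at the shared corner point.
print(np.isclose(binned[0, 0], img[0:2, 0:2].mean()))  # True
```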
  2. Don't have much info on that mount. Most of the info on mounts that I have comes from reading the specs and, more importantly, from first-hand experience, and I don't remember reading any first-hand reports on that mount from people doing AP. That does not mean it is not a good mount. I know people owning its older brother - the AZEQ6 - who are perfectly happy with their mounts. AZEQ6 performance is similar to that of the EQ6 - maybe a bit better than the old EQ6 and in line with the EQ6-R, because both the AZEQ6 and EQ6-R have belt drive. The AZEQ5 also has belt drive, so that is a plus. I've read once that someone found the lead that connects the RA and DEC axes to be awkward.

The fact that this mount offers both AZ and EQ operation does make it preferable in a rather specific situation - where you want to use it as both an imaging and a visual mount (I'll address this point later with some additional info).

As for imaging performance, I can only guess what it will be like based on the numbers. This page has information on the internals of the mount: http://eq-mod.sourceforge.net/prerequisites.html One thing that slightly concerns me is the 0.25" stepper resolution - it is almost twice as coarse as an HEQ5 class mount and only a bit finer than an EQ5 class mount. With stepper motors this limits how well the mount can be guided. I think that realistically you can expect about 1" RMS guiding with it, maybe down to 0.8" RMS on a good night - which means the maximum resolution you can go to with such a mount is about 1.5"/px - 2.0"/px. That is ok for a smaller scope - up to about 700mm focal length with sensibly sized pixels (or some sort of binning to get you to that resolution).

It looks like this mount has PPEC, and that is good. My HEQ5 does not have this feature and I use VS-PEC in EQMod - which means that I can't really use my mount for both AP and visual, because I would like to be able to use the mount for visual with only the hand controller, without dragging a laptop outside. But that messes up my periodic error correction since it is not permanent, and I need to park to the exact position each time I finish using the mount, otherwise PEC will go out of sync (no encoders on the mount). The AZEQ5 has encoders and PPEC, so it is well suited to both roles at the same time (well, not at the exact same time - but you know what I mean - you don't need to do anything special if you image one night and observe the next in AZ configuration).

If you want better precision in imaging then go with either the HEQ5, EQ6-R or AZEQ6 - all of which are heavier mounts but offer better precision for AP. Mind you, a stock HEQ5 / EQ6 is going to vary greatly sample to sample in its performance, and only once you tune and mod it will it deliver its best performance. I stripped my HEQ5 and changed all the bearings, did the belt mod, replaced the saddle plate and changed the tripod, and now it guides at 0.5" RMS - so if you are going to keep things stock, I'm not sure the price premium (of the EQ6 class) and the weight are worth it if you have your heart set on the AZEQ5. Hope this helps
  3. This is not an easy topic, and in fact many will point out (me included) that the mount is the most important piece of kit when it comes to imaging. In the budget category - and all the mounts you've listed are budget category mounts - we can roughly say there are two important things that you need to pay attention to:
- weight capacity
- accuracy of tracking

Btw, I said budget mounts, but that does not mean all of these are poor mounts - a lot can be accomplished with such mounts. I have an HEQ5 for example and it serves me very well within its limits.

Weight capacity - you need to have at least 50% of headroom with respect to all the imaging gear to be placed on the mount (this is just a recommendation - the more stuff you put on the mount, the greater the chance something will not work as it should: the mount will not track properly, wind will be more of an issue, and so on). My HEQ5 is rated at 15-18kg (depending on the source) and I've put as much as 15kg on it for imaging and it worked, but I did not feel comfortable with that much weight on it. Nowadays I limit the weight on the mount to about 10-11kg (maybe 12kg, but no more), so keeping about 50% of headroom is very sound advice.

Accuracy of tracking - most of these budget mounts will have issues unguided. Most suffer from periodic error and benefit greatly from guiding. The longer the focal length (or to be more specific, the higher the sampling rate you use), the shorter your exposures will need to be to avoid star trailing. The exception to the above are mounts with encoders, and in this price range, as far as I know, only iOptron offers encoders. Encoders are really expensive and it is much cheaper to get a guiding kit. Guiding also solves some issues that encoders can't (at least not without very sophisticated software and the building of a sky model - something that is done in permanent setups, as it is too time consuming to do each session).

EQ3 / EQ35 are really very basic mounts that can hold a camera + lens and very small scopes. You will be very limited in exposure length with those mounts and will benefit greatly from guiding. The EQ5 is a step up from the above two in terms of weight capacity and performance, but the same holds - smaller / lighter scopes (up to say 6-7kg of total weight; btw when I say total weight, that does not include counterweights, it means scope, camera and any other gear attached to it) and, if possible, guide it. I would personally avoid the AVX mount as I've read that it suffers from some issues that make it a less than desirable imaging platform. Take this with a grain of salt as I've never even seen one live, let alone used one. iOptron mounts are said to be good, and I've read many excellent reports on their performance. I would personally probably go for an iOptron with encoders (EC model) if I didn't plan to guide. Since you are in Texas, you'll probably get better prices on iOptron than we get here in Europe, so that is a plus. If you can, get the iOptron CEM40EC - it is over your budget at about $3000, but it is an HEQ5 class mount with 18kg payload, it has encoders, and it is lighter. The CEM25EC is an EQ5 class mount - so keep the weight up to 8-9kg on it (it has a 13kg load capacity), and again it has encoders - which you want if you don't want to guide.

That is about mounts, now about scopes and camera. The ASI120MC is a very good planetary camera and a very good guide camera. You can use it for EAA/EEVA with the 130SLT, but that is about it.
You can try imaging with it - I've done it and even managed some decent images, but that sensor is very small. If you are serious about imaging, you will need better imaging gear. Look into a Skywatcher ED80 + flattener / reducer, or a Skywatcher 130PDS newtonian and coma corrector, with a Canon DSLR. Something like a used Canon 450D or similar will be a very good option. Imaging is a rather serious business if you want to do it right and there are many aspects to it; the best thing you can do is a lot of research before you commit to particular gear. The book "Making Every Photon Count" is said to be very good for anyone planning to get into astro imaging. Of course, SGL is a place where you can read a lot about all the topics that interest you and ask questions, and hopefully get decent answers.
  4. I think that you have all the gear that you need to do this - it just takes a bit of fiddling with the data that you capture in order to turn it into something useful. You have the ASI120MC, and I presume you have the small lens that comes with it - the all-sky lens? If not, it is really not that expensive to get one. With it you can create images like this: That gives you coverage of the whole sky from a certain vantage point. People sometimes use such images to create LP maps for their location - like this: You could mount such a system on your car, gather similar images from different locations, and then it is down to interpreting that data - you could do some sort of 3D visualization, as for each location you have the total LP coming from each direction - multiple such points would help build a model of LP in a certain volume of sky, and then you could try to simulate the ground lighting that produces such sky glow. I mean, this is advanced stuff as you would need to model scattering in the atmosphere and the types of illumination from the ground, but just a set of images like the one above, with a map and the directions of the main sources of LP - and the coordinates where those directions intersect - could point out a particular LP source (a bit like triangulation of radio sources - see the small sketch below). For example, in the above image it is evident that the strongest LP comes from about 350° (which is not surprising, as that is the direction of my home town from the location where the image was taken, and it glows bright) - but in your images that would be one line of the intersection on the map.
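To make the triangulation idea concrete, here is a minimal flat-map sketch (the coordinates and bearings below are made up purely for illustration) that intersects two LP bearing lines taken from two different vantage points:

```python
# Minimal sketch: intersect two bearing lines on a flat local map (small-area approximation).
import math

def bearing_to_direction(bearing_deg):
    # Bearing measured clockwise from north -> unit vector (east, north)
    rad = math.radians(bearing_deg)
    return (math.sin(rad), math.cos(rad))

def intersect(p1, bearing1_deg, p2, bearing2_deg):
    # Solve p1 + t1*d1 = p2 + t2*d2 for the crossing point of the two bearing lines
    d1 = bearing_to_direction(bearing1_deg)
    d2 = bearing_to_direction(bearing2_deg)
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel bearings, no single intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two observation sites on an arbitrary local grid (km), each seeing its strongest glow at some bearing
print(intersect((0.0, 0.0), 350.0, (10.0, 0.0), 315.0))
```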
  5. That 1.6 is related to the Nyquist sampling theorem and is approximately the sampling rate that corresponds to a given FWHM. If FWHM is expressed in arc seconds, then dividing by 1.6 will give you the sampling resolution in arc seconds per pixel. If FWHM is expressed in pixels, then dividing by 1.6 will give you the "pixel size" - or the factor to reduce your image by. In the above example, if FWHM is 4px then the ideal pixel size is 2.5 - so you need to reduce the image by a factor of x2.5. Binning is a form of resizing the image down and has some advantages over regular resizing down - it reduces noise better. Every resizing down of the image reduces the noise in the image, but different resizing methods reduce noise by different amounts. Binning just adds adjacent pixels together and forms one large pixel out of a group of 2x2 or 3x3 pixels. For this reason it can only resize down by an integer factor - either by x2 or x3 or x4, etc. (it can't reduce size by x2.3 for example). However, such reducing down has a very good effect on the noise - it reduces noise by x2, x3, x4 ... etc., and it is mathematically predictable in the way it changes noise (no correlation between pixels, always an exact improvement in noise and such), and it is therefore the preferred way to do things in astronomy (the science side of things). See the small noise sketch below.
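A minimal numpy sketch of that predictable noise improvement - 2x2 average binning of pure random noise cuts its standard deviation by a factor of 2:

```python
# Minimal sketch: 2x2 binning (averaging) reduces random noise by exactly x2.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(1000, 1000))   # pure Gaussian noise, sigma = 1

binned = noise.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(noise.std())    # ~1.0
print(binned.std())   # ~0.5 - noise improved by x2, as predicted for 2x2 binning
```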
  6. Hi, you need not worry about display size - it will always be "fit the screen" by default, regardless of the resolution of the underlying image. Some people, me included, like to view an image at 1:1 setting or 100% zoom, even if that means panning around, and that is possible if you open the image by itself in the browser - I usually hit the right mouse button and do "open in new tab". At first the image will be scaled to fit the window again, but a simple click on it will expand it to full size and you can pan around. I mention the above because of what I'm about to say next. There is a proper resolution for an astronomy image, or rather a range of proper resolutions. Sometimes with modern cameras and larger telescopes (longer focal length) people make an image that is just too zoomed in when viewed at 100%. Stars are no longer small dots but rather "balls" suspended in space and everything starts to look blurry at 100% zoom. It will still be a nice image to look at when it is viewed scaled to screen size. Such images are worth downsampling to an appropriate size as it will make viewing at 100% more enjoyable. Luckily there is a simple technique to determine the proper sampling rate for the image - you need to measure star FWHM (Deep Sky Stacker gives you this information for each frame) and divide that by 1.6. If your star FWHM is 4px then 4/1.6 = 2.5 - you need to resize your image to be 2.5 times smaller. If you get a number that is less than one, just leave the image as is - don't enlarge it as it will be blurry (enlarging won't bring missing detail back). A small sketch of this rule follows below.
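Here is a minimal sketch of that rule of thumb (the helper function and example numbers are mine, just for illustration):

```python
# Minimal sketch: proper down-sample factor from measured star FWHM (in pixels).
def downsample_factor(fwhm_px):
    # Rule of thumb from the post: reduce by FWHM / 1.6,
    # but never enlarge - if the factor comes out below 1, leave the image alone.
    return max(fwhm_px / 1.6, 1.0)

for fwhm in (4.0, 2.5, 1.2):
    factor = downsample_factor(fwhm)
    print(f"FWHM {fwhm}px -> resize down by x{factor:.2f} "
          f"(e.g. 3000px wide becomes {int(3000 / factor)}px)")
```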
  7. Why does this bother you? That is perfectly normal for a sensor that is not cooled - you take dark frames, and after dark calibration you should not have any hot pixels remaining; if you do, dithering and sigma reject will sort them out. Btw, here is a screenshot of a piece of my dark frame (in fact a master made out of 16 subs cooled to -20C): Plenty of hot pixels there as well. It is not about how many hot pixels you have - it is about how hot they are and whether you can calibrate them out. Can you post a single raw dark sub, or better, maybe a couple, so we can try dark/dark calibration and see if these hot pixels calibrate out?
  8. Probably. I mean - as DIY, sure, if that is a challenge for you and you like that sort of challenge. What I'm trying to say is that you should not go for it based solely on the expectation that it will provide good tracking for such a large scope because it is a better design than an EQ platform. Unless you are very, very skilled at building things, odds are that you will have large PE and only planetary and lucky DSO imaging will be possible anyway.
  9. Then have a look at this: It is not fork mounted and needs to be rewound for each use - you get about 30 minutes to 1 hour in one go - but it is much easier to make (and cheaper).
  10. An EEVA section exists here on SGL, but the activity is often called EAA - Electronically Assisted Astronomy. EEVA stands for Electronically Enhanced Visual Astronomy (or similar, I'm not 100% sure). EEVA is a bit broader term than EAA as it includes night vision devices, whereas the original usage of the EAA term was the use of video cameras, and more recently CMOS cameras, viewing the recording on a monitor / computer screen. It is very close to planetary style / lucky DSO imaging. In planetary style lucky imaging, exposures are very short - on the order of 5 to 10 ms. For lucky DSO imaging, exposures are kept short at about 1-2s, while EEVA / EAA, or "live stacking" as it is sometimes called, uses exposures longer than that but still shorter than regular DSO imaging - from a few seconds up to a dozen or so seconds (sometimes people use half-minute exposures). The point of EEVA is to watch the image of the target build up in (near) real time - so you observe for a few minutes (and stack up to 30-40 short exposures) and then move on to a different target. This usually relies on goto and computer control to locate the next target, but in principle you can move the scope by hand. In any case, do search for EQ platforms, as that is going to be by far the easiest solution to either purchase or DIY. It will let you do most of the things mentioned here - planetary for certain, and lucky DSO imaging. Depending on the tracking accuracy of the EQ platform you might even be able to do EEVA. Another solution that you might want to try is a friction drive instead of a worm. That one has both advantages and disadvantages compared to a worm.
  11. Random noise is going to dither quantization provided it is of the proper magnitude with respect to the quantization step. That is one of the reasons why sensor designers leave a certain amount of read noise present, or rather "tune" read noise levels - to dither things. Here we are talking about the stacked image. Noise will drop as a function of the number of stacked subs, and at some point the noise will be too small to dither quantization. We are also talking about signal, and the fact that signal needs enough bits of precision to be properly recorded. If a signal has, for example, 5-6 bits of dynamic range, then it should really have at least 5-6 bits of storage to be recorded. If you give it 2-3 bits it will be posterized due to rounding.

In any case, here is what you've proposed: The first measurement is of the 32-bit image - a small selection of the background. The standard deviation of that patch is ~0.142745. The next measurement is of the whole image. The important thing to note is that here values go up to 12 bits or a bit less (<4096 because of the 12-bit camera, offset removed and flat calibration performed) - but the format is 32-bit float. This will be needed later to do the "conversion" of the noise. The third measurement is the small selection after converting the image to 16-bit format, and the last one is the full image at 16 bit. We can now compare noise levels in both images. The converted noise level from the 32-bit image would be 0.142744802 * 65535 / (3401.327392578 + 1.790456772) = ~2.748885291, while the measured noise level is 2.76030548 - a difference of about 0.4%. Many will say that an increase in noise of less than one percent is not significant - but we don't know how the distribution of the noise changed - and this was due to the bit depth conversion alone.

Now let's examine something else that is important - here is a screenshot of the section that I'm examining now: I tried to select a part of the background where there are no stars but there is variation of brightness in the nebulosity. That part of the image has something like 6-7 in terms of dynamic range - it has a max value of 1.16 and noise of 0.167, so the dynamic range is ~6.95. This is of course in floating point numbers, so there is plenty of precision to record all the information, but what will happen if we convert it to 16 bit? We have seen that the conversion amounts to multiplying by about x19 (65535/~3402), and here we have a total range of about 1.5 (from -0.37 to 1.16), so the converted range will be ~28.9. That is less than 5 bits total (not dynamic range) - we have at least 2 bits lost for that data, or x4 fewer levels (128 vs 32 or fewer levels). This is why my above example shows posterization in the faint areas - because it is really there. A small sketch of this effect is below.
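Here is a minimal numpy sketch of that conversion effect (the numbers are loosely modeled on the measurements above, not the actual data): scaling a faint float-valued patch into the 16-bit range and rounding both nudges the noise statistics and collapses the patch into only a few dozen distinct levels:

```python
# Minimal sketch: what converting a faint 32-bit float patch to 16-bit integers does.
import numpy as np

rng = np.random.default_rng(1)
max_pixel = 3401.3                         # brightest pixel in the image (sets the scale)
patch = rng.normal(0.4, 0.1427, 20000)     # faint background patch (float values)

scale = 65535.0 / max_pixel                # ~x19 stretch to fill the 16-bit range
quantized = np.round(patch * scale)        # what saving as 16-bit integers effectively does

print(patch.std() * scale)                 # noise the 16-bit version "should" have
print(quantized.std())                     # noise it actually has after rounding
print(np.unique(quantized).size)           # only a few dozen distinct levels remain
```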
  12. Why would you have your stepper running at 200rpm? I think you are approaching this the wrong way. If you want to determine a good reduction ratio for a stepper motor driving RA, you need to think in terms of resolution rather than speed. Stepper motors have about 200 steps per revolution (1.8 degrees per step) and each of those steps can be divided into a certain number of microsteps - let's say you will do 64 microsteps per step. You also want good resolution, about 0.1 arc second per step. This means that a full circle will have 360 x 60 x 60 x 10 = 12,960,000 steps - that is, you need 12,960,000 steps of 0.1" each to make a full revolution. With 200 steps per revolution and 64 microsteps, one revolution of the stepper motor will have 200 x 64 = 12,800 steps. The reduction you will need is therefore 1012.5 : 1 (see the small calculation below). In fact, the HEQ5 equatorial mount has something like 0.14" per step with a worm gear system having a 705:1 reduction. If you are going to use a dob mount, the base that enables azimuth movement has a diameter of at least half a meter. That means that its circumference will be at least a meter and a half. You will have no problem getting 1000 worm teeth on such a diameter and using a simple screw with 1mm pitch to drive it, for a reduction of about 1000:1. In fact, this is what is commercially available for dob mounts in terms of EQ platforms, and I would recommend looking at one to see if it will satisfy your photographic needs. You did not mention what sort of photography you want to do. An EQ platform will be more than enough for EEVA and planetary, and even some short exposure, lucky type DSO imaging. Of course, if you are into DIY then this fork mount thing could be a nice project, but so would an EQ platform.
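The reduction-ratio arithmetic above, as a minimal sketch:

```python
# Minimal sketch: required reduction ratio for 0.1 arcsec per microstep.
steps_per_rev = 200         # full steps (1.8 deg per step)
microsteps = 64             # microsteps per full step
target_res_arcsec = 0.1     # desired sky resolution per microstep

sky_steps = 360 * 60 * 60 / target_res_arcsec   # 12,960,000 steps per full circle
motor_steps = steps_per_rev * microsteps        # 12,800 microsteps per motor revolution

print(sky_steps / motor_steps)                  # 1012.5 -> you need ~1012.5:1 reduction
```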
  13. Sure you can - just use a lower quality setting for the JPEG.
  14. It could help if you can sacrifice the high part of the range. Let's put it in simple numbers to explain what will happen. Imagine you do 1 minute vs 10 minute subs. The signal level in a 1 minute sub is at 2% of full well capacity. Stacking a bunch of such subs with the average method will leave the signal level at 2% (take a bunch of 0.02 values and average them - you will get 0.02). Similarly, in a 10 minute sub the signal will reach 20% of full well capacity, and again stacking with average will leave that at 20%. Signal at 2% will have about 10.3 bits of dynamic range, while signal at 20% will have about 13.7 bits of dynamic range (see the small calculation below) - clearly better, and there will be less posterization of the faint stuff. However, by using 10 minute subs you will blow out more cores - in fact you will saturate with a signal x10 weaker than in the 1 minute case. This is what it means to lose the high part of the range. If you try to mix in that high range at the linear stage you will suppress the lower range again - the only way you can mix in blown-out features is via layers in PS - the way Olly does it and often suggests it should be done (because it works for him with this approach).
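A minimal sketch of those dynamic-range figures, assuming a 16-bit (65536 level) full-well scale - that scale is my assumption for illustration, the exact full-well depth isn't stated above:

```python
# Minimal sketch: bits of dynamic range for signal at 2% vs 20% of full well.
import math

full_well_levels = 65536   # assumed full-well scale, for illustration only
for fraction in (0.02, 0.20):
    print(fraction, math.log2(fraction * full_well_levels))   # ~10.4 bits and ~13.7 bits
```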
  15. I believe there should be a difference, and just how much depends on your imaging workflow. Using very long exposures and a smaller number of them will not put all of the signal in the low range, and consequently it will be less posterized by the use of 16 bit. Here is an example of that happening - I used my H alpha stack (4 minute subs, 4h total, binned x2 for 1"/px sampling rate) in 32 bit, and the same image first converted to 16 bit. I used just one round of levels - the same on each: and here is the same done on the 16 bit version: See how posterized the faint regions become? This image is made out of, let's say, 4 x 16 = 64 subs (4 hours at 16 four-minute subs per hour), times another x4 samples per pixel from the bin x2 with the average method - so 256 samples per pixel. That is enough data with small signal to keep things at low values and show posterization. Maybe posterization won't be as bad with 30-40 ten minute subs.
  16. For 16 bit - I recommend against it on principle - it is limited in dynamic range. It will not be much of a problem if you, for example, have a high dynamic range image and you stretch it to an extent while in 32-bit format and then save it as 16 bit. Stretching "compresses" dynamic range and you don't lose much. That is "a trick" I used when working with StarNet++. It requires both stretched data and 16-bit format to remove stars, and when processing NB data I first do a stretch per channel - but only something like 1/4 of what I would normally stretch, since I want to stretch more later and denoise the data after removing stars, and also want to do channel mixing.

The problem with 16-bit data comes when you use it on your linear data to start working on stretching. We have seen above how limited 8-bit data really is. Using short exposures, which is common with modern sensors (CMOS in particular), and the fact that more and more people image in LP and will not benefit from long exposures, makes stacked images very "compressed" in the left part of the histogram - the low values. Imagine that all the truly interesting signal is in the lower 2-3% of the histogram (the left part). That means that this signal occupies only 2-3% of the 16-bit range. In values this would mean that this signal only has 65535/40 = ~1600 levels. Now we are down to about 10.5 bits - very close to 8 bits - and you will soon start losing detail in the faint parts of the image. An average galaxy has something like 7-8 magnitudes of dynamic range or even more, and guess what? 8 mag is about x1600 between the brightest and faintest part - or those same 10.5 bits (see the small calculation below).
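A minimal sketch of those two numbers side by side:

```python
# Minimal sketch: levels left in the lower ~2.5% of the 16-bit range,
# and the intensity ratio spanned by 8 magnitudes.
import math

low_end_levels = 65535 / 40                        # signal squeezed into ~2.5% of the range
print(low_end_levels, math.log2(low_end_levels))   # ~1638 levels, roughly 10.5-10.7 bits

print(10 ** (0.4 * 8))                             # ~1585 - the x1600 ratio for 8 magnitudes
```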
  17. Simply put - a 16-bit image does not hold enough information for what you obtain by stacking and calibration, but the more important thing is that it is a fixed point format. Which means that you not only limit the data per pixel to 16 bits, you also limit the total dynamics of the image to 16 bits. 32-bit floating point does not have that many more bits of precision per pixel - it is only 24 bits, so only 8 bits more than the 16-bit format - but it is floating point, which means it has a huge dynamic range, from roughly 1.4 x 10^-45 up to about 3.4 x 10^38 (source: https://en.wikipedia.org/wiki/Single-precision_floating-point_format).

What does this mean? Well, let's do a simple example - we stack by adding 4 subs from a 14-bit camera. We have a very bright star and we have background with no light pollution. The first is almost saturating the 14-bit range at 16384 and the latter is sitting around 0 (we have read noise so it is in some +/- read noise range, shifted by the offset, but let's ignore details for now). The star will add up to 16 bits (4 x 16384 = 65536 = 16 bits), while the noise around 0 will add up to be, again, noise around 0. Stacking increases the dynamic range of the whole image, besides individual pixels needing more precision. This creates a problem with fixed point representation because there is a fixed ratio between the strongest pixel and the weakest pixel - it is always only 16 bits in 16-bit format, or x65536. If you will, we can convert that into magnitudes and it is about 12 mags. You simply put a firm limit on the dynamic range of your image at 12 mags. If you record a signal with some intensity, a signal that is 12 mags fainter will be a single number - a constant value - and there won't be any detail (no variation in that single value).

In comparison, 32-bit floating point has 24 bits of precision per pixel (which means that you can stack 256 subs of 16 bits each until you start to need more "space"), or in other words the error due to precision will be 1 in 16,777,216. But more importantly, you can record much higher dynamics in your image - about 10^83, or in magnitudes, over 200 magnitudes of difference in intensity (see the small calculation below).
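A minimal sketch of those figures:

```python
# Minimal sketch: fixed-point vs floating-point headroom from the example above.
import math

print(4 * 16384)                  # 65536 - adding 4 nearly-saturated 14-bit subs fills 16 bits
print(2.5 * math.log10(65536))    # ~12 mag - the hard dynamic range limit of 16-bit fixed point

print(2 ** 24)                    # 16,777,216 - precision of the 32-bit float significand
print(2.5 * math.log10(1e83))     # ~207 mag - dynamic range representable in 32-bit float
```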
  18. Well, for starters, JPEG is a lossy format - which means that it alters your image (loses information). You can check that it does so by taking a regular 8-bit image saved as PNG and the same image saved as JPEG (even at the highest quality setting) and subtracting the two - you will not get a "blank" image. Here is an example: This is the famous Lena image (often used as a test image for algorithms) - left is the unaltered PNG, and right is the same PNG image saved as 100% quality JPEG (chroma sampling 1:1 and such). And here is what you get if you subtract the two: There is clearly something done to the JPEG image that makes it different from the original image (a small sketch of this check is below).

Now, let's do another experiment to see how a higher bit count fares versus 8-bit format. This is a single frame (binned to oblivion to pull the data out and make it small and easy to copy/paste) of a 1 minute exposure in 32-bit format, prior to stretching: This is exactly the same image, except converted to 8-bit format: So far, so good - not much difference, but let's stretch that data a bit and see what happens: Here is the 32-bit version with a very basic stretch (btw, the stretch is saved as a preset): Here is the same stuff in 8-bit format: Look at that grain and noise - that stuff was not in the above image. Clearly an 8-bit image can't take the same level of manipulation as a 32-bit image.
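Here is a minimal sketch of the PNG vs JPEG check described above (assumes Pillow and numpy are installed; the file names are placeholders for whatever test image you use):

```python
# Minimal sketch: round-trip an 8-bit PNG through best-quality JPEG and diff the two.
import numpy as np
from PIL import Image

original = Image.open("test.png").convert("RGB")              # placeholder file name
original.save("roundtrip.jpg", quality=100, subsampling=0)    # 100% quality, 1:1 chroma
reencoded = Image.open("roundtrip.jpg").convert("RGB")

diff = np.abs(np.asarray(original, dtype=np.int16) - np.asarray(reencoded, dtype=np.int16))
print(diff.max(), (diff > 0).mean())   # non-zero -> JPEG altered pixels even at "100%" quality
```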
  19. I'm probably one of those people. For me, this is the breakdown of bit formats and their usage: 16 bit - good only for raw subs out of the camera, and its usage should ideally stop there (I know that some people use 16-bit format because of older versions of PS, but no excuse really); 8 bit - good only for display after all processing has been finished; 32-bit floating point - all the rest.
  20. I can try, but I have no idea what is being discussed here (sorry, I did not read the thread's posts). I can see that it has something to do with Astrobin having issues, and you are mentioning file formats and file sizes?
  21. Don't get me wrong - I like the spikes when the image is created with a reflector. I was just pointing out that artificially added spikes tend to look bad compared to "natural" ones (in terms of what they look like and how they behave - artificial ones usually don't follow the laws of physics and look different than "natural" spikes). But yes, it is a matter of taste - some people probably like such spikes. #matter of taste
  22. Using any sort of interference filters? You certainly are asking for reflection trouble. The good/bad thing about it (depends on how you look at it) is that you don't really have much control over it, or rather you have no idea, for the most part, how your actions will affect the end result. One configuration might lead to very bad reflections, then you change something by a very small amount and the reflections are gone. This is because light's interaction with itself is a complex thing and depends on very short distances - on the order of the wavelength of the light in question (it is due to interference of light with itself). It could be that you will have reflections in a certain combination, but probably the best attitude to have towards that fact is: "Cross that bridge when we come to it ...".

In general, no. Sometimes you need to have your IR/UV cut filter "permanently" mounted, but most of the time having double stacked filters hurts your efforts unless you have a very specific reason to stack filters. In your above case it would probably hurt more than help. If you look at the transmission curves of the filters you are using together, you will see that they are redundant. In fact, here is a good example for and against having a stacked UV/IR cut filter: This is a comparison between CLS-CCD and CLS (plain or visual) transmission curves. The CCD version of the CLS filter does not pass any light below 400nm and no light above 700nm (same as a UV/IR cut filter would do) - so in case you are using the CLS-CCD filter, a UV/IR cut filter is not needed. In the case of the plain CLS filter, used mainly for visual, things are different - that one does not filter out light above 700nm. This is the IR part of the spectrum and the human eye can't see it, but the sensor can detect it, and refracting telescopes are not well corrected in that part of the spectrum. In this case you need a UV/IR cut filter.

I've shown you an example where you need to have a UV/IR cut filter combined (other cases include some RGB filters, and in general any filters that have "leaks" in the UV or IR part of the spectrum when you are using a refractor - then you need a stacked UV/IR cut filter), and an example where you don't need one - but does it hurt to have one? Well, it does. A bit - and again that will depend on the filters. First thing - more possibility of reflections. In your case this is minimized by the large spacing between filters. Second thing - you can see from the graph above that filters don't have 100% transmission and cause some light loss. If you don't need filters stacked, why block light more than you need to? 90% * 90% = 81%, so you can lose as much as 10% of the light when you stack filters. Third thing - filters are not ideal in optical performance: they distort light, and although that distortion is low and filters are usually 1/10 wavelength in wavefront aberrations, such aberrations compound together just like light loss - so why distort the wavefront more than you need to?

I want to address one more thing in the end - the distance of filters from the sensor. That is a sort of battle between two things - you want your filter close enough to the sensor so as not to introduce vignetting (which depends on sensor size, filter size and the speed of the telescope's light cone), but you also want your filters far enough away to reduce the impact of reflections. Reflections are always there - it is just about the amount of light that gets reflected and how concentrated that light is on the chip. By having the filter (or other source of reflected light) further away from the sensor, the reflected light reaching the sensor will be more out of focus and thus spread over a larger surface - which means each pixel will receive fewer photons, and if the level of photons from the reflection is below the noise floor, you will not see it in the image. Since you are using 2" filters, you can move your filter drawer away from the camera without much fear of introducing vignetting because of that. This means that you have some room for maneuvering if you get reflections from the filter in the drawer - you can always swap the filter drawer and extension tube positions in your diagram above - that moves the filter further away from the sensor and yet keeps the total distance between the FF/FR and the sensor the same.
  23. What you have depicted in your diagram is just the ideal case, and the ideal case is never going to happen in real life. Just the fact that one will be adding eyepieces and messing with the focuser (racking in / out) makes the center of gravity move around. Such a small deviation won't make the system unstable and prone to tipping over. If people have any issues with the center of gravity, they can always change it, and often do when using heavy EPs. Some eyepieces and coma correctors combined can have more than 1-1.5kg of weight, and people often attach a counterweight on the other side of the OTA to balance it and stop it from "dipping" down. In a similar way you can always add some weight to your dobsonian base to move the center of mass up or down if you think your rig is not stable enough.
  24. Not sure if I can give good advice on this as my experience is rather limited, but here are my observations on the EPs that you listed: I used the SW UWA 58deg 7mm and I also used a 6mm BCO (I see you have a BGO 6mm listed in your signature). There is simply no contest between the two. The SW UWA that I had was indeed much more "user friendly" in terms of observing comfort due to the longer eye relief - but that is it. It was noticeably less sharp overall and threw ghosts on bright targets like Jupiter with my 8" F/6 dob. I don't remember pushing it much in terms of faster scopes, although I'm certain that I used it with an F/5 refractor at the time; but since that refractor is an achromat and was not meant for high power views, I can't remember / comment on edge performance. I can only say that I don't remember any sort of disaster at the edge of the field. I don't have it anymore - I recently purchased the ES62 5.5mm and I find it very good. Again, my experience with it is very limited, but I'm happy with that EP in terms of ergonomics and field of view. Sharpness is also very good. I can't be 100% certain on that since I used it with a 4" Mak, and at F/13 it is certainly going to be easy on the EP, and I was pushing the magnification past what the Mak can really deliver - but the view did not fall apart and I was able to observe the moon regardless of the fact that the magnification went over x230. That EP was meant to be used in my other scopes (F/10 achro and F/6 newtonian) but I still haven't used it like that, so the above is very limited at best. I do have a feeling that it will be a good EP (I own a few more ES EPs - like the 11mm and 6.7mm from the 82° line, and those are very good EPs, but they are probably out of budget). In any case, from the list that you made, I would personally go with the ES62 5.5 (and I did, at some point not long ago) - but do bear in mind that this is based on fairly limited experience.
  25. That combination is going to be rather slow for EAA. Sampling is 0.62"/px (very high resolution) and the FOV is going to be tiny: You won't be able to fit the whole of M13 on the chip as it is 0.33° x 0.19°. If you want to use that camera, you will need some serious focal length reduction. A common thing to use is the x0.5 FR in 1.25" format from GSO (also branded under other names) - here it is from TS: https://www.teleskop-express.de/shop/product_info.php/info/p676_TS-Optics-Optics-TSRED051-Focal-reducer-0-5x---1-25-inch-filter-thread.html That item has a focal length of about 101-103mm, which means it should be placed at about 51mm from the sensor to give you a x0.5 reduction factor. It will illuminate a sensor the size of the 385 chip, so you are good there. You can in fact place it further away and get even more reduction - the formula is reduction = 1 - distance / 102, so for a reduction of x0.4 you would need to place it at: 0.4 = 1 - distance / 102 => distance / 102 = 1 - 0.4 => distance = 102 * 0.6 = ~61.2mm (see the small sketch below). Since this reducer is a simple two element one, it will have some edge of field aberrations. Just how much will depend on how much reduction you ask of it.

An alternative is to use a dedicated reducer for that scope - there is a new one that will give you x0.4 reduction and is designed for SCT scopes, but it is very expensive, here it is: https://www.teleskop-express.de/shop/product_info.php/info/p11425_Starizona-Night-Owl-2--0-4x-Focal-Reducer---Corrector-for-SC-Telescopes.html x0.4 reduction is going to give you 500mm focal length and the sampling rate will be 1.55"/px - much better, and I would say the upper limit for EEVA/EAA applications (on most regular nights you don't benefit from going higher res even in long exposure imaging with a very good mount and guiding), and the FOV would be much better: You can see that M13 is much better framed. Now the FOV is just shy of one degree x half a degree, which is not far from the "perfect" EAA field of view of about 1-2 degrees. Alternatively, if you can get one, the x0.33 reducer by Meade would be an even better option. Just a closing thought - the 385 is a rather good camera that will serve you as a planetary camera as well with that scope, so a good combo provided you can get it reduced to a factor of x0.3-0.4 for EAA.
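A minimal sketch of that reducer-spacing formula (the ~1250mm native focal length is my assumption, inferred from the 0.62"/px native sampling and the 500mm figure quoted above):

```python
# Minimal sketch: spacing for a simple x0.5 GSO-style reducer (FL ~102mm),
# using reduction = 1 - distance / reducer_focal_length.
def distance_for_reduction(reduction, reducer_fl_mm=102.0):
    return reducer_fl_mm * (1.0 - reduction)

for r in (0.5, 0.4, 0.33):
    print(f"x{r} reduction -> place reducer ~{distance_for_reduction(r):.1f}mm from the sensor")

# Resulting focal length and sampling, assuming ~1250mm native FL and 0.62 arcsec/px:
native_fl_mm, native_sampling = 1250.0, 0.62
print(native_fl_mm * 0.4, native_sampling / 0.4)   # ~500mm and ~1.55 arcsec/px
```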