Everything posted by vlaiv

  1. You are better off doing it that way if you already bin your subs in hardware. In principle you don't need to do the same binning on your flats, but you should then bin them afterwards in software - see the sketch below. Most calibration software will complain if you give it subs of mismatched size.
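     A minimal sketch of that software step, assuming the data is loaded with astropy into numpy arrays (file names are placeholders):

         import numpy as np
         from astropy.io import fits

         def bin2x2(img):
             """Sum each 2x2 block of pixels into one (additive software binning)."""
             h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # drop an odd row/column
             return img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

         flat = fits.getdata("master_flat.fits").astype(np.float32)
         fits.writeto("master_flat_bin2.fits", bin2x2(flat), overwrite=True)

     (Sum vs average binning makes no difference for flats, since they get normalized during calibration anyway.)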
  2. Oh great, I did not know these existed. Btw, do you know if they are compatible with Mak102?
  3. The difference would be in the details. The ED80 is often recommended as a starter scope for AP because it is mostly hassle free - no need for collimation, you screw everything together, put it on the mount and you are ready to go.

     Newtonians are a bit cheaper, but need a coma corrector (most people use a focal reducer / field flattener with the ED80 as well). They also need collimation, and depending on the quality of the Newtonian you may encounter some issues - like camera tilt if the focuser is not sturdy enough and the camera is heavy; overall rigidity of the OTA can also be an issue in Newtonians not made for imaging (hence the PDS recommendation). They have a larger cross section - so they are more susceptible to wind on a less rigid mount. Back focus can also be an issue if the Newtonian is not optimized for imaging.

     Other than that - imaging Newtonians usually have longer focal lengths: the 130PDS (smallest imaging model) has 650mm focal length, while the ED80 has 600mm without reducer/flattener and about 510mm with a suitable FF/FR. But they also have larger aperture, so they gather more light. You can actually get a short focal length Newtonian by combining a 6" F/4 with an appropriate reducing coma corrector, resulting in a very fast setup, but the price goes up and so does the level of involvement (getting everything just right for it to work properly).

     Since you mentioned a wide / narrow combination - I figured you should go for the 150PDS - it has 750mm focal length. Most affordable DSLR cameras have a pixel size of about 4.5um. That will give you a sampling rate of about 1.3"/px - probably the lowest you should go with a beginner/cheap setup - and you can do mosaics for wider field shots.
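     As a quick check of that sampling-rate figure, here is the standard arc-seconds-per-pixel formula with the numbers from this post (a minimal sketch; the function name is mine):

         def arcsec_per_pixel(pixel_um, focal_mm):
             """Sampling rate in arc seconds per pixel."""
             return 206.265 * pixel_um / focal_mm

         print(arcsec_per_pixel(4.5, 750))  # 150PDS + 4.5um DSLR -> ~1.24"/px
         print(arcsec_per_pixel(4.5, 650))  # 130PDS -> ~1.43"/px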
  4. I wondered that as well for the SW102 Mak (not that I have one, but I wondered, if I got myself one, what it would take to change the visual back to a nicer one). I believe these are non-standard threads.

     Depending on how you measured the thread size (like using calipers on the thread diameter) - it could be an M45 thread - meaning 45mm diameter and 1mm or 0.75mm pitch - so it's either M45x1 or M45x0.75 - you can use calipers to check the pitch. On the other hand, it could indeed be 45.4mm for some reason, as I've found an adapter for large SW/Orion Maks at the TS website that says:

     Telescope-sided thread: 66,5mm Maksutov female thread
     Eyepiece-sided thread: SC male thread

     So that one is some crazy 66.5mm diameter. I would not be surprised if you indeed have some weird thread like 45.4mm (but pitch is measured from the middle of the grooves, not the outer diameter - at least I think so, not sure). I don't know if there are adapters to other threads for that.
  5. Well, cost is a relative thing and depends on how much you can afford / budget for gear, but in my view there is no cheap AP kit out there. Mind you, when I say AP kit, I assume taking images with a telescope. You can get a fairly cheap setup for wide field imaging with a camera and regular lens (which again is not quite cheap if you factor in the price of a camera like a DSLR, unless you go second hand). Also, my price reference for cheapness is visual gear. You can get a fairly decent visual setup for about $300, but you will likely need to spend at least x4-x5 that to start thinking about AP.

     Having said all that, let's figure out what could be a "cheap" setup to get you started. You mentioned wide/narrow field, so let's discuss that first. Most people use different kit for wide field shots and narrow field (close up) shots, but you don't need to do it that way. Maybe the best approach is to start by thinking about the narrowest field you can achieve that will work well (this requires you to think about almost all components of imaging gear - camera/sensor/pixel size, telescope used and of course the mount). Once you figure out the max "zoom" you can use successfully, a wider field can be accomplished by doing mosaics - shooting multiple panels and then "stitching" them together to form a larger FOV. This is the most cost effective way to get both narrow and wide field. In theory there is the option to go the other way around - using a wide field scope and drizzling to get higher resolution, but I'm afraid that simply does not work for amateur setups (although people continue to use it, but that is a deep discussion). In my view it's better to go narrow and do mosaics (a bit more involved, but it gets you results).

     First things first - the mount, that is the most important component. You want the best mount you can afford. In the budget department you have a couple of options: EQ3, EQ35, EQ5. While AP is possible with these mounts, if you can afford a better mount - please do. With these lightweight mounts you'll spend much more time getting the mount to work to your liking (some people like to do that - get the max out of their gear). The true "starter" is the HEQ5.

     Next is the choice of camera and a good matching scope (depending on mount). For all "lightweight" mounts (EQ3, EQ35, EQ5 and alike) you want to limit yourself to a scope like the 130PDS. If you can afford the HEQ5, then a good scope would be the 150PDS. Here things start to branch out and there are many possibilities leading to different budgets. Maybe if you start by telling us your budget, we will be able to suggest the best option for it?
  6. You are very welcome. Don't worry if you don't have time right now - I'm subscribed to this thread and you can also "mention" me when you have the time to upload files so we can figure out what is going on here.
  7. I've examined the 32bit fits and indeed the background is a bit lower - at 39 ADU (actually about 38.9). I guess this is because of the added precision. I'm worried about the number of zeros though - it indicates something wrong in the calibration workflow.

     You can get an exactly 0 pixel value under some circumstances - a hot pixel being one example. Let's say that in a certain exposure a pixel saturates because it is hot - it will have max_value (65535 or whatever the max value for the ADC is). All dark subs will have this value - max_value, and their average is also max_value. The light sub will have this value in that particular pixel, and when you subtract the master dark you get max_value - max_value for that pixel, which is always exactly 0.

     I don't think that is what is happening in your case. Too many pixels have 0 value, and you are also binning x2. If one pixel is hot, binning it with other pixels that are not hot will result in a lower value - so it can't produce exactly 0. It is highly unlikely that 4 adjacent pixels that are binned together are all hot (which is what it would take to produce a "hot binned pixel").

     Let's look at the histogram of your calibrated sub in detail to see if we can spot anything interesting that would explain the problem: Ah yes, it is immediately obvious that the zeros are the result of "left clipping". This could be due to calibration - all values below zero are clipped to zero for some reason (this happens for example when using 16bit unsigned values during calibration - that format can't hold negative values - or it might be an internal problem with the calibration software). It can also happen because of a wrong camera setting - if offset is set too low. I'm not sure if your camera even allows software adjustment of the offset value, or whether it is set to a proper value at the factory and can't be changed.

     Let's see if we can figure out what is going on (if you wish of course) - could you please post two more subs:
     - a single unprocessed dark sub directly from the camera
     - the master dark, again as 32bit fits.
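     For anyone wanting to run the same diagnosis on their own data, here is a minimal sketch (hypothetical file name) - count exact zeros and look at the low end of the histogram:

         import numpy as np
         from astropy.io import fits

         data = fits.getdata("calibrated_sub.fits").astype(np.float32)
         print("exact zeros:", np.count_nonzero(data == 0), "of", data.size, "pixels")

         # A healthy calibrated sub scatters around the background level on both
         # sides; a spike at exactly 0 with nothing below it means negative values
         # were clipped somewhere in the calibration chain.
         lo, hi = np.percentile(data, [0.1, 99.9])
         hist, edges = np.histogram(data, bins=100, range=(lo, hi))
         print("lowest populated bin starts at:", edges[np.argmax(hist > 0)])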
  8. It is in fact right. Not that you should try such an exposure - but in 6.8 hours, LP noise is going to be 5 times as large as read noise. Like I already mentioned - narrow band imaging is the most demanding in terms of exposure length, because narrow band filters do an excellent job of removing LP and its associated noise. For best results in narrow band - do the longest exposure you are comfortable with. You've mentioned that some people do 30 minute exposures - this is the case where such long exposures pay off: dark skies, narrow band and a large read noise camera.

     I checked your calibrated sub, and I have a suggestion for you. For some reason the frame is 16bit. If at all possible, do all your processing in 32bit. It is true that a single sub will be 16bit prior to calibration, but as soon as you start the calibration process you need more precision than 16bit provides. Using fewer bits per pixel just introduces a (bad kind of) noise to your image. Do this when making masters as well - don't make your master dark, bias and flat files 16bit - process your data in 32bit format.

     I've checked the tiff (another tip - use fits format; I know many people prefer tiff, but fits is meant as a format for recording / transferring and processing astronomical images, so most astro software will support it), and it has a background level of about 39-40 ADU, so yes, you will benefit from long exposure times for NB.

     There are a couple of things I find odd with the tiff sub you attached - namely the 0 values of some pixels. It is very odd to get exactly 0. I would expect no zeros, or at most just a few, but this sub contains a lot of them. It means that either the background LP level is very low (and in principle it is) - but then I would also expect at least some negative values rather than a hard floor at zero - or it might be an "artifact" of the 16bit tiff format - maybe the image is "shifted", so the measured background value is skewed.

     If I try to do a "sanity" check, here is what I get: I took a patch of "empty sky" and measured the signal in electrons - it is around 18e. I also measured the noise in this part of the image and it is ~18.5e. Now, to check if this "adds up", we need to see how much noise 18e of LP signal + 8.7e of read noise should produce (plus a bit more due to calibration). It will be square_root(18 + 8.7^2) = ~9.68e. There is much more noise in the background of the image than there should be if the background level is right. Could you do a calibrated frame in 32bit (with 32bit masters) and post it as fits so I can run the same measurement again?
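     That sanity check is just noise addition in quadrature; as a few lines of arithmetic (numbers as quoted above):

         import math

         lp_signal = 18.0    # e-, LP signal measured in an "empty sky" patch
         read_noise = 8.7    # e-, from the sensor spec
         # shot noise of LP (sqrt of signal) added in quadrature with read noise:
         expected = math.sqrt(lp_signal + read_noise**2)
         print(expected)     # ~9.68 e-; the measured ~18.5 e- is nearly double that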
  9. Ok, let's go over the specific measurements. I mentioned binned vs not binned in case you use the same telescope for regular (not binned) shooting and want to calculate exposure time for the binned version. In this case, you are already shooting binned with the C14 and regular with the 4" APO, and I guess you intend to continue doing so, therefore there is no need to multiply by 4 and convert between non-binned and binned background levels.

     For the 4" APO, you say you get around 3200 ADU for 600 seconds. We calculated that you want to get to about 4100 ADU background level. This means that a good exposure value for the 4" APO would be 600s * 4100 / 3200 = ~768 seconds, or about 12 minutes and 48 seconds. This is not an "optimum" solution, but it does tell us that you will get a slightly better result if you shoot 12 minute subs instead of 10 minute subs. For the C14 we have 300s with an ADU of 1400. From this, a good exposure value will be 300s * 4100 / 1400 = ~878 seconds, or about 14 and a half minutes. I don't know why you shot an NB image of M13 - it is a globular cluster and there is no significant signal in emission lines - but we can do the same: 120s * 4100 / 800 = 615s, or about 10 minutes.

     I'll explain a couple of things in more detail. First, when I wrote above about swamping read noise, I used the rather arbitrary figure of x5 for LP noise vs read noise. I've seen this figure used in calculations, and it makes sense to use because of the following: suppose you have 1 unit of read noise and x5 larger LP noise, in this case 5 units. Total noise will be (according to noise addition, and not including other noise sources): square_root(1^2 + 5^2) = square_root(26) = ~5.1. This shows how thoroughly LP noise swamps read noise - there is almost no difference between the noise level of LP alone and LP+read noise, 5 units vs 5.1 units. However, like I said, the factor of x5 is arbitrary - which means the exposures calculated above are not "optimal" or anything like that, just a good guideline. If the calculation gives you 12 minutes and 48 seconds - you can use 12 minutes or 13 minutes, whatever suits you (pick a value you will use across scopes, so you don't have to build a large dark library with many different exposures).

     The second thing I wanted to explain is the above calculation of sub duration in better terms, for easier understanding. It is just a simple ratio when you think about it - let's again use the APO example. You measured the background level of a 600 second exposure to be ~3200 ADU. This means the background signal is "building up" at 3200/600 ADU per second, or ~5.3333 ADU/s. Our coefficient of x5 (LP noise needs to be about x5 the magnitude of read noise) translates to ~4100 ADU like this: just to reiterate, the read noise of your camera is ~8.7e, five times that is 43.5e, and we need the LP noise level to be about that number. LP noise level is equal to the square root of LP signal, so we need the LP signal to be the square of ~43.5, which is ~1892.25e (I rounded it up to 2000 above). The last thing we need to do is convert electrons to ADU, and that is what the gain value is for. Your camera's gain is 0.485 e- / ADU, so to get ADU we divide electrons by the gain: ~2000 / 0.485 = ~4123, which I again rounded to 4100 (you don't need all the rounding, but it was easier for me to write round numbers instead of typing precise numbers from the calculator).

     Back to our exposure time. We have LP building up at ~5.3333 ADU/s. How much time does it take to build up to 4100 ADU? Well, that is easy: 4100 / 5.3333 = ~768s. Again, you don't need to be very precise and do 12 minutes and 48 seconds - either 12 or 13 minutes will do. Makes sense?
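     The whole recipe fits in a few lines; here is a minimal sketch with the numbers from this thread (the function name is mine):

         READ_NOISE_E = 8.7    # e-, camera read noise
         GAIN = 0.485          # e-/ADU
         SWAMP = 5             # LP noise should be ~5x read noise

         target_e = (SWAMP * READ_NOISE_E) ** 2   # = 1892.25 e-, rounded up to ~2000 above
         target_adu = 2000 / GAIN                 # ~4124 ADU, quoted as ~4100

         def good_exposure(measured_adu, exposure_s, target_adu=4100):
             """Scale a test exposure so background builds up to the target ADU."""
             return target_adu / (measured_adu / exposure_s)

         print(good_exposure(3200, 600))   # 4" APO: ~768 s (~12 min 48 s)
         print(good_exposure(1400, 300))   # C14: ~878 s
         print(good_exposure(800, 120))    # NB M13: ~615 s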
  10. I don't know what software you currently use for processing, but most astro software out there offers this functionality. You simply need to select a piece of background, trying to avoid stars and background nebulosity / galaxies, and do statistics on it - or more precisely, take the average value of the selection. That is all it takes. You take that value from your calibrated sub, and according to this: http://www.astrosurf.com/buil/qsi/comparison.htm your camera has 0.485 e- / ADU gain and 8.7e read noise. This means you should expose until you get about ~2000 electrons of background LP signal ( (8.7*5)^2 ), or converted to ADU (the values you read off the sub) - ~4100. If your sub has a lower value than this - increase exposure length; if it has a higher background value - you can lower exposure length.

     In fact you can do this with your old subs - find a calibrated sub from before you binned x2, measure the background value and multiply it by 4 (when you bin, you get a x4 higher background because the values of 4 adjacent pixels add to form a single value). If this value is less than ~4100, increase exposure length; if higher - reduce it. You can even calculate the needed exposure length with a bit of math. If for example you are using 10 minute subs and you find the measured value is 1200 ADU (on one of your past, not binned subs), meaning it will be about 4800 when binned, then it follows that the proper exposure would be 10 minutes * 4100/4800 = about 8 and a half minutes (~8.54 minutes). Hope this makes sense.

     Actually no - fewer longer subs will always have higher SNR than more shorter subs, for the same total integration time. By overcoming read noise as above, we just make sure that the difference is too small to be of any practical significance. At some point the difference becomes unimportant, but fewer longer subs will still have better SNR than more shorter subs, only the difference will be something like less than 1%. If you look at what impacts sub duration, it becomes obvious why some of the best imagers use longer exposures:
     - Sub duration depends on read noise - the higher the read noise, the longer the sub needs to be
     - Sub duration depends on LP levels - the more LP you have, the shorter the exposures you can get away with
     Most of the best imagers use CCD sensors (they have been in this long enough to have invested in cameras before CMOS became available / popular), and CCDs tend to have higher read noise than CMOS sensors - often as much as 10e or so. This means CCDs benefit from longer exposures. Also, most of the best imagers shoot from dark skies (at least they try to), which means LP levels are low - again promoting longer exposures (you need to wait longer for the LP signal to build up enough for its noise to become significantly larger than read noise). Add to that the fact that the long exposure times you've seen were often NB exposures (you just did not pay attention to type of image vs sub length) - narrow band additionally cuts down LP levels, thus requiring longer exposures. Add all of the above together - CCD with high read noise, dark skies and a 3nm Ha filter - and you can see how it leads to an optimum sub length of more than an hour.
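     If your software doesn't make the patch statistics easy, the measurement itself amounts to a few lines (file name and patch coordinates are hypothetical - pick an empty piece of sky by eye):

         import numpy as np
         from astropy.io import fits

         data = fits.getdata("calibrated_sub.fits").astype(np.float32)
         patch = data[100:160, 200:260]   # a star-free background region
         print(f"background: {patch.mean():.1f} ADU (target ~4100 ADU)")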
  11. I'm a bit skeptical of that item. No particular reason, but just the way it operates ...
  12. Don't worry about exposure time in relation to binning or saturation of stars. You should base your exposure length on a few other factors:
     - How big read noise is compared to other noise sources - mainly LP levels. As soon as any other noise source becomes dominant, there is not much point in going longer. You can measure background levels in your exposure (best after calibration, per filter) and then determine if the exposure is sufficiently long. You want your LP noise level to be at least 5 times larger than read noise (so the square root of the background signal should be about 5 times as large as read noise - do pay attention to convert from ADU to electrons).
     - In contrast to the above, you want more exposures, so each should be as short as possible. This is a consequence of several things. Most algorithms work better when they have a lot of data (statistical analysis fares better), so you want a larger number of subs in your stack. Shorter exposures also mean less imaging time lost if something bad happens - like an airplane passing, a wind gust, a cable snag, an earthquake (yes, this is a real thing; at that focal length you will record even small tremors).
     So balance the two above - go long enough to defeat read noise, but don't overdo it, as you will benefit from a larger number of subs.

     As for saturation, this is easily dealt with by using just a couple of very short subs at the end of the session (or per filter) that you use only to "fill in" star cores - see the sketch below. In star cores, or any part of the image that saturates, the signal is already very strong (otherwise it would not saturate the sensor). This means SNR is already high and you don't need many subs stacked to get a good result. Just a few 10-15s exposures will deal with this. After you stack - select the brightest stars and "paste" the same region from the scaled short stack (you need to multiply the linear values by the ratio of exposure lengths). In principle, luminance does not need "fill in" subs, as star cores end up saturated after stretching anyway. You do need the color fill in though - you need the exact RGB ratio to preserve star color.
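     A minimal sketch of that fill-in step, assuming the two stacks are aligned numpy arrays in linear units and that you choose the saturation threshold yourself:

         import numpy as np

         def fill_cores(long_stack, short_stack, exp_long, exp_short, sat_level):
             """Replace saturated pixels of the long stack with scaled short-stack data."""
             scaled = short_stack * (exp_long / exp_short)  # match linear signal levels
             mask = long_stack >= sat_level                 # saturated (or nearly so) pixels
             out = long_stack.copy()
             out[mask] = scaled[mask]
             return out

     In practice you would grow the mask a little so the transition around each star core stays smooth.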
  13. Ah yes, Blue seems to have very low QE at 656nm (the Ha wavelength) according to this graph: Green is not much better in comparison to Red either, but it is at least three times better than Blue.
  14. Same thing - diffraction spikes. We see them readily on bright stars, and there they look, well, like spikes. Now imagine that the planet consists of a bunch of stars (it is not a single point source but rather a collection of "points" that make it up). Imagine the diffraction spikes from each of those points overlapping - they will form exactly the pattern you are seeing. Here is a little diagram to explain it a bit better: (sorry for the poor star shapes, my drawing skills are in need of collimation )
  15. To add to the excellent answer above, you want to pay attention to the following when choosing a planetary camera for your scope:
     - the required focal ratio for optimal detail depends on pixel size, so take that into account together with any barlows and the F/ratio of your scope (smaller pixels require a lower F/ratio) - see the sketch below
     - the best sensor will have high quantum efficiency, fast readout speeds (so you can achieve high fps and capture as many frames as you can - in practice this means a USB3.0 connection) and low read noise.
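     To put a number on the pixel-size rule (my own gloss, using the common critical-sampling criterion rather than anything stated in the post): Nyquist sampling of the diffraction cutoff gives F/ratio = 2 * pixel size / wavelength, i.e. about 4x the pixel size in microns for ~500nm light:

         def optimal_f_ratio(pixel_um, wavelength_um=0.5):
             """F/ratio at which a given pixel size critically samples the diffraction limit."""
             return 2 * pixel_um / wavelength_um

         print(optimal_f_ratio(2.9))    # ~11.6 for 2.9um pixels
         print(optimal_f_ratio(4.63))   # ~18.5 for 4.63um pixels - larger pixels need a higher F/ratio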
  16. Different things. A lens, or a telescope used at prime focus (in this case also acting as a lens), is a projection device. It transforms angle into linear distance: a star at a certain angle to the optical axis will be focused at a certain distance in the focal plane.

     Telescope + eyepiece is more like an angle amplifying device. Light coming out of the eyepiece is not converging - it is still parallel rays, but at a larger angle. So a star at a certain angle to the optical axis ends up as parallel rays exiting the eyepiece at a larger angle (the ratio of the angles being equal to the magnification you are using).

     The human eye is also a projection device, much like a lens or a telescope at prime focus - it converges parallel beams (for an object at infinity) to a spot on the retina, and photosensitive cells pick this up and form a signal. The actual image is a projection on your retina, in the same way a lens projects a small image onto the camera sensor surface. If we extend this analogy - telescope + eyepiece is more like a teleconverter than a camera lens - it changes the magnification/focal length of the human eyeball.

     Because of this, different maths applies. With a lens, it is the focal length that is proportional to the size of the projection, while with telescope + eyepiece (being a compound lens akin to a teleconverter), magnification is determined by the ratio of the focal lengths of telescope and eyepiece.
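     The two calculations side by side (a small sketch; the example values are mine):

         scope_fl_mm = 600.0
         eyepiece_fl_mm = 10.0
         pixel_um = 4.5

         magnification = scope_fl_mm / eyepiece_fl_mm     # 60x - ratio of focal lengths
         image_scale = 206.265 * pixel_um / scope_fl_mm   # ~1.55"/px projected at prime focus
         print(magnification, image_scale)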
  17. Depends on the focal reducer and star diagonal. Photographic accessories often use a T2 thread for attachment, sometimes M48 (2" filter thread), but often other, larger threads. A star diagonal is either 2" or 1.25"; there are models with a T2 attachment as well.

     There is one more important consideration - focal reducers that also act as field flatteners work at a specific distance to the sensor (or, in the visual case, to the focal plane of the telescope). This means you need to place it at the exact distance from the eyepiece for it to operate to spec. The problem is that eyepieces are not parfocal - some need a bit more in-focus travel, some a bit more out-focus travel. Ideally, an eyepiece should have its focal point at the "shoulder", but not all eyepieces do. Once you know the working distance and thread type of the field flattener / focal reducer, you need to make sure it is mounted at the proper distance from the eyepiece to work as intended. If you miss the distance, you will get various aberrations (astigmatism for example). This is why visual coma correctors often have a tuning mechanism that lets you alter the distance to the eyepiece, so you can find the proper distance for each eyepiece you use (CCs are also sensitive to proper distance to the focal plane).

     For that particular reducer, the thread size is M48, so it should screw into a 2" diagonal (one that has a 2"/M48 filter thread). However, the optimal working distance is only 55mm - enough for AP applications, but not nearly enough for visual, as most 2" diagonals have 100+mm of optical path. The only way I see it being used is in a straight-through configuration (not sure anyone would want to use a long refractor straight through), with a suitable 2" extender and a 2" eyepiece.
  18. It would be the best of both worlds. It won't have much of an aperture disadvantage over the ST120, and only a bit smaller field (not sure the focal reducer will be useful visually - it is intended for photographic applications), but it will show planets better than either of the original options. An 8" dob will outperform it on all counts (except perhaps the slightly wider field from the ED100).
  19. Yes, you have it right - split each calibrated sub into 4 smaller images, each containing one "color". You will get one red sub, two green subs and one blue sub - see the sketch below. This should be done without any debayering. It is similar to super pixel debayering, where you get R, G and B for one pixel out of each group of 2x2 pixels (the only difference from what I've proposed is that there the two green values are summed into a single value instead of kept as two separate subs).

     After that you should stack all of these "colors" (or sub frames) together into a single stack (not into separate "color" stacks). The only thing you should be careful about is to use a stacking method capable of "weighting" each sub based on its SNR. This is because each of these sub images from a single light has a different SNR - the sensor has a different sensitivity at the Ha wavelength in each color, so the signal will be of different strength and hence the SNR will differ. For optimum results some subs should be included "more" (those with better SNR) and some "less" (those from green and blue pixels, because they have lower SNR).

     With an Ha filter you are capturing the same signal in each color - so you will always get a monochromatic image (either in shades of gray or in shades of some other color) and there is really no point in trying to assign different "colors". You can't get red color in some parts of the image and blue in others - you will always end up with purple in both places (or some other color that is the same ratio of R, G and B, differing only in intensity). Hope this makes sense.
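     A minimal numpy sketch of that split, assuming an RGGB layout (check your camera's CFA pattern - the offsets below are an assumption):

         import numpy as np

         def split_cfa(raw):
             """Slice an RGGB mosaic into four half-size planes, no interpolation."""
             r  = raw[0::2, 0::2]   # red pixels
             g1 = raw[0::2, 1::2]   # first green
             g2 = raw[1::2, 0::2]   # second green
             b  = raw[1::2, 1::2]   # blue pixels
             return r, g1, g2, b

     Each returned plane then goes into the stacker as an ordinary mono sub, weighted by its own SNR.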
  20. If you used only an Ha filter on your camera, then you would benefit from a different stacking workflow. Since this camera (like other OSC cameras) is sensitive across the wavelength range in each color, each color will capture the Ha wavelength but at different quantum efficiency (blue and green at only about 5% or so). In order to get the smoothest possible image you need to separate the colors, and you need a stacking algorithm capable of handling subs of quite different SNR in a single stack. Don't expect to get a color image out of it, but you should be able to get a fairly good mono image. I did a quick stretch of the red channel only and it does show quite a bit of detail:
  21. I was going to make a similar joke to Rodd's observation about uniform dots being stars, but yes indeed, there are very tiny white-ish artifacts (resembling dots, but in some cases more like arcs). I'll point them out - I believe they are artifacts of the noise reduction process, places where both noise reduction and sharpening were applied, or something similar.
  22. Yes indeed - the bubble looks marvelous now, very sharp, and so do the surrounding gas clouds. The outer nebulosity is indeed faint, but I guess it needs to be - it is probably very faint in comparison to the bubble and surrounding clouds. To get that part smooth, much more exposure would be needed, but I don't think that would be feasible, and it does not detract from the target much.
  23. Depends on the intended targets. The ST120 has rather good optical quality for its intended purpose - viewing DSOs at relatively low magnification. It will show more, and go deeper, than the ED80, as it gathers x2.25 more light. At low magnifications, problems with CA will not show - you will see purple halos only on the brightest stars. Other than that, deep sky objects will not look worse in the ST120; on the contrary, you will be able to see deeper. On planets, the story is much different. The ST120 will give much poorer results than the ED80. Although the larger aperture resolves more, CA blur will simply ruin this, and the ED80 will show more detail and give much more pleasing views.
  24. I'm running the risk of over explaining things here (too much info), or telling you things you already know, but here we go:

     At 0.66"/px you are over sampling - this leads to poorer SNR and less sharp stars when the image is viewed at 1:1 (one image pixel to one screen pixel, or 100% zoom). Depending on how good your guiding is, given your aperture and usual seeing conditions, you want to sample somewhere between 1.3"/px and 2"/px (depends on the particular night). With CMOS sensors there is a simple way to do this - software binning. You can either bin your subs x2 or x3 prior to stacking (or even bin your stack while still linear, but I think it is slightly better to do it on subs). You can also downsample your image. Neither binning nor downsampling will have an impact on the image posted on the forum (it won't change the FOV) - the browser already downsamples it because the image is large - unless you bin to the extent that the image becomes smaller than the browser shows it.

     There is a difference between binning and downsampling however. Both do the same thing, but to a different extent. Both produce a coarser sampling rate, and both increase SNR. The SNR increase is a bit different, and depends on the downsampling method used. Binning always provides a predictable increase in SNR - by the factor of how many pixels you bin per axis: x2 binning will increase SNR by 2, x3 by 3, etc ... The SNR increase from downsampling depends on the method used and is always less than this maximum that binning provides. Downsampling also introduces cross pixel correlation that binning does not (not sure you should care about this, but I point it out as a difference). There are more differences between the two, like the impact of pixel level blur, etc ...

     It is worth doing in any case - when the image is viewed 1:1 it will look sharper, although smaller in scale (not so zoomed in), but the main benefit will be the SNR increase. The SNR increase will be visible in the screen size image (like here on the forum) as well, because you will be able to stretch more and show "deeper". I would recommend that you bin your images x2 regularly, and x3 on nights of poor seeing. You can try this with existing data (not sure what software you are using for stacking and processing) - software bin x2 on your calibrated subs and then restack. Process the result to see if you can go deeper. Also examine how it looks viewed at screen size (like posted here on the forum), but also at 1:1.

     Btw, there is a really simple way to observe how downsampling increases SNR - just look at your image scaled down for display on the forum, and also at 1:1; here are screen shots of both: See how the background looks more noisy in the bottom image? They are the exact same image, but the top one has been downsampled and the background is much smoother because of it.
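     The predictable SNR gain from binning is easy to demonstrate on synthetic data (a toy sketch - a flat signal of 100 with noise of 10):

         import numpy as np

         rng = np.random.default_rng(0)
         img = 100.0 + rng.normal(0, 10, size=(1000, 1000))      # SNR = 10

         binned = img.reshape(500, 2, 500, 2).mean(axis=(1, 3))  # 2x2 average binning
         print(100.0 / img.std())      # ~10
         print(100.0 / binned.std())   # ~20 - x2 SNR improvement from x2 binning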
  25. Btw, here is a random screen shot from a google image search on this galaxy: If you compare the size of the galaxy and the size of the "star bloat" in this image and in yours - you will see that you are not far off.