
Everything posted by vlaiv

  1. Collimating an RC is not that hard. The first thing is to identify what needs collimating. I can help you with some basics, but there are a few videos on YouTube that will explain it a bit better.

First, check whether the star elongation is due to wrong collimation or something else is at play. Inspect your subs, rotate the camera, then inspect the subs again. If the star elongation points in the same direction across the field in every sub, and it rotates when you rotate the camera, then you might have a mount / guiding problem. A 14" RC has a lot of focal length, so you might be imaging at very high resolution - any sort of guiding / PA error will show, and show a lot. If that is your case, check which axis aligns with the star elongation: if it is DEC, you need better polar alignment; if it is RA, you might have a guiding problem (meaning your periodic error is not guided out completely).

If you have round stars in one corner and elongation in the others, it can be collimation related. To collimate, do the following:

Step 1: Put a star in the center of the field and defocus it. Make sure the doughnut is concentric - you achieve this by collimating the secondary.

Step 2: Focus the star in the center of the field, then use some sort of aid to check how far out of focus the corners are (don't change focus - just slew the scope, take a frame and measure star FWHM, or put a Bahtinov mask on and look at the defocus). At this point you can figure out whether you need to collimate the primary or the focuser, depending on how the defocus is distributed among the corner stars. This is a bit tricky to get right, but software like CCD Inspector can help.

If there is a linear gradient in defocus (for example, the two top corners have the same amount of defocus and the two bottom corners have the same amount, but different from the top corners) - you need to fix the tilt, and that is done by collimating the focuser. If, on the other hand, you have a "bowl"-like distribution that is not centered on the frame center, you need to collimate the primary. After collimating the primary, go back to step 1 and repeat. After collimating the focuser / fixing tilt, you don't need to repeat - so it is best to leave tilt collimation for last, and only if you do indeed have tilt. Here is a good guide that will help you out: https://deepspaceplace.com/gso8rccollimate.php
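The tilt-versus-bowl diagnosis can be sketched numerically. This is a toy illustration, not anything from the post itself: the corner FWHM values below are made-up placeholders, and the 0.5-pixel threshold is arbitrary - you would measure real values on your own subs.

```python
# Toy classification of corner defocus: focuser tilt vs. miscollimated primary.
# FWHM values (in pixels) are made-up placeholders; measure them on real subs.
fwhm = {"center": 2.1,
        "top_left": 3.4, "top_right": 3.5,
        "bottom_left": 2.3, "bottom_right": 2.4}

# Tilt shows up as a linear gradient: one pair of corners shares one
# defocus value, the opposite pair shares another.
top = (fwhm["top_left"] + fwhm["top_right"]) / 2
bottom = (fwhm["bottom_left"] + fwhm["bottom_right"]) / 2
tilt_amount = abs(top - bottom)

# An off-center "bowl" makes all corners worse than the center.
bowl_amount = min(fwhm["top_left"], fwhm["top_right"],
                  fwhm["bottom_left"], fwhm["bottom_right"]) - fwhm["center"]

THRESHOLD = 0.5  # arbitrary, in pixels
if tilt_amount > THRESHOLD:
    print("linear gradient: collimate the focuser (tilt)")
elif bowl_amount > THRESHOLD:
    print("off-center bowl: collimate the primary")
else:
    print("corners look consistent with the center")
```

With the placeholder numbers above (top corners clearly worse than bottom), the sketch reports a tilt.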
  2. And there I was, believing we had a convert, given your recent experience with the 6" F/6 Newtonian.
  3. Interestingly enough, most of the people in this thread standing up in defense of the refracting telescope design (me included) did little to counter the actual arguments presented in the article. Most of the things listed in the article are in fact true. I think it would be in the best interest of the OP and the general community if participants in this discussion either stated their exact disagreement with a particular point made in the article, or offered an alternative view of why refractors are indeed good (a particular use case, or even personal preference). I would not focus my attention on the author either - everyone has a right to voice their opinion, and their particular style might not suit us well, but we should be able to distinguish their preferences / views from actual claims (which we can subject to counter-argument).
  4. I agree with most of the article, except the title - that one is 100% wrong. Does that mean I don't own and enjoy fracs? Noo ... I have two of them, and I'm looking at a third (I'll still be holding on to two; the SW ST102 will have to give way ...). Do I have a dob? Yes, and an RC too. While most of the things listed in the article are true, that does not mean refracting telescopes are no good - even achromatic refractors. I don't think anyone would be displeased with a 4" F/10 achromat on an AZ4, both for deep sky and the solar system. Well, if you enjoy observing and you are not after more, better, gooder ...
  5. Honestly, I don't quite understand what you said. But I would like to point something out. 2.4 pixels per Airy disk radius is the theoretical optimum sampling value, based on ideal seeing and the ability to do frequency restoration for frequencies attenuated to 0.01 of their original value. That requires good SNR (like 50-100) as well as good processing tools. In a real-life scenario, seeing causes additional attenuation of frequencies (though not a cut-off, as the Airy pattern does), combined with the Airy pattern attenuation and cut-off. So while ideal sampling allows one to capture all potential information, it is not guaranteed that all of it will actually be captured.

On the other hand, I just had a discussion with Avani in this thread: where I performed a simple experiment on his wonderful image of Jupiter, taken at F/22 with an ASI290, to show that the same amount of detail could be captured at F/11 with this camera. He confirmed that by taking another image at F/11, but said that for his workflow and processing he prefers F/22, as it gives him material that is easier to work with.

So while the theoretical value is correct, sometimes people will benefit from lower sampling - if seeing is poor - and sometimes from higher sampling, simply because their post-processing tools handle such data better. The bottom line is that one should not try to achieve the theoretical sampling value at all costs. It is still a good guideline, but everyone should experiment a bit (if their gear allows for it) to find what works as the best sampling resolution for their conditions - seeing, but also tools and processing workflow.
  6. The Dawes limit and Rayleigh criterion are based on two point sources separated so that the second is located at the first minimum of the Airy pattern. They are usable for visual observation with relatively closely matched source intensities (like double stars). Applying twice-per-separation sampling to that criterion is really not what the Nyquist theorem is about. There are two problems with it:

1. It does not take into account that we are discussing imaging - and we have certain algorithms at our disposal to restore information in a blurred image.

2. It "operates" in the spatial domain, while the Nyquist theorem is stated in the frequency domain. Point sources transform to a rather different frequency representation even without being diffracted into an Airy disk (one can think of them as delta functions).

The proper way to address this problem is to look at what the Airy disk pattern does in the frequency domain (the MTF) and choose the optimum sampling based on that.
  7. In some of my "explorations" of this subject, I came up with a slightly different figure than the one usually assumed. Instead of using x3 in the given formula, the value that should be used according to my research is 2.4. So for a camera with 2.9um pixels, the optimum resolution for green light would be F/11.2. Here is the original thread for reference:
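The F/11.2 figure follows from the 2.4-pixels-per-Airy-radius rule: the Airy disk radius at the focal plane is 1.22 * wavelength * F, so the critical F-ratio is F = 2.4 * pixel / (1.22 * wavelength). A minimal sketch, assuming ~0.51um for green light:

```python
# Critical F-ratio for ~2.4 pixels per Airy disk radius.
# Airy radius (linear, at focal plane) = 1.22 * wavelength * F,
# so setting pixel = Airy_radius / 2.4 gives the formula below.
def optimal_f_ratio(pixel_um, wavelength_um=0.51):
    """F-ratio at which one Airy disk radius spans ~2.4 pixels."""
    return 2.4 * pixel_um / (1.22 * wavelength_um)

print(round(optimal_f_ratio(2.9), 1))  # 2.9 um pixels, green light -> 11.2
```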
  8. Yes, unity gain included (it also avoids quantization noise).
  9. Quantization noise is rather simple to understand, but difficult to model, as it does not have any sort of "normal" distribution. The sensor records an integer number of electrons per pixel (the quantum nature of light and particles). The file format used to record the image also stores integers. If you have unity gain - meaning an e/ADU of 1 - the number of electrons gets stored correctly. If, on the other hand, you choose a non-integer conversion factor, you introduce noise that has nothing to do with regular noise sources.

Here is an example: you record 2e. Your conversion factor is set to 1.3. The value written to the file should be 2 x 1.3 = 2.6, but the file supports only integer values (it is not really up to the file, but to the ADC on the sensor, which produces 12-bit integer values with this camera), so it records 3 ADU (the closest integer to 2.6; in reality it might round down instead of using "normal" rounding). But given 3 ADU and a gain of 1.3, what is the actual number of electrons you captured? Is it 2.307692... (3 / 1.3), or was it 2e? So just by using a non-integer gain you introduced ~0.3e of noise on a 2e signal.

On the matter of the gain formula - it is rather simple. ZWO uses a dB scale to denote the ADU conversion factor, so Gain 135 is 13.5 dB of gain over the lowest gain setting. So how much is the lowest gain setting then? sqrt(10^1.35) = ~4.73, and here it is on the graph:

OK, so if unity is 135 (or 13.5 dB, or 1.35 bels), we want gains that are multiples of 2. Why multiples of two? In binary representation there is no loss when using powers of two to multiply / divide (similar to the decimal system, where multiplying or dividing by 10 only moves the decimal point) - so these are guaranteed quantization-noise free (for higher gains; for lower gains you still have quantization noise because nothing is written after the decimal point). The dB system is logarithmic, like magnitudes: if you multiply power / amplitude, you add in dBs. 6 dB of gain is roughly x2 (check out https://en.wikipedia.org/wiki/Decibel - there is a table of values; look under the amplitude column). Since gain with ZWO is in units of 0.1 dB, 6 dB is +60 gain. BTW, you should not worry if the gain is not strictly a multiple of 2 but only close to it: the closer you are to 2, the more the quantization rounding error is confined to higher signal values, and at higher values SNR is already high (the signal is large).
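A numeric sketch of the two points above, assuming the gain scale as described (driver gain in units of 0.1 dB of amplitude, unity gain at 135):

```python
# e/ADU conversion factor for a ZWO-style gain setting, assuming
# 0.1 dB (amplitude) units and unity gain (1 e/ADU) at setting 135.
def e_per_adu(gain_setting, unity=135):
    """Amplitude dB: factor = 10^(dB/20), and gain_setting is dB*10."""
    return 10 ** ((unity - gain_setting) / 200)

print(round(e_per_adu(0), 2))    # lowest gain setting: ~4.73 e/ADU
print(round(e_per_adu(195), 2))  # +6 dB over unity: ~0.5 e/ADU (power of 2)

# The quantization example from the text: 2e recorded with a
# non-integer conversion factor of 1.3 ADU per electron.
adu_per_e = 1.3
recorded_adu = round(2 * adu_per_e)     # 2.6 ADU rounds to 3 ADU
recovered_e = recorded_adu / adu_per_e  # 3 / 1.3 = ~2.31 e
print(round(recovered_e - 2, 2))        # ~0.31 e of quantization error
```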
  10. The software should be able to pick up stars regardless of the gain - it is SNR that matters, not whether a star is visible in the unstretched image. You also don't need to go that high in gain; unity should be OK - there is not much difference in read noise between unity and high gain. But if you want to exploit that last bit of lower read noise, use a gain setting whose e/ADU is a power of 2, to avoid quantization noise. That would be 135 + n*60, so 135, 195, 255, ...
  11. It can be done with flat darks only - no need for bias files. To be sure, do a simple experiment. Take a set of flat darks (basically darks at short exposure - the same exposure you use for your flats). Then power off everything and disconnect the camera. Power on again, use the same settings, and take another set of flat darks. Stack each group using simple average stacking to get two "masters". Subtract the second master from the first and examine the result. It should average 0 and contain only random noise - no pattern present. If this is so, you can simply use the following calibration:

master dark = avg(darks)
master flat dark = avg(flat darks)
master flat = avg(flats - master flat dark)
calibrated light = (light - master dark) / master flat

Note that lights and darks need to be taken at the same temperature and settings, and flats and flat darks at their own temperature, exposure and settings (whichever suit you to get a good flat field).

You can, on the other hand, check whether the bias is behaving properly as follows. Take two sets of bias subs and do the same as above for the flat darks (stack both, subtract, examine). If you get an average of 0 and no pattern - that is good. To be sure the bias works OK, you also need to do the following. Take one set of darks at a certain exposure (let's say 10s), and one set of darks at double that exposure (so 20s; same temperature, same settings). Prepare the first master as avg(darks from set 1) - bias, and the second master as avg(darks from set 2) - bias. Then create, using pixel math, the following image: master2 - master1*2, and examine it. It should also average 0 with no patterns visible. If you get that result, the bias functions properly and you can use it (although for standard calibration it will not be necessary, as you can use the calibration mentioned above).
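The calibration recipe above can be sketched in code. This is a toy illustration with tiny hard-coded frames (a real pipeline would use numpy arrays of camera data at 32-bit precision); the pixel values are invented so the arithmetic is easy to check.

```python
# Toy dark/flat calibration: frames are nested lists standing in for images.
def average(stack):
    """Pixel-wise mean of a list of equally sized frames."""
    n = len(stack)
    return [[sum(f[r][c] for f in stack) / n
             for c in range(len(stack[0][0]))]
            for r in range(len(stack[0]))]

def calibrate(light, master_dark, master_flat):
    """calibrated light = (light - master dark) / master flat."""
    return [[(light[r][c] - master_dark[r][c]) / master_flat[r][c]
             for c in range(len(light[0]))]
            for r in range(len(light))]

# 1x2-pixel example: true signal [100, 50], dark signal 10 everywhere,
# flat field passing 100% of light on the left pixel and 50% on the right.
master_dark = average([[[10, 10]], [[10, 10]]])
master_flat = [[1.0, 0.5]]   # already flat-dark subtracted and normalized
light = [[110, 35]]          # 100*1.0 + 10 dark, 50*0.5 + 10 dark
print(calibrate(light, master_dark, master_flat))  # -> [[100.0, 50.0]]
```

The flat division correctly restores the right-hand pixel's signal to 50 even though only 25 electrons of it reached the sensor.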
  12. I would not go with Gain 0 - too much quantization noise. Use Gain 139 preferably, or Gain 79 if you want more dynamic range. Scale your exposure length so you don't get too much saturation. Even if you saturate bright stars, there is a way around it - just shoot some short exposures to recover signal in the brightest areas.

I don't think dark current shot noise is really a problem. Look at this graph for the ASI1600: at -20C the dark current is 3.72e over a 10-minute exposure, so the associated shot noise would be ~1.9e, on the order of the read noise. That is low dark current noise in a 10-minute sub. From the graph we can see that the dark-current doubling temperature is around 7C for temperatures above 0C, and even a bit larger for lower temperatures. So even if the amp glow region is +7C warmer (and I would be surprised if there were such a temperature differential across the surface of the chip without causing mayhem), dark current in the amp glow area will still be less than x2 the dark current elsewhere, and the associated noise less than x1.41 - again in a 10-minute sub, not something to worry too much about. For the amp glow area to have x4 higher dark current - and thus x2 more noise than elsewhere - that part of the sensor would need to be almost 15C hotter than the rest! If the peak temperature in the amp glow area does not depend on the surrounding temperature, but only on the set-point cooling, then calibration should work every time - but yes, one would have to prepare a master dark for each exposure length.
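The arithmetic above can be checked in a few lines, assuming dark current roughly doubles every ~7C and shot noise goes as the square root of the accumulated dark signal:

```python
import math

# Relative dark shot noise for a sensor region delta_t_c warmer,
# assuming dark current doubles every ~7 C (from the ASI1600 graph).
def dark_noise_ratio(delta_t_c, doubling_c=7.0):
    dark_current_ratio = 2 ** (delta_t_c / doubling_c)
    return math.sqrt(dark_current_ratio)  # shot noise ~ sqrt(signal)

print(round(math.sqrt(3.72), 2))      # ~1.9 e shot noise from 3.72 e dark
print(round(dark_noise_ratio(7), 2))  # +7 C warmer  -> ~1.41x the noise
print(round(dark_noise_ratio(14), 2)) # +14 C warmer -> 2x the noise
```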
  13. I've seen it before; it looks like it is very common on Sony CMOS sensors. My ASI178MCC has amp glow that is a cross between the ASI1600's and the one in the image. The corners are affected like on the ASI1600, but in each corner there is a "source of rays". Here is what it looks like:

And to my knowledge, my ASI178MCC calibrates OK regardless of the strange amp glow.

OK, still not following (sorry about that). Well, I do understand what you mean; I just don't fully understand the consequences of it. So you are saying there are parts of the chip that are at a different temperature than the rest of the chip, right? Do you have any idea how that temperature behaves in time?

1. It is just a patch of the sensor surface where the temperature is higher than the rest of the chip but stable - let's say the sensor is mostly at -20C but the corners are at -18C.

2. When there is no exposure going on, the whole sensor is at -20C, and when an exposure starts, the sensor remains at -20C but a few select patches in the corners start rising in temperature:

2a) they rise asymptotically to a certain value quickly (in a few seconds);

2b) they rise asymptotically to a certain value slowly (thus never really reaching that value in common exposure times).
  14. Not sure how that is related to amp glow, if we assume it is due to a build-up of heat in an adjacent component being fed into the sensor via metallic circuitry. The sensor is kept at a constant temperature because all excess heat is taken away by the Peltier. If amp glow is rising faster than dark current, that means that part of the chip is at a higher temperature than the rest of the chip and is accumulating heat (if it were only at a higher but stable temperature, we would not be in trouble, since there is time linearity at each temperature). Some of that heat will indeed be taken away by the Peltier via the sensor, but if it is rising, it will also be in a dynamic with its surroundings - some of the heat will dissipate elsewhere, because not all of it is being drawn away by the Peltier. Otherwise the amp glow temperature would also be stable. And if it is dissipating heat by means other than the Peltier, the speed of dissipation will depend on the temperature gradient between that component and wherever the heat goes.
  15. Regardless of Peltier cooling, if amp glow is produced by heat build-up, this means there is excess heat that is not being removed via the Peltier. Such heat will be dissipated by the regular means of heat transfer to the surroundings (conduction and convection). The efficiency of such heat removal will depend on the environmental temperature. Even if we look at the camera as a closed system, the camera casing will be at a different temperature (it is used to remove heat from the system) depending on the actual surrounding temperature. In a cold environment the casing will remain relatively cold. In a hot environment the casing will become hotter, and hence everything inside the casing will be at a higher temperature, meaning less effective heat dissipation from the amp-glow-producing component - a bigger thermal build-up in it - stronger amp glow.

I guess the temperature sensor does not need to be accurately calibrated as long as it is "reproducible". Whether I'm shooting at -18C while it says -20C, I don't think it matters, as long as each time I tell it to go to -20C it stays at -18C.
  16. On the topic of CMOS amp glow: https://www.cloudynights.com/topic/599475-sony-imx183-mono-test-thread-asi-qhy-etc/page-3 - there is some interesting discussion and data there. One thing that intrigues me is the ray-like feature in the amp glow of CMOS sensors. That is not something I would expect from heat. It looks more like some sort of electron build-up due to electric / magnetic fields from adjacent circuitry, rather than heat. Due to fast readout, there are components in CMOS sensors that operate in the MHz range. Could this all be due to rogue EM radiation?
  17. Very good point. I have not looked into it, but it looks like I'm going to have to. There is another explanation for why the ASI1600 would not calibrate properly with darks of a different exposure. It turns out there is a sort of "floating" bias with the ASI1600. Whether this is implemented in the drivers or is a feature of the sensor itself, I have no idea, but it turns out that the bias level drops with exposure length. While measuring sensor data, I've seen that the average levels for a 1-minute exposure are lower than those of a straight bias at the same settings and temperature. I was under the impression that it stabilizes at some point, but that might not be so. The only thing I've noticed is that after 30s or so the average value starts rising - as one would expect from the build-up of dark current. If the sensor indeed changes bias level depending on exposure length (and it seems so), that would also be a reason why darks of a different exposure fail at calibration: one needs the bias removed prior to scaling dark current levels.

On the amp glow topic - what you are saying sounds reasonable, but it would have another implication: there would be a difference in amp glow between darks of the same exposure and settings taken under different ambient temperature conditions. The equilibrium level of the amp glow would depend on the efficiency of heat dissipation from the system, which is higher in a cold environment than in a hot one. I would expect amp glow to be weaker in darks taken on a cold night outside than in darks taken at room temperature. I shoot my darks indoors, and calibrate lights taken in conditions at least 10C colder with them. I've not noticed residual amp glow after calibration. I'm not saying it is not there; I just have not seen it, and it might well be because I was not paying attention.

If it turns out that amp glow does depend on ambient temperature at the same set temperature and settings, I guess one would need a couple of sets of master darks taken at different ambient temperatures (like summer darks and winter darks, and a set for spring / autumn, or something like that). I don't really know the nature of amp glow in CMOS sensors. With CCD sensors it was indeed amp glow - CCDs have a single amplifier unit, and if that electronics was too close to the sensor it would cause problems: heat travels via circuitry back toward the CCD (metal conducts electrons one way, but conducts heat equally well in the opposite direction). With CMOS sensors there is an amplifier associated with each pixel, so I doubt that what we see and call amp glow is really related to the amp unit(s) in CMOS.
  18. It is meant for both CCD and CMOS sensors - well, at least "well behaved" ones. It is a proper calibration method in the sense that it removes all but the light signal from the light frame and correctly applies flats (only to the light signal). I have an ASI1600 myself and that is what I use for calibration. On a side note, a well-behaved sensor is one that:

- has a predictable bias signal (read noise is random, but the bias signal is always the same for every pixel)
- has a dark signal dependent on temperature and exposure duration (growing linearly with exposure length at a fixed temperature); it can have uneven dark current (amp glow), but each pixel needs to behave as described
- has a linear response to light.
  19. Yes, you can skip bias frames - not only can you skip them, they are not needed with set-point cooled cameras if you use darks of matching length and temperature. Calibrate like this:

master dark = avg(darks)
master flat dark = avg(flat darks)
master flat = avg(flats - master flat dark), or master flat = avg(flats) - master flat dark
calibrated light = (light - master dark) / master flat

Use as many calibration frames as you can, use at least 32-bit precision, and that is it.
  20. Do you have an exactly matching lens adapter for the ASI1600? Not all ZWO cameras have the same T2/2"-thread-to-sensor distance, so the lens adapter should be adjustable for flange distance (which for Canon EF / EF-S is 44 mm). If the adapter is adjustable, is it lockable (with screws)? It can cause tilt if not tightened properly - well, anything can cause tilt if not tightened properly (and sometimes even if tightened properly but manufactured to the wrong specs).
  21. This looks familiar - I used to get star shapes like that, and the problem is twofold. First is the sensor / lens distance - you need to get it spot on for a flat field. What you see in the image above is mostly astigmatism. The right side of the image is in focus but has astigmatism - a cross shape - while the left has out-of-focus astigmatism - an elliptical shape - from bad sensor / lens distance. Second, you have some sort of tilt in the optical train - the sensor is tilted in relation to the focal plane: the right side is in focus, the left a bit out of focus. Ha subs still display this sort of behavior, but much less pronounced, because star shapes in the corners, when using refractive optics, depend on the wavelength of light. So in this particular configuration Ha (the red part of the spectrum) is less affected, green would be a bit more, and blue still more than that.
  22. Yes, well, probably closer to say 1/4 horizontally (3888 is close to 4000) and 1/2.5 vertically. But do have a look at Stu's post above to get a feel for the size of the Moon in the image - it clearly shows how much of the image the Moon would take up.
  23. So you are interested in two things (if I'm getting that right):

1. How large would an object be compared to a certain lens? That's the simple ratio of the focal length of a given lens to the 700mm of your reflector (provided we are talking about the same camera).

2. What would the actual size of an object be in your image? For this you need to know the angular size of the object and the pixel size of your camera (roughly sensor size divided by resolution; in your case 5.7um - 5.7 micrometers - or you can look it up online). Then use the following formula: http://www.wilmslowastro.com/software/formulae.htm#ARCSEC_PIXEL to get the resolution of the image in arc seconds per pixel - in your case 1.68"/pixel. Divide the target size (in arc seconds) by that value and you get the number of pixels the target will span in the image. So the Moon, which is 30 arc minutes (1800 arc seconds) across, will be ~1070 pixels wide in the image.
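The two steps of the calculation can be sketched as follows (206.265 is the number of arc seconds per radian, scaled for micrometers over millimeters):

```python
# Image scale in arc seconds per pixel, then target size in pixels.
def arcsec_per_pixel(pixel_um, focal_length_mm):
    return 206.265 * pixel_um / focal_length_mm

scale = arcsec_per_pixel(5.7, 700)   # 5.7 um pixels on a 700 mm scope
moon_px = 30 * 60 / scale            # Moon: 30 arc minutes = 1800 arcsec
print(round(scale, 2))               # ~1.68 "/pixel
print(round(moon_px))                # ~1070 pixels across
```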
  24. In all seriousness, a telescope without an eyepiece or camera is a projection device rather than a magnifying device. As pointed out by ronin, a telescope forms an image of an object of a certain size on the focal plane. That size depends on the focal length of the telescope and the actual size and distance of the object (or, in other words, the angular size of the object). Magnification implies a conversion of scale within a single unit - so we can say the Moon is magnified x60 if its apparent angular size becomes 30 degrees, given that we see the Moon without magnification at 30' (or 0.5 degrees). A telescope on its own does not convert scale within a single unit - it does unit conversion (projection), from angular to planar / spatial, so we go from an angular size to a length. An eyepiece / telescope combination does do magnification. A telescope / prime focus camera does not magnify in that sense; it is also a projection device - with the addition of a sampling rate. Sampling can be seen as a number of pixels per mm, or a number of pixels per angular size - this is what gives the illusion of "magnification" or zoom in an image. It is not really magnification or zoom; it is just sampling resolution. The same image can be both small and big, depending on the monitor used to display it and the distance to the observer - look at any image on a computer screen from a "normal" distance and then move 10 feet away: it will look much smaller (but it is the same image, sampled at the same resolution). So resolution is not a measure of magnification / zoom in the normal sense.
  25. So as you see, the telescope is not magnifying at all! It is minifying - making the Moon, which is 3474 km in diameter, appear to be only 6.1mm!
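The 6.1mm figure follows from the projection idea above: the linear image size at the focal plane is the focal length times the object's angular size in radians. A quick check, assuming the 700mm scope from the earlier posts:

```python
import math

# Linear size of the projected image at the focal plane:
# image = focal length * angular size (in radians).
focal_length_mm = 700
moon_angular_deg = 0.5               # ~30 arc minutes
image_mm = focal_length_mm * math.radians(moon_angular_deg)
print(round(image_mm, 1))            # ~6.1 mm image of a 3474 km Moon
```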