vlaiv


Everything posted by vlaiv

  1. Good point, but here is a "counter argument". First, you don't need someone else spinning for you to spin - you can eject particles and set yourself spinning. Now imagine the accelerating expansion of the universe and the fact that things can disappear beyond the horizon. You shoot some photons from a flashlight attached to your head - like observing light - directed perpendicular to your body. It sets you spinning and the photons shoot off. At some point those photons will be causally disconnected from you, once they cross the event horizon. Now you are left alone in space - spinning without a reference point. I know - not a really realistic scenario, but still...
  2. Oh dear, not sure what happened here - the ASI1600 Pro camera has a resolution of 4656×3520. The sub that you attached in your post is in fact 5496x3672 pixels. The FITS header also says it has a 2.4um pixel size, while the ASI1600 Pro has 3.8um pixels. The stats from the FITS header correspond to an ASI183 - did you attach the wrong sub by any chance, or did you perhaps get the model of your camera wrong? In any case the offset is wrong; there is clipping to the left, as the histogram of your dark sub shows: There is something wrong with the sub regardless of all of that - here is what it looks like stretched - almost half of it is much brighter than the rest. It should not look like that. It might be due to the capture application, but there might also be a fault in the camera. What capture app did you use? (A quick way to check the header values is sketched below.)
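
     A minimal sketch of how one might verify the frame dimensions and reported pixel size from the FITS header, assuming astropy is available; the file name is a placeholder, and the XPIXSZ keyword, while common in capture software, is not guaranteed to be present:

     ```python
     # Inspect a FITS sub to confirm frame dimensions and reported pixel size.
     from astropy.io import fits

     with fits.open("dark.fits") as hdul:               # placeholder file name
         hdr = hdul[0].header
         data = hdul[0].data
         print("Frame dimensions:", data.shape)         # e.g. (3520, 4656) for an ASI1600
         print("Pixel size (um):", hdr.get("XPIXSZ"))   # keyword used by many capture apps
     ```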
  3. Could be due to low offset. What offset value are you using?
  4. I gave the above example to show a couple of things. The first one is very "mundane" - the same way objects in a gravitational field follow curved trajectories, one's legs, following a curved trajectory while spinning, will experience a "force". Neither of the two "forces" needs force-carrying particles to be exchanged to give rise to the force - both are a consequence of "curved space-time". We might as well call them "pseudo" forces (as one often does for the centrifugal force). The other thing has baffled me for as long as I can remember. Straight uniform motion is relative. If you are floating in empty space and don't feel anything, you have no way of knowing that you are "moving with respect to something". If you are accelerating along a direction you will feel that as a force in your reference frame - but you need energy expenditure to do so. Again, what is the thing that you are accelerating with respect to? But rotation around one axis is the strangest of all - if you are spun and left on your own, you will feel the above-mentioned forces, although the same would happen in completely empty space with no reference point. You are rotating with respect to what? And you no longer even have energy expenditure, unlike in the accelerating case. All of this "sounds" very counter-intuitive, but I think there is a very "reasonable" explanation. Not sure if anyone has actually thought of it that way; maybe this is the path to a GUT. We need to examine how waves behave in certain space-time configurations, and I'm sure that all of the above will emerge from the behavior of waves. The math of it will show all these effects. After all - everything that exists is in fact a wave in quantum fields.
  5. Let's for the moment put gravity aside and consider the following example: you find yourself in empty space, far from the gravitational effects of other bodies, and you are imparted a spin about an axis that goes through your chest (front to back). You suddenly feel that "something is pulling" quite strongly on your legs, and there is a similar sensation in your head but to a lesser extent. There is no force applied, yet you feel tension throughout your body - "something is stretching" you, and the faster you spin, the more uncomfortable it becomes. What is it that is doing this? There is no force, yet you feel this effect.
  6. DSS has two tabs - one for FITS files and one for raw files (those from a DSLR). Are you sure you are looking at the right tab?
  7. Well, a fact of life is that OSC sensors actually sample at half the resolution - nothing to do with the problem you are facing, but I just wanted to mention it. What you have here is debayering using super pixel mode, which combines 4 adjacent pixels into a single RGB pixel value (see the sketch below). Other debayering algorithms work by interpolating the missing values (but they can't restore the missing information at high frequencies, so you get your image at "native" resolution while it is still effectively sampled at a lower rate). If you want a larger image - select some other debayering algorithm.
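
     A minimal sketch of super pixel debayering for an RGGB pattern, assuming the raw CFA data is already loaded into a numpy array; each 2x2 cell collapses into one RGB pixel, halving both dimensions:

     ```python
     import numpy as np

     def superpixel_debayer_rggb(raw):
         r = raw[0::2, 0::2]                             # red photosites
         g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0   # average of the two green photosites
         b = raw[1::2, 1::2]                             # blue photosites
         return np.dstack([r, g, b])                     # (H/2, W/2, 3) RGB image

     raw = np.random.randint(0, 65535, (3520, 4656)).astype(np.float32)  # stand-in CFA frame
     print(superpixel_debayer_rggb(raw).shape)           # (1760, 2328, 3)
     ```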
  8. There is a huge difference between observing from Bortle 3/4 and Bortle 7/8. I'll give you my experience. At Bortle 3/4, a 100mm ST102 scope could easily show at least 4 of the larger galaxies from Markarian's Chain. At Bortle 7/8, an 8" Dob will struggle to show M51 (just two cores and a hint that something might be there), while M81/M82 could be seen most of the time, but very faint. As a contrast, with the same scope at Bortle 3/4 on the same M81/M82 targets, I was under the impression I was looking at car headlights. M51 showed spiral structure and the bridge and looked almost like it does in images. So even an 8" in heavy LP won't render anything close to an 80mm under dark skies.
  9. How much back focus you have is only important if you can't reach focus at the required sensor-to-flattener distance. I think you should have no problems with the amount of back focus needed, as flatteners often move the focal point inward (particularly those that act as reducers as well).
  10. Technically it's astigmatism. The angle is too small to produce an elliptical cross section of the converging light cone (there is a small contribution from that effect, but here the primary effect is astigmatism). There is in fact a bit of coma from what I can tell - pure astigmatism is a symmetric aberration, so the star shape should be elliptical, but here there is a bit of coma in the far corners. In any case, this mix of aberrations is due to the wrong flattener-to-sensor distance.
  11. Distance is the issue for sure. The actual prescribed distance for a flattener only applies if the flattener is matched to that particular scope - otherwise it is a general guideline, and one should try different spacings to get good results. The above diagram should be right in principle, but again it depends on the optical configuration of the field flattener. I don't have enough knowledge of different flattener designs (if there are indeed different configurations) to provide more detailed insight into all of that, but I suspect that both things - spacing and the above diagram - should be taken as guidelines rather than fact, unless specified exactly for the flattener/scope combination by the manufacturer. In any case, the answer is trial and error. There might even be a case where you can't find a "perfect" distance - there is always some level of aberration in the corners. This can happen if the flattener does not correct a large enough field to cover the whole sensor. The 16200 is a large sensor with a 34.6mm diagonal - certain flatteners might have difficulty correcting stars that far off axis (not saying that the TS one is such a flattener, but it can happen).
  12. Well, I still don't understand the t-test completely (I will need a bit more reading), but here is a quick calculation of it with an online calculator: I just took a set of bias subs from the other camera, split them into two groups, calculated the mean ADU value of each sub, treated those numbers as the members of group1 and group2 for the t-test and performed the calculation. The result speaks for itself - it looks like these two groups of bias files (although from the same camera, taken in succession) don't belong to the same distribution. Not sure if this is the test you had in mind; if not, can you expand on what I should do? (The same calculation in script form is sketched below.)
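
     A minimal sketch of the same two-sample comparison using scipy, assuming each group holds the per-sub mean ADU values; the numbers below are placeholders, not the actual measurements:

     ```python
     import numpy as np
     from scipy import stats

     group1 = np.array([363.4, 363.5, 363.3, 363.6, 363.4, 363.5, 363.4, 363.3])  # placeholder means
     group2 = np.array([369.4, 369.3, 369.6, 369.5, 369.4, 369.2, 369.5, 369.4])  # placeholder means

     t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=False)  # Welch's t-test
     print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
     # A very small p-value indicates the two groups are unlikely to share the same mean.
     ```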
  13. I thought about that, but I can't figure out how to properly do the calculation. My simplistic (though possibly inaccurate) method would be: bias noise per pixel is of the order of 10e. There are about 8.3Mp on this sensor, so the average value of all of those pixels should be some value +/- 10 / sqrt(8300000) = 0.0035e (see the sketch below). So 99.7% of the time one would expect the average of a bias (the average value of all pixels in the bias) to be some value +/- 0.01e (3 sigma). This indicates that, for the most part, if it were only down to the read noise of the pixels, the average bias value should be fairly constant. There seems to be some other noise component related to bias in the case of the ASI1600, as a similar calculation performed on my darks would give something like 0.001 sigma, so most frames would be +/- 0.003, yet as we have seen from the above table most values are in the range ~63.945 - ~63.971, so the actual range is about 4 times as large, or +/- 0.013. Having said all of that, and comparing the ASI1600 results with my rough calculation - here the bias varies by two orders of magnitude more than expected in the case of these two cameras. The other explanation is of course that my approach to estimating the variance of the mean bias value is completely wrong.
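
     A minimal sketch of the standard-error reasoning above - if every pixel carries roughly 10e of independent read noise, the frame mean should only wander by that noise divided by the square root of the pixel count (figures taken from the post):

     ```python
     import math

     read_noise_e = 10.0      # per-pixel read noise, rough figure
     n_pixels = 8_300_000     # ~8.3Mp sensor

     sem = read_noise_e / math.sqrt(n_pixels)
     print(f"expected 1-sigma scatter of the frame mean: {sem:.4f} e")  # ~0.0035 e
     print(f"3-sigma range: +/- {3 * sem:.3f} e")                       # ~0.01 e
     ```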
  14. Here is what the difference looks like for camera 1 - "normal" subs vs "higher ADU" subs: I stacked the normal subs into one stack and the higher-ADU subs into another (average stacking in both cases) and then subtracted the resulting stacks. There is an obvious gradient in the difference of the two. Here is again a comparison of the two groups - side by side, this time using stacks instead of single subs (one stack has 13 subs, the other only 3 and is therefore noisier). The higher-value stack was "shifted" to get the same average ADU value, and the same linear stretch was again applied. I added a dark background so any edge brightening can be seen more easily: What could be causing the difference? For a moment I suspected a light leak of sorts, but I doubt it would be visible in 0.001s bias subs and not in 300s darks. (The stack-and-subtract step is sketched below.)
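
     A minimal sketch of the comparison described above - average-stack each group of bias subs and subtract the results; the file names and group sizes are placeholders:

     ```python
     import numpy as np
     from astropy.io import fits

     def average_stack(paths):
         frames = [fits.getdata(p).astype(np.float64) for p in paths]
         return np.mean(frames, axis=0)

     normal_stack = average_stack([f"bias_normal_{i:02d}.fits" for i in range(13)])
     high_stack   = average_stack([f"bias_high_{i:02d}.fits" for i in range(3)])

     diff = high_stack - normal_stack          # any structure here is a real group difference
     print("mean difference (ADU):", diff.mean())
     fits.writeto("bias_group_difference.fits", diff.astype(np.float32), overwrite=True)
     ```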
  15. It does not look like there is a significant difference; here is one sub with the "normal" mean ADU level and one sub with the higher ADU level, side by side. I subtracted a uniform level of 6.6 from the higher-ADU one and applied the same linear histogram stretch to both: The FFT of both frames shows pretty much the same thing (this is the central part of each FFT, next to each other; the FFT step is sketched below):
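
     A minimal sketch of that FFT check - compute the 2D power spectrum of a sub and keep only the central (low-frequency) region for side-by-side inspection; the input frame here is a random placeholder:

     ```python
     import numpy as np

     def central_power_spectrum(sub, half_size=128):
         spectrum = np.fft.fftshift(np.fft.fft2(sub - sub.mean()))
         power = np.log1p(np.abs(spectrum))               # log scale for easier viewing
         cy, cx = power.shape[0] // 2, power.shape[1] // 2
         return power[cy - half_size:cy + half_size, cx - half_size:cx + half_size]

     sub = np.random.normal(360.0, 10.0, (3520, 4656))    # placeholder bias frame
     print(central_power_spectrum(sub).shape)             # (256, 256)
     ```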
  16. Recently I had a chance to do some measurements on bias subs from CCD sensors (two in total) and I've found something that really confused me. At first it seemed like an issue with the power source, but a second measurement on one of the sensors, with what could have been a power issue corrected, gave similar if not even worse results. I have not taken the subs myself, but I have no reason to believe there were any issues with acquisition (-20C, 0.001s exposure, no light - the regular "protocol" for bias subs).

      My intention was to measure read noise, so the idea was to take 16 bias subs, split them into two groups, stack, subtract and measure the standard deviation of the result (corrected for stacking addition) - a sketch of this procedure is included below. I expected this sort of bias - bias calibration to produce nice uniform noise with a mean value of 0. Neither set of data actually produced such a result. There was a residual mean value quite different from 0 in each case. This should not happen, so I inspected the mean values of the bias subs in each run and found what I can only describe as instability of the bias. Here are the results:

      Camera 1, 16 bias measurement, nothing done with the data (no e/ADU conversion or anything - values in the table are measured ADU values straight from the 16-bit fits): I pointed out the funny results. I would expect the bias mean value to roughly behave as in the other 13 subs in this list - slight variation around some value, in this case 363.4 or thereabouts. What happened with those three outlined subs?

      Camera 2, again the same measurement done on 16 bias subs as with the above camera (same sensor, but a different vendor, hence different offset and e/ADU values - the resulting values will differ, but I expect the same to hold - the bias mean value should stay very close to some value across measurements): Here we see much more scatter and larger deviation. Mean ADU levels per sub vary between ~655 and ~661.5. Not something that I would expect. The standard deviation varies much less than the mean ADU value, so read noise remains pretty much the same.

      Camera 2, second take with different power supply conditions (and probably changed gain, since mean values are larger than in the first batch of subs from that camera, but stddev is lower): This one is even more interesting - very large variation in mean ADU values, almost sorted in descending order - a difference of about 20 ADU from the first to the last bias sub.

      The set of associated darks shows a similar thing (taken at -20C, 300s exposure, again 16 of them used). Camera 1: This time only two subs out of 16 had a +6 ADU increase (the same thing as with the bias for this camera), while the mean value of the other subs is relatively stable (there might be a slight rising trend - probably due to cooling). Camera 2 (set1 and set2 - same 300s, -20C): One can say that these variations in the mean ADU level of the darks are associated with cooling (how fast it reaches the set temperature and whether there is overshoot), but I don't think they are - look at the noise values, they don't follow the same trend. It is bias related.

      I don't own a CCD camera, nor have I ever worked with one, but I think I have a fairly good understanding of what should be expected in the above tables. As a comparison, here is a similar measurement done on 16 subs from my ASI1600 at -20C, unity gain, 240s exposure: Mean values are nice and "uniform". Does anybody have a clue what is going on with the above CCD sensors and their bias?
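
     A minimal sketch of that read-noise measurement, assuming 16 bias subs with placeholder file names: split into two groups of 8, average-stack each, subtract, and scale the standard deviation of the difference back to a single-sub value:

     ```python
     import numpy as np
     from astropy.io import fits

     paths = [f"bias_{i:02d}.fits" for i in range(16)]            # placeholder names
     frames = np.array([fits.getdata(p).astype(np.float64) for p in paths])

     group_a = frames[:8].mean(axis=0)
     group_b = frames[8:].mean(axis=0)
     diff = group_a - group_b

     # std of (mean of 8 subs) - (mean of 8 subs) equals sigma * sqrt(2/8), so scale back up.
     read_noise_adu = diff.std() * np.sqrt(8 / 2)
     print(f"read noise: {read_noise_adu:.2f} ADU (multiply by e/ADU gain for electrons)")
     print(f"residual mean of difference: {diff.mean():.3f} ADU (expected to be ~0)")
     ```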
  17. Yes, I'm well aware that you introduce additional noise when doing calibration. You can, however, control how much noise you introduce. You add both dark current noise and read noise back in when you use a master dark. If you, for example, dither and use 64 dark subs to create the master dark, you are in fact raising both dark current noise and read noise by about 0.8% (worked out below). Too much? Use more dark subs to create the master dark. Is there an actual reason to use dark calibration if dark current is low and uniform? Yes there is - without removing the dark signal (even a really small one, like less than 0.5e per sub) you will have wrong flat calibration. Flat calibration should operate on the light signal alone. Not removing the dark current makes it operate on both - you will be "correcting" what is a uniform offset of dark current, thus creating an "imprint" of the master flat on your image.
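
     A small sketch of where the ~0.8% figure comes from: the master dark built from N subs carries its own residual noise, which adds in quadrature, so the total noise grows by a factor of sqrt(1 + 1/N):

     ```python
     import math

     for n_darks in (16, 32, 64, 128):
         factor = math.sqrt(1 + 1 / n_darks)
         print(f"{n_darks:3d} darks -> noise increase of {100 * (factor - 1):.2f}%")
     # 64 darks give ~0.78% (the ~0.8% quoted above); more darks shrink it further.
     ```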
  18. Why people choose not to use darks with set-point cooling is beyond me. Not much is lost if one uses a cloudy night to build a master dark. As for the original question, the 460ex has about 0.0004 e/px/s at -10C per the original specs - and that is really low dark current. It means that in a 10 minute sub the average dark current per pixel at -10C will be 0.24e and the associated noise will be less than 0.5e - much lower than the read noise of said camera, which is 5e. Sensors have a doubling temperature of about 6C, so going to -16C will slightly improve things - a 10 minute sub will have about 0.12e of dark current and associated noise of ~0.35e, so not much improvement over 0.5e. It is the same going warmer - at -4C the dark current will be about 0.48e and the associated noise ~0.7e - again nothing to be worried about. (The arithmetic is sketched below.)
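
     A minimal sketch of that arithmetic - dark current halves for every ~6C of cooling, and its shot noise is the square root of the accumulated signal; the 0.0004 e/px/s figure is the one quoted above:

     ```python
     import math

     dark_current_ref = 0.0004   # e/px/s at -10C
     ref_temp_c = -10.0
     doubling_temp_c = 6.0
     exposure_s = 600            # 10 minute sub

     for temp_c in (-16.0, -10.0, -4.0):
         rate = dark_current_ref * 2 ** ((temp_c - ref_temp_c) / doubling_temp_c)
         signal = rate * exposure_s       # accumulated dark signal, e/px
         noise = math.sqrt(signal)        # shot noise of that signal, e/px
         print(f"{temp_c:+.0f}C: {signal:.2f} e dark signal, {noise:.2f} e noise")
     ```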
  19. If you want to do proper calibration - yes, you do. There are certain cases where you can get away with not using darks, and in the case where one does not have set-point temperature it can be difficult to apply them properly, so you need to be careful. Not doing darks can lead to problems with flat calibration. You need to try different scenarios and see which one works for you. If you want to try without darks - use bias subs instead wherever darks are required (for dark calibration of lights and for creating the master flat). If you decide to use darks, then you should do dark optimization (some software has an option for this) - it is an algorithm that tries to compensate for mismatched temperature in your darks. It is not a bullet-proof technique either, and it will depend on the characteristics of your camera. Some cameras have issues with bias subs and can't use dark optimization / dark scaling. (One common form of dark scaling is sketched below.)
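
     A rough sketch of one common form of dark scaling (not necessarily what any particular package implements): after bias removal, scale the master dark by a factor k chosen to minimise the residual noise of the calibrated light:

     ```python
     import numpy as np

     def optimise_dark_scale(light, master_dark, master_bias):
         light_d = light - master_bias
         dark_d = master_dark - master_bias
         best_k, best_std = 0.0, np.inf
         for k in np.linspace(0.0, 2.0, 201):     # brute-force search over scale factors
             residual_std = (light_d - k * dark_d).std()
             if residual_std < best_std:
                 best_k, best_std = k, residual_std
         return best_k

     # usage: k = optimise_dark_scale(light_frame, master_dark, master_bias)
     # then calibrate as: light - master_bias - k * (master_dark - master_bias)
     ```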
  20. Exposure length depends on several factors. There is a limited amount of signal that the sensor can accumulate, and if you "overdo it" it will end up saturated and the actual signal in the saturated areas is lost. This happens on bright parts of the target and in star cores in long exposures. For this reason you add some short exposures to fill in the missing signal - like Olly mentioned. Apart from those short "filler" exposures, the best exposure length is as long as you can manage. Fewer long subs will give a better result than more, shorter exposures for the same total integration time. However, at some point you reach a place where the difference is so small that it is not worth going longer - any difference that exists will be too small to be perceived. This depends on the read noise of the camera and the other noise sources. Once read noise is sufficiently small compared to the other noise sources, the difference in the final result becomes small (see the sketch below). CCD cameras have larger read noise and thus require longer exposures. CMOS cameras have lower read noise and can use shorter exposures. One can work out rather easily what the best exposure length is for their setup (scope / camera / light pollution levels), but even then you don't need to go with that exact exposure to get good results. Sometimes people choose to use shorter exposures for other reasons - like tracking / guiding precision, bad weather and such. If there is a wind gust or some other event that causes a sub to be ruined, less data is wasted if the sub exposure is short (it's better to discard one 2 minute sub than a 15 minute one - less time wasted).
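
     A minimal sketch of why read noise sets a practical sub length: for a fixed total integration time, stacking N subs adds N read-noise contributions in quadrature, so the penalty shrinks as subs get longer. The sky rate and read noise values are placeholders:

     ```python
     import math

     total_time_s = 3600.0    # one hour of total integration
     sky_rate = 2.0           # sky background flux, e/px/s (placeholder)
     read_noise_e = 7.0       # typical CCD; try ~1.7 for a low read noise CMOS

     for sub_length_s in (30, 60, 120, 300, 600):
         n_subs = total_time_s / sub_length_s
         sky_noise_sq = sky_rate * total_time_s            # shot noise variance of the sky
         read_noise_sq = n_subs * read_noise_e ** 2        # read noise variance of the stack
         total_noise = math.sqrt(sky_noise_sq + read_noise_sq)
         print(f"{sub_length_s:4d}s subs: stack noise {total_noise:.1f} e")
     # Once the read-noise term is small next to the sky shot noise, longer subs
     # barely change the total - that is the point of diminishing returns.
     ```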
  21. Yes, field rotation is due to the RA axis of the scope not being perfectly parallel to the axis of Earth's rotation. Over the course of the night, although you don't touch the scope and camera, the resulting shot will be slightly rotated in relation to the first one (there will be a progression in rotation angle between subsequent frames). This is not an issue as long as the angle is small during the course of a single exposure. Software that stacks subs can correct for any rotation between subs. It only becomes a problem if your polar alignment error is big and there is visible rotation of the field during a single exposure - stars in the corners will start to trail while those in the center will still be circular. There is no such thing as less guiding - you either guide or you don't. But yes, you are right, the need for guiding depends on exposure length. If you are using short exposures there is less need for guiding, but that depends on the mount. Some mounts don't even need guiding for exposures up to a minute or so. Some mounts show trailing of stars after a dozen or so seconds. The better the mount, the longer an exposure it can handle without guiding. Mount performance is a complex topic and there are many aspects of it that are related to guiding. If you are guiding, you should not worry much about peak-to-peak periodic error as long as your mount is mechanically smooth - guiding is going to take care of that. On the other hand, a mount with small periodic error that is "rough" and whose position error changes abruptly will be hard to guide with good results.
  22. It works by having a second camera / telescope (or an attachment to the primary scope called an OAG - off-axis guider) that monitors the position of a star (called the guide star). If that position changes by a small amount, it adjusts the pointing of the telescope to compensate. These corrections are made over short time periods - on the order of a few seconds (a conceptual sketch of the loop is below). You still need good polar alignment and all the rest when autoguiding, as polar alignment error causes more than tracking issues - there is field rotation as well. Polar alignment does not need to be very strict in the case of guiding, but it is beneficial to get it right.
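
     A purely conceptual sketch of such a correction loop (not any particular guiding software); measure_star_position and pulse_mount are hypothetical stand-ins, and a real guider would also calibrate the camera axes against the mount axes:

     ```python
     import time

     GUIDE_RATE_PX_PER_S = 5.0   # how fast a correction pulse moves the star (placeholder)
     AGGRESSIVENESS = 0.7        # apply only part of the measured error each cycle

     def guide_loop(measure_star_position, pulse_mount, reference, cycle_s=2.0):
         while True:
             x, y = measure_star_position()               # centroid of the guide star
             err_x, err_y = x - reference[0], y - reference[1]
             # convert the pixel error into pulse durations and nudge the mount back
             pulse_mount(ra_ms=1000 * AGGRESSIVENESS * err_x / GUIDE_RATE_PX_PER_S,
                         dec_ms=1000 * AGGRESSIVENESS * err_y / GUIDE_RATE_PX_PER_S)
             time.sleep(cycle_s)
     ```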
  23. I came to a different conclusion about my ASI1600. The level of bias was not equal in minimal-exposure subs (a few milliseconds) vs longer-exposure darks. In fact I had a higher average sub/stack value in the bias than in 15 second darks. Otherwise the bias is stable across sessions - two master bias files created at different times will have the same average value and will calibrate out normally.
  24. I'm not sure that you can fit a meaningful curve to the above table. Here is an excerpt from the Telescope Optics .net piece on field curvature. Different scopes have different lens surfaces - there is a difference between a doublet and a triplet, the F/ratio will play a part, as will the focal length. The glass type will also play a part (note the indices of refraction in the expression).