Everything posted by vlaiv

  1. Are you sure it is exactly 1 minute? There is no significant period of 1 minute in the HEQ5. These are the significant periods for the HEQ5. Usually the 3rd and higher harmonics have minimal contribution.
  2. I just thought of an even simpler way to calculate atmospheric refraction effects. You don't need two stars at all - you just need one bright star that passes through the zenith and a good stopwatch. We know the sidereal rate quite precisely. We can measure where such a star sets on the horizon. From those two points we can calculate the angular length of the arc that the star covers, and we can then divide that by 1/4 of a sidereal day to get the "speed" at which the star is supposed to move across the sky. When the star is at the zenith we start our stopwatch and then take position measurements of that star at regular time intervals. Its motion should be uniform, but it will in fact "slow" as it approaches the horizon. If we plot this curve we get an actual/measured position graph that we can use to derive how apparent position varies with altitude.
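     A minimal Python sketch of that calculation, using the uniform-motion approximation from the post; the stopwatch readings and altitudes below are purely hypothetical numbers for illustration:

        SIDEREAL_DAY_S = 86164.1                          # sidereal day in seconds
        RATE_DEG_PER_S = 90.0 / (SIDEREAL_DAY_S / 4.0)    # "speed" at which the star should descend

        # hypothetical measurements: (stopwatch seconds since zenith, measured altitude in degrees)
        measurements = [(0.0, 90.00), (10000.0, 48.23), (20000.0, 6.55), (21400.0, 0.98)]

        for t, measured_alt in measurements:
            expected_alt = 90.0 - RATE_DEG_PER_S * t      # where the star "should" be
            refraction = measured_alt - expected_alt      # apparent lift due to refraction
            print(f"altitude {measured_alt:6.2f} deg -> refraction ~ {refraction:+.3f} deg")
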
  3. At any given place on the Earth, the zenith traces "a circle" on the celestial sphere as the Earth rotates in 24 hours. Depending on how precisely you define the zenith, there will be a limited "supply" of stars of sufficient brightness that lie on that circle. You need to be able to identify your star as it drifts through the field of view of a stationary telescope - for that reason it is good for it to be fairly bright. We know that there is no shift in position for stars that are directly at the zenith. You can then take two measurements:
     1. The position of some star when our reference star is at the zenith.
     2. The position of the same star when our reference star is somewhere else, together with the position of the reference star at that moment.
     In the first measurement the reference star will be at its exact point, and the other star will be shifted due to refraction of the atmosphere. In the second measurement both will be shifted due to refraction. You can calculate their angular distance (subtract their positions in a spherical coordinate system). With many such measurements you can plot a curve of their distances vs reference star position, which you can then use to derive the shift as a function of the altitude of the star. The fact that you have one point where that curve of distances is anchored helps you get absolute values rather than just the shape of the curve. By the way, the apparent position can be quite a bit different from the actual position:
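     One way to do the "subtract their positions in a spherical coordinate system" step in Python (spherical law of cosines; coordinates assumed to be in degrees):

        import math

        def angular_separation_deg(ra1, dec1, ra2, dec2):
            # angular distance between two points on the sphere, all angles in degrees
            ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
            cos_d = (math.sin(dec1) * math.sin(dec2)
                     + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
            return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))
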
  4. That is excellent - I never would have thought of that. Good to know that one can dither manually as well. I guess it is a bit of a chore, but it will certainly improve results.
  5. I don't think it does. If you are attached to a platform that tracks perfectly, it will not matter which direction you are looking in - you will compensate for the Earth's rotation perfectly. Cone error is about the telescope not pointing where you want it to point - not about the RA axis being pointed at Polaris or the tracking being at a perfect rate. Simple star trackers work like that - they just track the sky and don't care where your DSLR is pointing. You can have a ball head attached to the tracker and a DSLR sitting on top of the ball head pointing in an arbitrary direction (which is equivalent to a huge cone error).
  6. Maybe something strange is happening in DSS? Try stacking just one session with one set of calibration frames to see if you get the same issue.
  7. I see what you mean, and I don't really know what could be causing that. Could you include the master flat for the top image (assuming it is a single session) for comparison?
  8. An image would be helpful. A "bevel" type of issue happens if a dust particle moves. Sometimes it can happen if the camera rotates - but that will affect dust shadows differently across the frame. If a single dust doughnut is affected, that dust particle moved. If the whole frame is affected, then something strange happened - as if the whole flat frame moved a bit (maybe it was somehow aligned in stacking - no alignment is needed for flats). Under/over correction happens due to improper calibration or a light leak of sorts (same image of the dust doughnut, but brighter or darker). A change of focus will cause an outline type of error - the doughnut will be calibrated but its edges will be beveled.
  9. Mount performance won't depend on the focal length used, nor is polar alignment the only source of error. The main source of error with a mount like the HEQ5 is periodic error. There is a lot of sample-to-sample variability in how much periodic error any given HEQ5 specimen has. It is not uncommon to have 30-45" peak-to-peak error (that error happens over about 10 minutes for the whole cycle, so half of it is about 5 minutes - and that is a "one way" 30-45" drift, so you can easily have 6-9" per minute of drift).

     Depending on your working resolution (the combination of focal length and pixel size, with any binning accounted for) you will see more or less elongation in your subs depending on your exposure length. Even with a short focal length scope, periodic error can show easily if we use a camera with small pixels (which are popular/dominant now). I agree that a sensible exposure length is about 30 s to a minute before you start to see serious trailing.

     Bottom line - start unguided. The higher in declination the target is, the longer the exposures you'll be able to take without trailing (if you see large trailing, choose a high DEC target until you get your guider set up). If you see trailing, don't panic - that is sort of normal for mounts like that if you don't guide. It is best to orient the sensor so that you know the direction of RA and DEC in the image (most people orient RA to be horizontal). This way you will be able to tell if your polar alignment is not the best or if it is down to periodic error: drift in DEC is due to poor polar alignment, drift in RA is due to periodic error.

     One thing that might happen to your images is walking noise. This sometimes happens due to slow frame-to-frame drift (either in RA or DEC), and the solution is to dither, which requires guiding. Again - don't be alarmed by this.
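     A quick Python illustration of that drift arithmetic, with assumed numbers (40" peak-to-peak, a 10-minute worm period, 1.3"/px image scale); note the instantaneous drift rate can be higher still, since the error is roughly sinusoidal:

        pe_peak_to_peak = 40.0                               # assumed periodic error, arcsec
        pe_period_s = 600.0                                  # ~10 minute worm cycle
        drift_rate = pe_peak_to_peak / (pe_period_s / 2.0)   # average one-way drift, arcsec/s

        image_scale = 1.3                                    # assumed arcsec/pixel
        for exposure_s in (30, 60, 120):
            trail = drift_rate * exposure_s
            print(f"{exposure_s:4d} s exposure -> up to ~{trail:.1f} arcsec "
                  f"({trail / image_scale:.1f} px) of RA trailing")
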
  10. I guess it is just a convention that is still used. They don't actually try to fool anyone - for example, ASI labels their sensors like this: It gives the exact measurement in millimeters as well as the "inch class" - it should really be thought of as a "class" of sensor rather than an actual size. We also say, for example, 5" telescope for 120 mm to 130 mm apertures, although strictly speaking a 5" telescope is precisely 127 mm (some might be more precise and call a 120 mm a 4.7" instead).
  11. https://en.wikipedia.org/wiki/Optical_format 1/1.2" means that the sensor is roughly 1.2 times smaller than a 1" sensor. The problem is that a 1" sensor does not correspond to anything physically 1" long on that sensor. The wiki article above explains that it has to do with the early days of television and the diameter of the tube used in the camera, rather than the sensor itself. It also explains that you get a rough size of the sensor if you multiply by 2/3 (or divide by 3/2). A 1/1.2" sensor therefore has roughly (1" * 2/3) / 1.2 = (25.4 * 2/3) / 1.2 = 14.11 mm diagonal. 11.1 x 6.2 gives a diagonal of 12.71 mm. As you see, it is only a rough "class" of sensor rather than a precise measurement. For another example, both the ASI533 and ASI183 are classed as 1" sensors and should have a 25.4 * 2/3 = 16.93 mm diagonal. The first is 11.31 x 11.31, giving 16 mm, while the second is 13.2 x 8.8, giving 15.86 mm.
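     The same arithmetic in Python, using the numbers from the post:

        import math

        def class_diagonal_mm(optical_format_inches):
            # rough diagonal from the "inch class" using the 2/3 rule of thumb
            return optical_format_inches * 25.4 * 2.0 / 3.0

        def actual_diagonal_mm(width_mm, height_mm):
            # true diagonal from the physical sensor dimensions
            return math.hypot(width_mm, height_mm)

        print(class_diagonal_mm(1 / 1.2))        # ~14.11 mm for the 1/1.2" class
        print(actual_diagonal_mm(11.1, 6.2))     # ~12.71 mm actual
        print(class_diagonal_mm(1.0))            # ~16.93 mm for the 1" class
        print(actual_diagonal_mm(11.31, 11.31))  # ~16.0 mm (ASI533)
        print(actual_diagonal_mm(13.2, 8.8))     # ~15.86 mm (ASI183)
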
  12. Depends on the filter. I remember that Baader had a UHC filter with an IR leak issue. They released a new/fixed version later on. The first clue should be whether the filter is classified as a visual UHC or a photographic UHC. We can't see the IR part of the spectrum, so there is no harm in a visual UHC filter passing some of the IR spectrum.
  13. I was just about to say the same. Every new image is better than the previous one. This latest is very, very good.
  14. I use a lot of subs. There is no rule of thumb really - the same thing applies to darks and bias as it does to lights: noise goes down by the square root of the number of stacked subs. You inject this noise back into every light you calibrate, so it is worth spending some more time to minimize the noise you put back in. The good thing is that you don't have to shoot all your subs at one time. You can start one day and do 4h of darks. You can add the next batch of darks a few days later when you have the time. I usually just leave the camera in the basement, where the temperature is lower, so it is easier to hold the target temperature for 4h at a time while I'm doing something else. I've used up to 256 subs in my stacks (256 is the "default" for flats and flat darks). I start with 64 dark subs and build from there. By the way, if you don't scale darks and don't optimize them, then bias frames are really a waste of time. You only need flats, flat darks and darks for complete/proper calibration.
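     To illustrate the square-root law (the single-sub noise value here is just an assumed number):

        import math

        single_sub_noise = 3.0    # assumed noise of one dark sub, in electrons
        for n in (16, 64, 256):
            print(f"{n:4d} subs -> master dark noise ~ {single_sub_noise / math.sqrt(n):.2f} e-")
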
  15. So you took two very short subs, subtracted them, measured the standard deviation of the result and divided that by 1.414, right? That should be your read noise. Is the ASI533 a 14-bit camera? In that case you need to divide the result by 4 (you should really divide each sub by 4 before you start - that is the 2^(16 - camera_bits) part). The math involved allows dividing the final result as well (this particular case allows it, but it is not a general rule - it is best to start with subs that are first divided by 4). 8.982 / 1.414 = 6.352, and 6.352 / 4 = 1.588. That is almost 1.5 (actually closer to 1.6, but that graph is hard to read).
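     A sketch of that measurement in Python/NumPy, assuming the two subs are already loaded as 16-bit arrays (how you load them is up to you):

        import numpy as np

        def read_noise_adu(sub1, sub2, camera_bits=14):
            # undo the 16-bit scaling first (2^(16 - camera_bits), i.e. 4 for a 14-bit camera),
            # then subtract; the std dev of the difference is sqrt(2) x the single-sub read noise
            scale = 2 ** (16 - camera_bits)
            diff = sub1.astype(np.float64) / scale - sub2.astype(np.float64) / scale
            return np.std(diff) / np.sqrt(2.0)

        # from the post: 8.982 / 1.414 / 4 ~ 1.59 ADU, i.e. roughly 1.5-1.6 e- at unity gain
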
  16. I made the above measurement on the ASI1600 - I got 62 from a dark with an offset of 64, so I think the multiplier is 1 for that camera (if there is indeed an integer multiplier for each camera).
  17. Yes, it does look a bit greenish - I've noticed that too - but the data is not color calibrated and I worked with it as if it were. I just performed color scaling on a B-V 0.46 star. Any slight color cast is due to the missing proper color calibration step (not just scaling, but a 3x3 matrix multiplication). Stellarium reports that star to have a B-V of 0.26. According to this formula: https://en.wikipedia.org/wiki/Color_index it has a temperature of ~7726 K. Similarly, Gaia DR2 says it has a temperature of 7448 K. In any case, the colors corresponding to 7450 K and 7725 K are 236, 237, 255 and 231, 235, 255 respectively (source: http://www.brucelindbloom.com/index.html?ColorCalculator.html). Magnified star from my image, and the RGB colors obtained from temperature for comparison: So yes - a tiny bit towards the green, but the overall color is good, quite close to what it is supposed to be. Note that a 7700 K star is really not as blue as people make it out to be. You need 20000 K+ to really start to see a bluish color - and even then it is always grey-blue, never deep or ocean blue.
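     For reference, the B-V to temperature conversion used above (Ballesteros' formula, from the linked Wikipedia article) in Python:

        def bv_to_temperature_k(b_minus_v):
            # Ballesteros' formula for effective temperature from the B-V color index
            return 4600.0 * (1.0 / (0.92 * b_minus_v + 1.7)
                             + 1.0 / (0.92 * b_minus_v + 0.62))

        print(bv_to_temperature_k(0.26))   # ~7726 K, the value quoted above
        print(bv_to_temperature_k(0.46))   # the B-V of the star used for color scaling
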
  18. I probably was not clear enough in what I said, or rather I did not say everything I meant. I have a completely new workflow that I'm using currently and I was thinking in terms of it. I will describe it briefly so you can see that what I said makes sense. My current workflow involves working in the XYZ color space rather than RGB, as XYZ gives nice feedback on brightness since the Y component is in effect the luminance information. The steps are as follows:
      1. RAW_RGB -> XYZ color space (matrix multiplication; the matrix is derived from measurements with a calibrated device or of a calibrated source - for example, using the DSLR to shoot a tablet displaying a number of color patches, or using a calibrated computer screen; in the first case we have a calibrated device, in the second a calibrated source)
      2. Y is extracted as mono data and saved
      3. The saved Y is stretched with visual feedback (say we open it in PS/Gimp and stretch it using levels or curves), or some alternative is used like DDP or whatever one likes. At this stage noise reduction and sharpening can also be performed
      4. We save the stretched Y (let's call it st_Y from now on)
      5. We perform inverse sRGB gamma on st_Y to move it back into linear space
      6. For each pixel we compute the st_Y/Y value - this will be some coefficient c
      7. We multiply the original XYZ with c - this is what I meant by changing exposure time at the pixel level; multiplying by a constant is the same as exposing c times longer/shorter (depending on whether c is larger or smaller than 1 - it is usually larger)
      8. With the new XYZ values we perform the standard XYZ -> sRGB transform to get the final image (the standard matrix transform defined by the sRGB standard + sRGB-defined gamma)
      XYZ should be viewed as a standardized sensor used for true color processing, much like there are standardized filter responses used in photometry (like UGRIZ / UBVRI), with the benefit of Y being luminance (which makes it convenient for processing, and also handy for easily mixing in L from LRGB - we would perform the stretch on L, derive XYZ from LRGB, discard the Y from it and replace it with L; it also opens up whole new ways of imaging like LRG, as you only need three components to derive the transform matrix to the XYZ color space). The only trick is to use the appropriate matrix to transform RAW camera values to XYZ; from then on you don't really care which camera was used to record the data - the workflow is the same.
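     A minimal NumPy sketch of steps 5-8 above, assuming the RAW -> XYZ conversion (step 1) has already been done and that stretched_y is the stretched luminance in 0-1 sRGB-gamma space; the function name is just for illustration:

        import numpy as np

        def stretch_preserving_color(xyz, stretched_y):
            # xyz: (H, W, 3) float array of linear XYZ data; stretched_y: (H, W) stretched luminance

            # step 5: undo sRGB gamma to bring the stretched luminance back to linear space
            st_y = np.where(stretched_y <= 0.04045,
                            stretched_y / 12.92,
                            ((stretched_y + 0.055) / 1.055) ** 2.4)

            # step 6: per-pixel "exposure" coefficient c = st_Y / Y
            y = xyz[..., 1]
            c = np.divide(st_y, y, out=np.ones_like(y), where=y > 0)

            # step 7: scale all three XYZ components by c (same as exposing c times longer)
            xyz_scaled = xyz * c[..., None]

            # step 8: standard linear XYZ -> linear sRGB matrix, then sRGB gamma
            m = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])
            rgb_lin = np.clip(xyz_scaled @ m.T, 0.0, 1.0)
            return np.where(rgb_lin <= 0.0031308,
                            rgb_lin * 12.92,
                            1.055 * rgb_lin ** (1 / 2.4) - 0.055)
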
  19. I would not count on the offset number corresponding to electrons of offset. You should set the offset high enough that you don't get clipping (in fact, I advocate not having any pixel at the lowest value in a dozen or so consecutive subs - but I'm probably overdoing it). After that, basically forget about it - use the bias average ADU value as a baseline, or simply use calibrated subs.
  20. In principle, the offset should correspond to the bias level. It should be the offset in electrons prior to any gain being applied, but I've found that it is not reliable. It depends on the driver implementation and it is not always correct. I've just pulled one of my 60-second darks. I set my offset to 64, yet the average ADU of the dark sub is 62.81 ADU. This is with dark current (which is not much - less than 1 e- total at 60 s at -20C - but still, the value should be higher than 64, not lower). You can always check how it corresponds to bias levels for your camera: set the offset to some value, take a bias sub, load it in ImageJ, divide by 16 and then multiply by the e-/ADU value if you are using a gain different from unity, and see what average pixel value you get. In theory it should be the same as the offset you set - but it might not be.
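     The equivalent check in Python rather than ImageJ (the file name is hypothetical; the divide-by-16 applies to a 12-bit camera saved as 16-bit data):

        import numpy as np
        from astropy.io import fits

        bias = fits.getdata("bias_offset64.fits").astype(np.float64)   # hypothetical file
        e_per_adu = 1.0                                                # unity gain assumed
        print(np.mean(bias / 16.0) * e_per_adu)   # ideally ~64 for offset = 64, but it may differ
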
  21. If you match the resolution between two scopes, then speed is the ratio of aperture surfaces. A 10" scope will be x4 as fast as a 5" scope (minus central obstruction and mirror reflectivity) by having x2 the aperture by diameter, which makes it x4 the aperture by surface. There are x4 more photons captured per unit time. Or in other words, if you image with the 10" scope for 1h, you will need to image for 4h with the 5" scope to match the SNR (because in one hour it will capture x4 more photons - the same as imaging for x4 longer with the smaller aperture). This is irrespective of F/ratio. It plays no part in this equation (it is already included, as it is just the ratio of aperture to focal length, and we have the aperture, while the focal length is included via the resolution / sampling rate).
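     The same argument as a couple of lines of Python:

        d_big, d_small = 10.0, 5.0               # aperture diameters (any unit, same for both)
        speed_ratio = (d_big / d_small) ** 2     # photon gathering scales with aperture area
        print(speed_ratio)                       # 4.0 -> 1 h on the 10" ~ 4 h on the 5"
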
  22. Just to be different from the crowd, I'm going to say the ASI385. Same QE and low read noise as the ASI224, but a larger sensor, which is a bonus for the Moon. It will take fewer panels to cover the lunar disk.
  23. If you are interested in the spectroscopy side of things, it is pretty "easy". It is best to treat pixels as having double the size - but you will have lower QE as you spread the spectrum more. After you debayer your image (in the above case it is best to use a split debayer), you will have three channels, and all you need to do is "stack" them using max - the highest of the three values. This will produce a "mono" sensor with a somewhat strange QE curve, but you don't really care about that as you will remove the instrument response from your data (by capturing a known spectrum). By using max you will get a sensor with the response curve outlined above (it will have the response of the highest component at each wavelength).
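     A small NumPy sketch of that max "stacking" of the three debayered channels (r, g, b assumed to be float arrays of the same shape):

        import numpy as np

        def max_combine(r, g, b):
            # per pixel, keep the highest of the three channel values -> pseudo-mono frame
            return np.maximum(np.maximum(r, g), b)
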
  24. If you followed all the same steps and have different values from mine, then one of us has it wrong (or both of us - but one thing is certain, we can't both be right). Did you do the procedure on the same data? While LP changes during the night, I doubt that LP levels can double.