Everything posted by vlaiv

  1. You have to take into account that the same algorithm stacked all three channels into their respective stacks - and therefore aligned the subs for each color while stacking. Now - the subs were not all taken at the same time, and atmospheric effects could change between the first and last sub. Look at this crop, upsampled (nearest neighbor - we can see the pixel squares), and the animated gif made by "blinking" the three channels: we can see the distortion you are talking about - all stars change shape slightly, going from round to elongated - but the large stars change position while the smaller stars stay in the same place??? How can that be if it is down to optics?
  2. It is a fairly simple explanation really, and one worth understanding. Noise adds like linearly independent vectors - which really means square root of sum of squares (like finding the length of a vector from its projections onto the X and Y coordinates - here X and Y are linearly independent - you can't express X with Y and vice versa). It is also the same as finding the hypotenuse of a triangle. Here an image will help: the longer one side is with respect to the other, the smaller the difference between the hypotenuse and the longer side. It is up to you to define what "large enough" means - and based on that you can define how much background signal you want.
     In this particular case we are putting read noise against LP noise as the two dominant noise sources - or rather, we are letting LP noise become the dominant noise source with respect to everything else and observing how much larger it is than the read noise. ASI1600 has read noise of about 1.7e at unity gain. My rule is - make LP noise x5 larger than read noise. It is a very "strict" rule - you can choose the x3 rule which is more relaxed - but let's look at how each behaves.
     With the x5 rule - we have 1.7e read noise and 8.5e LP noise. LP noise is the square root of LP signal, so LP signal is 8.5^2 = 72.25e. Since we are at unity gain, but ASI1600 uses 12 bits - we need to multiply that by 16 to get DN - that gives us 1156DN.
     But let's see, in percentage, how much read noise + LP noise is larger than LP noise alone: sqrt(1.7^2 + (5*1.7)^2) = 1.7 * sqrt(1 + 5^2) = 1.7 * sqrt(26) = ~1.7 * 5.09902. The ratio is 5.09902 / 5 = ~1.0198, or about 1.98% = ~2%. Only two percent of difference between sky noise and sky noise + read noise - hence read noise makes minimal impact.
     Let's do the math for the x3 rule: this time we have sqrt(1 + 9) / 3 = sqrt(10) / 3 = ~1.0541 = ~5.41% - now we have 5.41% difference between pure LP noise and LP + read noise. Btw the x3 rule will give us: (1.7*3)^2 * 16 = ~416DN.
     That is the point behind any particular number - how much difference you want read noise to make. I usually use the 2% difference rule - but you can say "I'm happy with 10% difference" - and use rather shorter exposures.
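     To make the arithmetic above easy to replay, here is a minimal Python sketch (the 1.7e read noise and the x16 DN conversion for the ASI1600 are from the post; the function name and structure are mine):

     ```python
     import math

     READ_NOISE_E = 1.7   # e-, ASI1600 at unity gain (from the post)
     ADU_PER_E = 16       # 12-bit data scaled to 16 bits -> multiply by 16

     def swamp_stats(factor):
         """LP noise set to factor * read noise; return target DN and SNR penalty."""
         lp_noise = factor * READ_NOISE_E             # e-
         lp_signal = lp_noise ** 2                    # shot noise: noise = sqrt(signal)
         total = math.sqrt(READ_NOISE_E ** 2 + lp_noise ** 2)
         penalty = total / lp_noise - 1               # extra noise due to read noise
         return lp_signal * ADU_PER_E, penalty

     for factor in (3, 5):
         target_dn, penalty = swamp_stats(factor)
         print(f"x{factor} rule: background ~{target_dn:.0f} DN, "
               f"read noise adds ~{penalty:.2%}")
     # x3 rule: background ~416 DN, read noise adds ~5.41%
     # x5 rule: background ~1156 DN, read noise adds ~1.98%
     ```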
  3. Do you have any idea how they arrived at that number? It is better to understand why a particular number is chosen than to just use the number. Not really - that is not over exposure at all. You are right - you have shown, using a bit of pixel math, areas of the image that are over exposed - mostly stars. You are also right that it will lead to color distortion in the stars that are saturated. There is a very simple procedure to remedy that - take just a few short exposures at the end of the session to use as "filler" subs for those over exposed stars. Since we are dealing with limited full well capacity in any case - at any gain - there will always be some stars that saturate our sensor for a given exposure - so it is better to just adopt an imaging style that overcomes that in every case: a few short exposures. Is a 5000DN background a bad thing? No, it is not - if the subs are otherwise ok at that duration, overall SNR will improve over short subs for the same imaging time. It will be a very small improvement over the recommended value (I would say 1156DN is a "better" number), but there would still be some improvement. Only when read noise is 0 is there no difference.
  4. Not sure - I could not work with the M106 data as it is in PixInsight format and not FITS like the first set of data. I did try a different alignment process on the first set of data and it corrected some of the issues. I checked star R, G and B centroids in two opposite corners and both showed a 0.1px error. I'll redo it and post results. Maybe I could do the same for M106 if you post FITS files? Do you have luminance for these images?
  5. Yes I do. It is about dark calibration (and implicitly flat calibration, since it depends on dark calibration). Here is an example: here I generated something very close (in signal distribution) to a dark frame. I used sigma 1 and a mean value of 2. The distribution looks ok - it is a nice bell shape, but some values are below 0. The camera can't record such values - it uses unsigned numbers as the result (photon count is a non-negative value). Look what happens when I limit values to 0 or above: this raises the mean value of the dark - as if someone added a DC component to the image (it also does some nasty things to the distribution, so stacking of the master dark won't work as it would if the distribution were unaltered).
     You won't have that DC component in your lights, as the signal acts as an offset and there won't be histogram clipping. The dark (and bias) component in the lights won't be clipped, so the lights won't pick up that additional DC offset. Dark calibration fails for this reason: after dark subtraction your lights don't contain only light signal, but also the negative of the DC component we just saw.
     Now when you apply flat calibration - you are correcting both the light, which is affected by attenuation, and the DC component, which is not affected by attenuation - as it did not come in the form of light through the objective and the pixel lenses on the camera. This makes your flat calibration fail as well.
     In general - you want to avoid all of the above - and that is the reason you can adjust offset in the first place. Short sub imaging is more susceptible to this than long exposure - because in long exposures, dark current can be enough to overcome this, put the histogram on the right side of zero and prevent clipping.
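     A quick numpy sketch of that clipping effect, under the same assumptions as the example above (sigma 1, mean 2; the seed and sample count are arbitrary):

     ```python
     import numpy as np

     rng = np.random.default_rng(0)
     dark = rng.normal(loc=2.0, scale=1.0, size=1_000_000)  # ideal dark: mean 2, sigma 1

     clipped = np.clip(dark, 0, None)  # camera stores unsigned counts - negatives become 0

     print(f"true mean:    {dark.mean():.4f}")     # ~2.0000
     print(f"clipped mean: {clipped.mean():.4f}")  # ~2.0085 - the unwanted DC offset
     ```

     Subtract such a master dark from unclipped lights and that small excess is removed from every pixel as a negative DC component - exactly the failure described above.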
  6. You should if you do full calibration. Darks for planetary / lunar / solar are rather short and so are flat darks. Short darks are almost the same as bias (not enough dark current to raise the signal level) - and there is a good chance of clipping to the left if you don't adjust offset.
  7. I think this is an alignment issue. In fact, I think it is a very specific kind of issue that I'll try to explain. First let's see what is happening in each corner. I took three corners to analyze.
     In the first corner, we have this: this is an animation of the R, G and B frames. Star positions look like a pretty good match, there is not much shift between them, and if we measure the centroid on a single star - this is what we get: the star center is the same to the first decimal place (roughly - there is a difference of 0.1 pixel at most between star centers), so it is a very good match.
     Now let's see what happens in the opposite corner: don't know if you can see this, but here there is a bit more "wobble" in star positions between R, G and B. To actually measure it - let's do the same, select a star and do a centroid: ok, now we start to see that the error in position is no longer 0.1 - it is larger. In fact, between R and G it is about 0.3 in X and almost the same in Y - the total position being offset by almost 0.5px. Red and Green are closer, and Blue is far from Red.
     Now we need to see the third corner and what the situation is like there. Again, an animation: here I see quite a bit of wobble in the vertical direction. Let's again check the actual numbers: interestingly - the error in X is again 0.1 but the error in Y is 0.3. Do we see a pattern here? First corner, both axes 0.1 error; second, diagonal corner, both errors 0.3; third corner, Y error 0.3 and X error 0.1. I wonder if the fourth corner will show the X error to be 0.3 and the Y error to be around 0.1? Interesting - this time the error is not 0.3, it is 0.7, and it is in the X axis. The Y axis has the same 0.1 error.
     This means that the Blue channel is more zoomed in - but how can this be? We are dealing here with a large sensor and relatively short focal length - a wide field image. Two things happen when we have a wide field image.
     First, projection distortion starts to creep in. An extreme example of this comes from a wide angle lens: straight lines are no longer straight lines in the image because of this type of distortion. Since the FOV is only a few degrees - it is not really visible by eye, but it can become a problem if there is a slight misalignment between images and you try to align them without first correcting for this distortion.
     The second thing that can produce a different level of magnification in a wider field image is atmospheric distortion. The atmosphere bends light. It particularly bends blue light (shorter wavelengths). This effect is more evident close to the horizon than up high towards the zenith. If the blue channel was shot when the target was lower in the sky - then the "bottom" part of the frame could be influenced more by this bending of the light - thus creating a "zoom" effect for the blue channel.
     I guess that this could be fixed by applying a different alignment model - one that also allows scaling rather than a pure rigid transform (see the sketch after this post). If wide angle distortion is dominant - it should be corrected (like lens distortion correction). To see if this is a viable explanation - check the times when you shot each channel, and see if blue was the last channel shot on a particular night with the target nearest to the horizon. If the M106 data shows this effect more - could it be that it was closer to the horizon at the time the blue channel was shot?
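     As a sketch of the "alignment model that also allows scaling" idea, here is what it could look like with scikit-image - the star coordinates and the 1.0005 scale factor are made up for illustration:

     ```python
     import numpy as np
     from skimage import transform

     # Matched star centroids: reference channel (e.g. red) vs channel to align (e.g. blue)
     src = np.array([[10.0, 12.0], [500.0, 40.0], [480.0, 520.0], [30.0, 500.0]])
     dst = src * 1.0005 + np.array([0.2, -0.1])  # blue slightly "zoomed in" and shifted

     rigid = transform.estimate_transform('euclidean', src, dst)     # rotation + shift only
     similar = transform.estimate_transform('similarity', src, dst)  # also allows uniform scale

     for name, tf in (('rigid', rigid), ('similarity', similar)):
         residual = np.abs(tf(src) - dst).max()
         print(f"{name:10s} worst residual: {residual:.3f} px")
     # The rigid model leaves residuals that grow towards the corners;
     # the similarity model absorbs the scale difference.
     ```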
  8. Based on this comparison: https://total3dprinting.org/creality-cr-10-vs-prusa-i3/ I'm still leaning towards the CR-10 V2
  9. Honestly, I have no idea. One of the things to consider would be price - up to 500€ seems a reasonable amount of money. The second thing would be the ability to print with PETG. As far as I can tell (limited internet research) - it is a sturdy enough material, without all of the nasty fumes of ABS. I want to print some parts to be used with my astronomy kit - motor mounts, holders for things, DIY spectrographs and mechanical iris kinds of gadgets. The third thing would be availability - I will purchase locally because of shipping fees (rather high for bulky items). That means I need to choose from the available list of models:
  10. Do yourself a favor and don't go below 1-1.2"/px regardless of what any particular tool says - in all likelihood you'll be oversampling if you go below that. Don't be afraid to bin in software either - it is just a simple operation (see the sketch below).
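      Software binning really is simple - here is a minimal 2x2 average-bin sketch in numpy (the function name is mine; it assumes image dimensions divisible by 2):

      ```python
      import numpy as np

      def bin2x2(img):
          """Average-bin a 2D image by a factor of 2 in each axis."""
          h, w = img.shape
          return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

      img = np.arange(16, dtype=float).reshape(4, 4)
      print(bin2x2(img))  # 4x4 -> 2x2; each output pixel is the mean of a 2x2 block
      ```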
  11. +1 I love my 8" RC and use it with an ASI1600 - it is a good match, but you'll have to bin in software a bit, because at 0.48"/px it is oversampling by at least x2.
  12. I recently realized - I want one! I mean a 3D printer, the Creality CR-10 V2 - any thoughts on it as a first / starter 3D printer?
  13. That one is USB 2.0. That is fine for guiding and DSO imaging but will have a somewhat smaller frame rate for planetary / lunar. This does not mean that results will be bad for planetary - it just means that USB 3.0 would have a bit more potential - again, if that is important to you.
  14. I use my ASI185 with an OAG at 1600mm and have no issues finding guide stars. If you want to image other things, then you probably want some other camera. The 290 is a good choice and so is the 385. Maybe also look at the 178. For planetary - I would go with a color camera for simplicity. You don't want to mess with filters and filter wheels just to do a bit of planetary imaging. For DSO (like EEVA) and Lunar only, mono could be the better choice.
  15. Only if you also have short exposures of the same target - then you can replace the saturated parts with the short exposure stack. If something is saturating in long exposure - that means the signal is strong there, and there will be enough signal in the short exposures in those parts to get good SNR. You don't really need many short exposures - just a couple of minutes' worth, like 15x10s or similar, to use for the saturated areas. I think Deep Sky Stacker can do all of that for you if you use a particular stacking method (this was copied from the DSS technical page on stacking algorithms, found here: http://deepskystacker.free.fr/english/technical.htm#Stacking).
      Other than that, there are at least two other methods for combining short and long exposure subs.
      1. Linear combination - requires software where you can do some sort of pixel math - PixInsight has this capability, with ImageJ as an open source alternative. You need to take both stacks while linear and aligned, and then scale the shorter stack to make it compatible (multiply it by the ratio of exposure lengths). After that - just do the pixel math: if the long stack pixel value is larger than some threshold (like 95% - it could be saturated; don't go for 100%, as calibration can mess this up if you have vignetting and such, making the saturation point smaller than 100%) - simply replace it with the scaled pixel value from the shorter stack. A rough sketch of this is below.
      2. Non linear combination - this one is rather simple - again stack both and align, and process each as you normally would - then use layers in PS or Gimp to replace the saturated parts of the image with the unsaturated version.
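      For method 1, a rough numpy sketch of that pixel math (the function name, the 95% threshold and the exposure times are illustrative; both stacks are assumed linear and aligned, as described above):

      ```python
      import numpy as np

      def blend_saturated(long_stack, short_stack, long_exp, short_exp, thresh=0.95):
          """Replace near-saturated pixels of the long stack with the scaled short stack."""
          scaled_short = short_stack * (long_exp / short_exp)  # match intensity scales
          limit = thresh * long_stack.max()                    # crude saturation threshold
          return np.where(long_stack >= limit, scaled_short, long_stack)

      # e.g. 300s long subs blended with 10s short subs:
      # blended = blend_saturated(long_stack, short_stack, long_exp=300, short_exp=10)
      ```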
  16. Depends on what you need it for. If it is only for guiding - just go with the cheapest model. I used a QHY5LIIc for guiding (it has the same sensor as the ASI120mc) and it worked fine. Now I guide with an ASI185mc - but only because I did some planetary imaging with it, and some regular imaging with it at the time (higher pixel count at 1920x1200). I would swap it for an ASI385mc at some point - but for the time being it is not a priority for me. If you want to do something else with it - then pay attention to other specs that you might need (read noise, QE and frame rate, for example).
  17. I'm not sure about your particular model - but the best ISO for most DSLRs is about 800-1600. ISO is just a numerical multiplier and as such has no impact on SNR. It does have some impact on read noise, and the best ratio of read noise to pixel well depth tends to be around the said ISO figure (each camera has a sweet spot - so you might want to look it up for the 60D). Using longer exposures - you don't need to reduce ISO, nor will it impact detail. If you have saturated areas (it happens with very bright nebulae like M42) - use a few short exposures to blend in the saturated parts in post processing (or you can even stack them together in DSS - just use the appropriate stacking method - I think the entropy based average one deals with different exposure lengths efficiently).
  18. For guiding only - I would not bother, even if using an ED guide scope (in fact - it will give tighter stars than a regular achromat).
  19. I guide without an IR cut filter. An IR cut filter is important in a few cases - but mostly when imaging. If you are imaging with a refractor telescope that does not have good correction in the IR (or UV) part of the spectrum - you'll need an IR/UV cut filter to avoid star bloat / blur from unfocused IR/UV light. Reflectors don't have this issue - but you still want an IR/UV cut filter when imaging with a reflector if you care about proper color balance.
      For guiding - you simply don't need an IR/UV cut filter, even if you use a refractor guide scope (and most people do). In fact, a refractor guide scope is likely to be a fast achromat that will bloat stars even in the visible part of the spectrum (blue halo around bright stars and such) - however, that has little if any impact on guiding. In fact - guiding sometimes benefits from a slight defocus of the guide star - less seeing impact, and that way you avoid saturation on bright stars. You want to guide on bright stars because they provide better SNR - but you want to avoid star clipping / saturation, as it is not good for centroid calculations (exact star position).
      For guiding - don't bother with an IR/UV cut filter, but if you want to do EEVA or image planets with that camera as well - get a 1.25" IR/UV cut filter to use for those purposes.
  20. Don't worry about camera being color - I've been guiding with color cameras and never had any issues with them.
  21. How much drift do you get and in what direction? Maybe we could work out something to reduce the drift - but still leave it at a desirable level? Like I mentioned - you might not want to remove it completely, as it acts as natural dither and that is a good thing. My guess is that the main drift is in the DEC axis; RA can have periodic error - but that will "circle back" to the original location and won't cause a great shift between the first and last sub.
  22. I don't see why not, but that should not be a feature of "guiding" software - it should be part of imaging software. As far as I know - no software does this - correct position based on star positions in the image (or the previous few subs). Regular guiding won't help there, I'm afraid, because it needs to work all the time. Btw - small drift between the frames is a good thing - it is like dithering. Maybe the best thing to do would be to let it have that natural dither and then reframe manually every half an hour or so - just a nudge or two in the right direction via the scope slew control, while the sequence is paused?
  23. Could be that it uses 32bit precision all the time, but look at this: if you select FITS output - it will save as 32bit per channel
  24. If you are using AS!3 - it should handle that for you. In fact - I believe there is a 32bit "scientific" precision setting somewhere that you can enable if you want floating point precision. I would personally advise everyone to always use that level of precision - especially when doing lucky / planetary type imaging and stacking thousands of subs.
  25. No, provided that you use the required precision. If you use fixed point integer math - you should add your subs. If you use floating point math - do a simple average: (5+6+5+6+5) / 5 = 5.4 and not 5 (see the sketch below). Btw - for stacking, addition and averaging are the same from an SNR perspective - both increase SNR in the same way. "Interpretation wise" - addition is the same as having one long exposure - signal "accumulates over time" - while averaging is having more precision in that particular exposure. Simple multiplication by a constant is all that separates the two.
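      The "(5+6+5+6+5) / 5" example in numpy form - integer math truncates the average while floating point keeps the fractional part:

      ```python
      import numpy as np

      subs = np.array([5, 6, 5, 6, 5])

      print(subs.sum())                      # 27  - addition is always safe in integer math
      print(subs.sum() // len(subs))         # 5   - integer average silently drops the 0.4
      print(subs.astype(np.float64).mean())  # 5.4 - floating point average keeps it
      ```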