Everything posted by vlaiv

  1. Hi and welcome to SGL. This is a rather interesting question - many are reluctant to, or can't afford to, set aside such a large budget to start with. I think the best approach would probably be: - buy the best mount you can afford - start with a decent starter scope and OSC camera - maybe throw in guiding as well. A decent starter scope and OSC camera won't cost that much (about a third of the budget?) and can easily be sold when you feel ready for an upgrade. A good mount will save you trouble at the start and save you having to upgrade later. The scope should be in the 600-700mm focal length class, and the camera something like this: https://www.firstlightoptics.com/zwo-cameras/zwo-asi-294mc-pro-usb-30-cooled-colour-camera.html You'll need a laptop with that to capture images. My mount recommendation until recently would have been the iOptron CEM60 (basic version) - but these are no longer being made and have been replaced by the CEM70, which will be more expensive (not sure whether that will be justified).
  2. I would avoid a x2 barlow with an F/10 scope and a 2.4um camera - it leads to oversampling, especially in NIR, where the longer wavelengths make the diffraction limit coarser.
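As a rough sanity check on the claim above, here is a small sketch using the common Nyquist-style rule of thumb that the critical focal ratio is about twice the pixel size divided by the wavelength (the exact constant varies between conventions, so treat the numbers as approximate):

```python
def critical_f_ratio(pixel_um, wavelength_um):
    """Approximate focal ratio at which a pixel of the given size
    critically samples the diffraction limit: F ~ 2 * pixel / wavelength."""
    return 2 * pixel_um / wavelength_um

# Native F/10 with 2.4um pixels is already near critical sampling in green light:
f_green = critical_f_ratio(2.4, 0.51)   # ~F/9.4
# In NIR (e.g. 850nm) the critical F-ratio is much lower, so F/10 already
# oversamples - and a x2 barlow (F/20) makes it far worse:
f_nir = critical_f_ratio(2.4, 0.85)     # ~F/5.6
```

So the x2 barlow would put the system at roughly twice the useful sampling in green and well past x3 in NIR.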
  3. I have noticed this as well. I think it is more about doing something that makes one happy rather than always focusing on what makes others happy. From time to time, I just start craving that "this is what I want" fix. It is not necessarily a "chase the better gear" kind of thing - it can be a "papa's got a brand new toy" thing. It's not an impulse-buy thing either - a lot of thought and yearning go into it.
  4. Do you have a camera and lens combination with known QE / attenuation and gain in terms of e/ADU (although that part you can determine yourself)? A simple incandescent bulb with a diffusing panel could be used then - or a flat panel, for that matter.
  5. I think the easiest, and probably most precise, way is to use the telescope and camera as a calibration source. The procedure is fairly simple - just shoot a piece of the sky, preferably away from the Milky Way, with a bright(ish) star in the FOV. The star should be bright enough to be in a catalog - with known magnitude - but not so bright that it saturates the sensor. Then it is a simple matter of doing photometry on that star and measuring the background value, dividing by the sampling rate squared so that you get photon count per arc second squared (or just ADU per that unit). The ratio of the two - star ADU count to background ADU count per arc second squared - gives you the basis for the magnitude calculation: take the star magnitude and add 2.5 times the log of that ratio to get sky magnitude (per arc second squared). Compare that with your device reading in the same region to get one calibration point. Do multiple parts of the sky (different sky brightness values) to get a good calibration curve.
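The procedure above boils down to one application of the Pogson relation. A minimal sketch, with the function name and the example numbers purely illustrative:

```python
import math

def sky_mag_per_arcsec2(star_mag, star_adu, bg_adu_per_px, arcsec_per_px):
    """Sky brightness in mag/arcsec^2 from one catalog star.

    star_adu      - background-subtracted total ADU of the star
    bg_adu_per_px - mean background level per pixel (calibrated, ADU)
    arcsec_per_px - sampling rate of the image
    """
    # background ADU per square arcsecond (divide by sampling rate squared)
    bg_adu_per_as2 = bg_adu_per_px / arcsec_per_px ** 2
    # Pogson relation: magnitude difference from the flux ratio
    return star_mag + 2.5 * math.log10(star_adu / bg_adu_per_as2)

# Hypothetical example: a mag 10 star totalling 100000 ADU, background of
# 50 ADU/px at 2"/px sampling -> ~19.76 mag/arcsec^2
sky = sky_mag_per_arcsec2(10.0, 100000, 50, 2.0)
```

Repeating this for several patches of sky at different brightness levels gives the calibration curve mentioned above.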
  6. I also thought that x2 the aperture in mm was some sort of magical number, but it is not. There is an actual number based on a bit more science, and it will surprise you - for an 8" scope it is about x94. Yep, you read that correctly - only x94 magnification. Let me explain. Most people with 20/20 vision can resolve a 1 arc minute feature (high contrast). The Rayleigh criterion (again high contrast) for green light and 200mm of aperture is 0.64" - arc seconds. How much do you need to magnify to make something that is 0.64" appear 1' large? That is rather simple: 60" / 0.64" = x93.75 - there you go. This basically means: if you use magnification below x94, you won't see everything the scope can show you, because you did not magnify enough - you are below the resolving power of the human eye. Going above x94 just makes things easier to see. At some point the image becomes too magnified and blurry - we call that the image "falling apart", and it depends on the observer, the scope and the object being observed. I used x500 magnification on my 8" dob and it did not fall apart for me - it was dim and floaters were all over the place, but I could still observe. I now prefer to keep things up to x300 and often at x200. I've got 6.7mm and 11mm 82 degree ES and 5.5mm 62 degree ES as higher power eyepieces. I also have a x2.7 APM coma correcting barlow - but I noticed that I don't really like to use barlows if I can get the magnification I want with an eyepiece alone.
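The arithmetic above generalizes to any aperture. A short sketch (function names are mine; green light taken as ~510nm, which reproduces the 0.64" figure for 200mm):

```python
import math

def rayleigh_arcsec(aperture_mm, wavelength_nm=510):
    # Rayleigh criterion 1.22 * lambda / D, converted from radians to arcseconds
    return math.degrees(1.22 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)) * 3600

def min_full_resolution_mag(aperture_mm, eye_limit_arcsec=60.0):
    # Magnification that blows the scope's resolving limit up to the
    # ~1 arc-minute resolving limit of the eye
    return eye_limit_arcsec / rayleigh_arcsec(aperture_mm)

# 200mm (8") aperture: Rayleigh ~0.64", minimum "see everything" mag ~x94
mag_8in = min_full_resolution_mag(200)
```

Note this is the *minimum* magnification to exploit the full resolving power; as the post says, going higher just makes details easier to see until the image falls apart.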
  7. You need to plate solve your image and get coordinates of your star. After that you can search online catalogs like Simbad http://simbad.u-strasbg.fr/simbad/ Alternatively, if you can do a bit of coding, you can use these catalogs: https://archive.stsci.edu/prepds/atlas-refcat2/
  8. New drivers usually require new set of calibration files.
  9. Lovely set of captures. For some reason - bottom of each image seems to have significant blur to it. Do you know the reason?
  10. Yes, of course I just downloaded the first one - it has about 86.5% light level in the corners due to vignetting - so about 13.5% light loss, not that much at all.
  11. That might not be as bad as it looks - it really depends on how much you stretch it. Can you post a single fits for one filter for inspection so we can see the actual values in that flat? For example: These two are the same master flat - but stretched differently - in fact, light loss is only about 20% at the extreme corners.
  12. I see similar optical aberrations in all stars in the image, per channel. Red is most round. Green shows sagittal astigmatism. Blue shows an inward coma-like tail. I'm not sure if that is aberration or a stacking artifact - maybe stars moved due to atmospheric influence between the first and last frame and that tail is a sigma clip artifact - or it could be a genuine thing. M106 Luminance top right corner: Top left corner: Bottom left corner: Bottom right corner: Same as green in the animation above that I made - the only significant thing I see is sagittal astigmatism.
  13. Not sure if it does - since all stars show in each channel - I would expect that they have enough signal in each band.
  14. In the meantime ... Ken, why don't you try Deep Sky Stacker to see if it will do the same? As far as I know, it has a rather "bendy" alignment model (sometimes it can really warp an image if it gets the stars wrong - so I guess it will try its best to compensate for any distortion introduced by lens or atmosphere).
  15. You have to take into account that the same algorithm stacked all three channels into their respective stacks - and therefore aligned the subs for each color while stacking. Now - the subs were not all taken at the same time, and atmospheric changes are possible between the first and last sub. Look at this crop and upsample (nearest neighbor - we can see pixel squares) and the animated gif made by "blinking" the three channels: We can see the distortion that you are talking about - all stars change shape slightly, being round or elongated, but large stars change position while smaller stars stay in the same place??? How can that be if it is down to optics?
  16. It is a fairly simple explanation really, and one worth understanding. Noise adds like linearly independent vectors - which really means square root of sum of squares (like finding the length of a vector from its projections on the X and Y axes - here X and Y are linearly independent: you can't specify X with Y and vice versa). It is also the same as finding the hypotenuse of a right triangle. Here an image will help: The longer one side is with respect to the other, the smaller the difference between the hypotenuse and the longer side. It is up to you to define what "large enough" means - and based on that you can define how much background signal you want. In this particular case we are putting read noise against LP noise as the two dominant noise sources - or rather, we are letting LP noise become the dominant noise source with respect to everything else and observing how much larger it is than read noise. The ASI1600 has read noise of about 1.7e at unity gain. My rule is - make LP noise x5 larger than read noise. It is a very "strict" rule - you can choose a x3 rule, which is more relaxed - but let's look at how each behaves: With the x5 rule we have 1.7e read noise and 8.5e LP noise. LP signal is the square of LP noise (shot noise is the square root of the signal), so LP signal is 72.25e. Since we are at unity gain, but the ASI1600 uses 12 bits, we need to multiply by 16 to get DN - that gives us 1156DN. But let's see, in percentage terms, how much read noise + LP noise exceeds LP noise alone: sqrt(1.7^2 + (5*1.7)^2) = 1.7 * sqrt(1 + 5^2) = 1.7 * sqrt(26) = ~1.7 * 5.099. The ratio is 5.099 / 5 = ~1.0198, or about 2%. Only two percent difference between sky noise and sky noise + read noise - hence read noise makes minimal impact. Let's do the math for the x3 rule: again we have sqrt(1 + 9) = sqrt(10) = ~3.1623, divided by 3 = ~1.0541 = ~5.41% - now we have a 5.41% difference between pure LP noise and LP + read noise.
Btw the x3 rule will give us: (1.7*3)^2 * 16 = ~416DN. That is the point behind any particular number - how much difference you want read noise to make. I usually use the 2% difference rule - but you can say "I'm happy with 10% difference" and use rather short exposures.
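The two calculations above can be packed into a couple of lines. A sketch with the numbers from the post (function names are mine; the x16 factor maps the ASI1600's 12-bit ADU onto 16-bit DN as described):

```python
import math

def read_noise_penalty(k):
    # Total noise (LP + read, added in quadrature) relative to LP noise alone,
    # when LP noise is k times the read noise
    return math.sqrt(1 + k ** 2) / k

def target_background_dn(read_noise_e, k, gain_e_per_adu=1.0, adu_scale=16):
    # Background level (in DN) at which LP noise = k * read noise.
    # LP signal is the square of the LP noise (Poisson statistics).
    lp_signal_e = (k * read_noise_e) ** 2
    return lp_signal_e / gain_e_per_adu * adu_scale

# x5 rule, ASI1600 at unity gain: ~1156DN background, ~2% noise penalty
# x3 rule: ~416DN background, ~5.4% noise penalty
```

Pick the penalty you can live with, and the function gives the background level (and therefore exposure length) to aim for.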
  17. Do you have any idea how they arrived at that number? It is better to understand why a particular number is chosen than to just use it. Not really - that is not over exposure at all. You are right - you have shown areas of the image that are over exposed by using a bit of pixel math - mostly stars. You are also right that it will lead to color distortion in the stars that are saturated. There is a very simple procedure to remedy that - take just a few short exposures at the end of the session to use as "filler" subs for those over exposed stars. Since we are dealing with limited full well capacity in any case - at any gain - there will always be some stars that saturate the sensor for a given exposure. It is therefore better to just adopt an imaging style that handles that in any case - a few short exposures. Is a 5000DN background a bad thing? No, it is not - if subs are ok at that duration, overall SNR will improve for the same imaging time over short subs. It will be a very small improvement over the recommended value (I would say 1156DN is a "better" number), but there would still be some improvement. Only when read noise is 0 is there no difference.
  18. Not sure - I could not work with the M106 data as it is in PixInsight format and not fits like the first set of data. I did try a different alignment process on the first set of data and it corrected some of the issues. I checked star R, G and B centroids in two opposite corners and both showed a 0.1px error. I'll redo it and post results. Maybe I could do the same for M106 if you post fits? Do you have luminance for these images?
  19. Yes I do. It is about dark calibration (and implicitly flat calibration, since it depends on dark calibration). Here is an example: Here I generated what is very close (in signal distribution) to a dark frame. I used sigma 1 and a mean value of 2. The distribution looks ok - it is a nice bell shape, but some values are below 0. A camera can't record such values - it uses unsigned numbers as the result (photon count is a non-negative value). Look what happens when I limit values to 0 or above: This raises the mean value of the dark - as if someone added a DC component to the image (and it also does some nasty things to the distribution, so stacking of the master dark won't work as it would if the distribution were unaltered). You won't have that DC component in your lights, as the signal acts as an offset and there won't be histogram clipping. The dark (and bias) component in the lights won't be clipped and won't pick up that additional DC offset. Dark calibration fails for this reason, and after dark subtraction your lights don't contain only the light signal but also the negative of the DC component we just saw. Now when you apply flat calibration - you are correcting both the light, which is affected by attenuation, and the DC component, which is not, since it did not come in as light through the objective and the pixel lens on the camera. This makes your flat calibration fail as well. In general - you want to avoid all of the above, and that is the reason you can adjust offset in the first place. Short sub imaging is more susceptible to this than long exposure - because in a long exposure, dark current can be enough to overcome this, put the histogram on the right side of zero and prevent clipping.
  20. You should if you do full calibration. Darks for planetary / lunar / solar are rather short and so are flat darks. Short darks are almost the same as bias (not enough dark current to raise signal level) - and there is a good chance there would be clipping to the left if you don't adjust offset.
  21. I think this is an alignment issue. In fact, I think it is a very specific kind of issue that I'll try to explain. First let's see what is happening in each corner. I took three corners to analyze. In the first corner, we have this: This is an animation of the R, G and B frames. Star positions look like a pretty good match, there is not much shift between them, and if we measure the centroid of a single star - this is what we get: The star center is the same to the first decimal place (roughly - there is a difference of 0.1 pixels at most between star centers), so it is a very good match. Now let's see what happens in the opposite corner: Don't know if you can see this, but here there is a bit more "wobble" in star positions between R, G and B. To actually measure it - let's do the same, select a star and take a centroid: Ok, now we start to see that the error in position is no longer 0.1 - it is larger; in fact, between R and G it is about 0.3 in X and almost the same in Y - the total position being offset by almost 0.5px. Red and Green are closer, and Blue is far from Red. Now we need to look at the third corner and see what the situation is like there; again, an animation: Here I see quite a bit of wobble in the vertical direction. Let's again check the actual numbers: Interestingly - the error in X is again 0.1 but the error in Y is 0.3. Do we see a pattern here? First corner, both axes 0.1 error; second, diagonal corner, both errors 0.3; third corner, Y error 0.3 and X error 0.1. I wonder if the fourth corner will show an X error of 0.3 and a Y error around 0.1? Interesting - this time the error is not 0.3, it is 0.7, and it is in the X axis. The Y axis has the same 0.1 error. This means that the Blue channel is more zoomed in - but how can this be? We are dealing here with a large sensor and relatively short focal length - a wide field image. Two things happen when we have a wide field image. Projection distortion starts to creep in.
An extreme example of this comes from a wide angle lens: Straight lines are no longer straight in the image because of this type of distortion. Since the FOV is only a few degrees, it is not really visible by eye, but it can become a problem if there is slight misalignment between images and you try to align them without first correcting for this distortion. The second thing that produces a different level of magnification in a wider field image is atmospheric refraction. The atmosphere bends light, and it particularly bends blue light (shorter wavelengths). The effect is more evident close to the horizon than up high towards zenith. If the blue channel was shot when the target was lower in the sky, then the "bottom" part of the frame could be influenced more by this bending of the light - thus creating a "zoom" effect for the blue channel. I guess this could be fixed by applying a different alignment model - one that also allows scaling rather than a pure rigid transform. If wide angle distortion is dominant, it should be corrected (like lens distortion correction). To see if this is a viable explanation - check the times when you shot each channel, and see whether blue was the last channel shot on a particular night and the target was nearest the horizon. If the M106 data shows this effect more - could it be that it was closer to the horizon at the time the blue channel was shot?
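An alignment model "that also allows scaling" is a similarity transform, and it can be fit to matched star centroids with plain least squares. A minimal numpy sketch (function name mine), useful for checking whether one channel really is "more zoomed in" than another:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares fit of scale + rotation + translation mapping src -> dst.

    src, dst - (N, 2) arrays of matched star centroids (x, y).
    Returns (scale, angle_deg, (tx, ty)). Scale != 1 between channels
    indicates the "zoom" effect described above.
    """
    n = len(src)
    # Linear system for x' = a*x - b*y + tx and y' = b*x + a*y + ty
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return np.hypot(a, b), np.degrees(np.arctan2(b, a)), (tx, ty)
```

Fitting red-channel centroids against blue-channel centroids and looking at the recovered scale would separate a genuine per-channel magnification difference from a simple shift.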
  22. Based on this comparison: https://total3dprinting.org/creality-cr-10-vs-prusa-i3/ I'm still leaning towards the CR-10 V2
  23. Honestly, I have no idea. One of the things to consider would be price - up to 500e seems a reasonable amount of money. The second thing would be the ability to print with PETG. As far as I can tell (limited internet research), it is a sturdy enough material without all the nasty ABS fumes. I want to print some parts to be used with my astronomy kit - motor mounts, holders for things, DIY spectrographs and mechanical iris kind of gadgets. The third thing would be availability - I will purchase locally because of shipping fees (rather high for bulky items). That means I need to choose from the available list of models:
  24. Do yourself a favor and don't go below 1-1.2" regardless of what any particular tool says - in all likelihood you'll be oversampling if you go finer than that. Don't be afraid to bin in software either - it is just a simple operation.
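Software binning really is a simple operation - just averaging blocks of pixels. A minimal numpy sketch (function name mine):

```python
import numpy as np

def software_bin(img, k=2):
    """Average k x k pixel blocks (software binning).

    Trims edges that don't divide evenly by k. Each output pixel is
    the mean of a k x k block, which coarsens sampling by a factor of k
    and improves per-pixel SNR.
    """
    h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```

For example, binning a 0.5"/px image 2x2 lands it at 1"/px, right at the limit suggested above.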
  25. +1 I love my 8" RC and use it with the ASI1600 - it is a good match, but you'll have to bin in software a bit, because at 0.48"/px it is oversampling by at least x2.