Everything posted by vlaiv

  1. The only benefit of going with higher gain for this mode is lower read noise. I like unity gain for several reasons, the main one being that it makes ADU to e conversion easy - they are the same at unity gain. Other reasons include no rounding errors being introduced. With #1 mode you have unity gain at gain 0 - or so it seems from the published graphs. Sure, it has a higher read noise of 3.5e, but that is not much worse than the best read noise this camera provides - which is around 1.5e. Read noise should only be considered when selecting exposure length. If sub length is chosen properly, there will be minimal difference between a 1.5e and a 3.5e read noise stack of the same total duration. If you choose the sub duration properly for 1.5e, then it is very easy to select a proper sub duration for 3.5e - just multiply by their ratio 3.5e / 1.5e ≈ x2.33 - so you need about x2.33 longer subs at 3.5e read noise to get virtually the same result. Given that the original image was taken with 2 minute subs - I just multiplied that by 2.5-3 (mental math rather than an actual calculation) and ended up with 5-6 minute exposures. The actual exposure length really depends on setup and LP conditions.
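The sub-length scaling described above (the rule of thumb used in the post: scale sub duration by the read-noise ratio) can be sketched as a tiny helper. The function name and values are illustrative, not from any particular software:

```python
# Scale sub duration by the read-noise ratio, the rule of thumb
# described in the post. All values are illustrative.
def scaled_sub_length(base_sub_s, base_read_noise_e, new_read_noise_e):
    return base_sub_s * (new_read_noise_e / base_read_noise_e)

# 2-minute subs at 1.5e read noise -> about 4.7 minutes at 3.5e,
# which the post rounds up to 5-6 minutes for safety.
print(scaled_sub_length(120, 1.5, 3.5) / 60)
```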
  2. I'd use #1 mode (blue line) and gain of 0. With that, I'd use longer exposures - at least 5-6 minutes and possibly even longer. - Impact of read noise is controlled with single exposure length versus local LP - Full well capacity and dynamic range are simply not important when stacking. If you fear that you'll lose star cores due to saturation - take several short exposures that you'll blend in later when stacking to replace saturated pixels.
  3. In the meantime: Even this prestretched 16bit version has a lot to offer ...
  4. Is this data already stretched? How about posting linear data instead? Also - use 32-bit floating point data instead of 16-bit, especially if you have a 16-bit camera and stacked 100+ subs.
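A quick numeric illustration of why 32-bit float matters after a large stack: averaging 100+ 16-bit subs produces values with much finer granularity than 1 ADU, which quantizing back to 16-bit integers throws away. Synthetic Poisson data stands in for real subs here:

```python
import numpy as np

# 100 synthetic 16-bit subs of a flat 1000 e signal with shot noise.
rng = np.random.default_rng(0)
subs = rng.poisson(1000, size=(100, 1000)).astype(np.float32)

mean32 = subs.mean(axis=0)                   # keeps sub-ADU precision
mean16 = np.round(mean32).astype(np.uint16)  # quantized back to 16 bit

# Quantization alone introduces up to 0.5 ADU of error per pixel -
# error that 32-bit float storage avoids entirely.
print(np.abs(mean32 - mean16).max())
```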
  5. Although you decided to go with another unit - maybe some will find this valuable. The two seem to be a good match - here is the spot diagram of such a setup: https://astronomyplus.com/wp-content/uploads/2019/09/APM-Riccardi-0.75x-M82-ReducerApo-115-f7-Data-Sheet.pdf At 21mm away from the optical axis, vignetting is 3% and the RMS spot is 7µm. The Airy disk diameter is 8.7µm.
  6. I don't think that is a lot. If you think about it - even if there are around 100 hot pixels in that image, so about 400 in total - your camera has about 20 million pixels, and 400 out of 20,000,000 is not a big percentage - it is something like 0.002%. I'd call that a small percentage.
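The percentage estimate above, spelled out:

```python
# ~400 hot pixels out of a ~20 MP sensor, as estimated in the post.
hot = 400
total = 20_000_000
print(100 * hot / total)  # percent of all pixels
```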
  7. I don't think they quite are. Imagine the following scenario: you have two eyepieces. You look through one eyepiece with your left eye and through the other with your right eye, and let your brain try to merge the image. The image consists of the full moon in the center of the field and surrounding sky up to the field stop (sky bright enough against the field stop that you can easily distinguish it in each eye). If the focal lengths are the same, the moon will overlap perfectly from both eyes - there will be no double image of it. If the AFOV is the same, the field stops will align perfectly - there will be no double image. You can have the same moon image with different surrounding sky images, and you can have different moon sizes for the same sky / field stop image. There are third and fourth cases - when both are the same and when both are different. This shows that AFOV and magnification are not effectively the same thing. (You don't need the moon to compare AFOVs of two different eyepieces - you just need a blank, well lit wall. Hold the two eyepieces, each against one eye, and let the brain try to merge the image - if you have trouble and can't align the field stops, the AFOVs are different - and in fact you can judge which one is larger by favoring each eye in turn.) This is the reason I asked in the first place about the percentage of distortion. Televue has the same formula relating AFOV, field stop and focal length on one of their pages: https://www.televue.com/engine/TV3b_page.asp?id=113 Now, as you mentioned: if that eyepiece has zero AMD, then applying that formula to 24mm FL and a 27mm field stop gives: beta = 27 / 24 = 1.125 radians = 64.46°. But if we do something else - and assume that the AFOV is indeed ~68° - then we can do the following: 68° (in radians) = 27 / actual_fl => actual_fl = 27mm / 68° = ~22.75mm. So maybe the Panoptic 24mm is actually a 22.75mm FL eyepiece.
Or maybe there is some middle ground: the FL is something like 23.4mm, the AFOV is from there 66°, and for marketing purposes it is declared as 24mm / 68° to be in line with the rest of the Panoptic EPs. In the end - maybe the field stop is not precisely 27mm but a bit more?
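The zero-distortion relation used above (AFOV in radians = field stop / focal length) reduces to a couple of lines. The numbers are the Panoptic 24mm values from the post:

```python
import math

# AFOV (degrees) of a zero-distortion eyepiece from field stop and FL.
def afov_deg(field_stop_mm, focal_length_mm):
    return math.degrees(field_stop_mm / focal_length_mm)

print(afov_deg(27, 24))        # ~64.46 deg for 24 mm FL, 27 mm field stop

# Inverting: if AFOV really is 68 deg, what FL does a 27 mm stop imply?
print(27 / math.radians(68))   # ~22.75 mm
```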
  8. You have a rather small sensor - only 11mm diagonal. Most scopes don't need correction over such a small area - just 5.5mm away from the optical axis (well, maybe fast Newtonians do, but slower ones like F/6 and higher would not need a coma corrector either). With such a small sensor you could look into some serious reduction - that would be almost like having a smaller scope. For example this: https://www.teleskop-express.de/shop/product_info.php/info/p7943_Long-Perng-2--0-6x-Reducer-and-Corrector-for-APO-Refractor-Telescopes.html Now, I would not recommend that as a working solution - but rather as an experiment. That is quite strong reduction, and the only reason to believe it would work is the size of your sensor. Although its specs say it can work up to APS-C sized sensors - most people found acceptable results with cameras like the ASI183 and ASI533, which are both 1" sized sensors (of ~16mm diagonal, which is nowhere near 1", but you know - "standards"). Your camera is 2/3", so being smaller still, it should work better.
  9. Yes, that is an option - you can get a good focal reducer. It will offer a wider FOV as well. It depends on what you want - how much larger a FOV. The good thing about making mosaics is that you don't need to invest in new equipment. You have all that you need, and you can try it out "free of charge". Maybe you'll decide it's too much work, or maybe you'll lack software support for it and decide to go with a smaller scope or larger camera - but you won't know if you don't try, right?
  10. If you want to match the 325mm experience on your current scope, then you need to bin your current scope x2 to match the focal length / working resolution of the smaller scope. Binning is also needed to match the acquisition speed of the smaller scope. With the same camera (same pixel size) and same F/ratio, "speed" is the same. With the small scope you would use the whole 4h on the FOV, but with the larger scope you would effectively use only 1h over the whole FOV - because you would use 1h for each panel, and panels don't add up in time (not the same data) - only in FOV. If you don't bin, then doing a mosaic will result in:
- a larger resolution image - 4 panels of 1940 x 1460, so the image would be roughly 3880 x 2920
- but only 1h of effective exposure
When you bin x2, what you do is:
- match the resolution / pixel count of the smaller scope used with your camera and end up with 1940 x 1460
- match the SNR / effective exposure, because bin x2 improves SNR x2 - the same as stacking x4 the data - or 4h instead of just 1h per panel
Makes sense?
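The bookkeeping above can be written out, using the sensor and panel values from the post (2x2 mosaic, bin x2, 1h per panel):

```python
# Native sensor size from the post.
panel_w, panel_h = 1940, 1460

# Each panel binned x2 -> 970 x 730 per panel.
binned = (panel_w // 2, panel_h // 2)

# Stitching the 2x2 mosaic restores the original pixel count.
mosaic = (binned[0] * 2, binned[1] * 2)
print(mosaic)  # same pixel count as one frame on the half-FL scope

# Bin x2 improves SNR x2, i.e. like stacking x4 the data,
# so 1h per panel behaves like 4h of unbinned exposure.
hours_per_panel = 1
effective_hours = hours_per_panel * 4
print(effective_hours)
```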
  11. No, it does not. You do need a hot pixel map, or dithering and sigma rejection. You could get away with a good cosmetic correction algorithm, depending on your working resolution and star FWHM (the algorithm needs to distinguish stars from hot pixels).
  12. I think it does apply, but you can check it out. Take one bias sub and one dark, say 60s long. Measure their mean ADU value and if they are the same - say around 2048, or it can even happen that bias has mean ADU value slightly higher than dark - then it probably applies. Dark needs to have higher mean ADU value because dark current accumulates and adds to pixel values. If that is not the case - then dark current has probably been removed already.
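The check described above can be sketched with numpy. Real subs would be loaded from FITS files (e.g. with astropy.io.fits); synthetic frames with an assumed 2048 ADU offset and 12 ADU of accumulated dark current stand in here:

```python
import numpy as np

# Synthetic stand-ins for one bias sub and one 60s dark sub.
rng = np.random.default_rng(1)
bias = rng.normal(2048, 5, size=(100, 100))        # offset + read noise
dark = rng.normal(2048 + 12, 5, size=(100, 100))   # plus dark current

print(bias.mean(), dark.mean())

# If the dark mean is clearly above the bias mean, dark current is
# still present; near-equal means suggest it was removed in-camera.
if dark.mean() > bias.mean() + 1:
    print("dark current present - dark calibration applies")
```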
  13. How can we be sure that all additional AFOV is due to distortion and not perhaps change in focal length?
  14. Even the JPEG version can be somewhat improved by binning. This is binned x6 and then processed:
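Software binning like the x6 used above is just block-averaging the pixels; a minimal numpy sketch (not any particular program's implementation):

```python
import numpy as np

# Average each factor x factor block of pixels, trading resolution
# for SNR; the image is cropped to a multiple of the bin factor.
def bin_image(img, factor):
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))

img = np.arange(36.0).reshape(6, 6)
print(bin_image(img, 6))  # a single pixel: the mean of all 36
```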
  15. You had it set to JPEG only? No raw? That ring is some sort of reflection from a bright star. Not sure what caused it, but it sometimes happens.
  16. Yes, offset is related to readout of signal - it does not matter how long it was exposed. As long as you have it above zero with bias - anything else will just add signal and raise it more above 0 - either dark current in darks or target / LP signal in lights.
  17. Not all do. https://www.firstlightoptics.com/adapters/ts-ultra-short-t2-adapter-for-nikon-dslr-1mm-length.html
  18. That looks very nice. It just shows that one can image with "slow" scope as well. Would you be so kind to share original raw from the camera? I'd like to see how far I could push it in processing using some tricks (like binning the data and so on).
  19. You can get that combination: https://www.firstlightoptics.com/reflectors/skywatcher-explorer-130p-ds-ota.html + https://www.firstlightoptics.com/coma-correctors/skywatcher-coma-corrector.html + https://www.firstlightoptics.com/adapters/astro-essentials-m48-camera-adapter.html (Nikon version) - provided that @FLO confirms that the Astro Essentials M48 camera adapter for Nikon F mount has 8.5mm of optical path, because the Skywatcher x0.9 CC has a 55mm backfocus requirement and an M48 thread. That setup will work.
  20. Doing mosaics with your current equipment is the most cost effective way to get wide field images. It just looks like doing mosaics will take a lot of time - but that is not really the case. Your camera has 1940 x 1460 pixels and your scope has 650mm of FL. If you want an image that is the same as using your current camera on a 325mm FL scope - the mosaic will take the same amount of time as using an F/6 scope of 325mm FL with your current camera. Here is a bit of math that explains it, so bear with me just a bit - it's not hard to see why the above is true. If you get a smaller scope - one that has 325mm of FL and is F/6.2 like your current scope - it will let in just 1/4 of the light due to aperture size. A 325mm FL F/6.2 scope will have ~52.5mm of aperture - which is half of 105mm by diameter, or 1/4 by area. 4h of using such a scope is like 1h of using the current scope (given the same working resolution). But the scopes don't have the same resolution, as the current scope has twice the focal length. In order to get the same resolution, you need to bin the current scope x2. There is your recipe. If you would image a target for say 4h on a given night with the smaller scope, you instead create an image consisting of 4 panels, spend 1/4 of the time on each of those panels, and shoot each panel with bin x2. Each panel will have 970 x 730 px because of bin x2, but when you stitch them back together the final image will again be 1940 x 1460 and the FOV will be twice as big - exactly as if you had used the current camera on the smaller scope. Given that the smaller scope needs x4 more exposure time at the same resolution, and that you would image with it for the full 4h while imaging only 1h per panel with your current scope - the SNR will be the same as well, as 4h * 1/4 = 1h.
The only difference is the loss of a few dozen pixels for joining the mosaic - like a 5-10% smaller FOV, but that is the "price" you pay for not spending any money on a wide field setup. You can have a 217mm FL scope in your "arsenal" this way as well - just make a 3x3 mosaic, bin each sub x3 and image each panel for 1/9th of the total imaging time you would otherwise spend using the smaller scope.
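The aperture arithmetic above, checked numerically with the values from the post (325mm FL at F/6.2 versus the current 105mm aperture scope):

```python
# A 325 mm FL, F/6.2 scope: aperture is FL / F-ratio.
small_aperture = 325 / 6.2          # ~52.4 mm - about half of 105 mm
area_ratio = (small_aperture / 105) ** 2
print(round(area_ratio, 2))         # ~0.25 - a quarter of the light

# So 4 h on the small scope gathers about as much light as 1 h
# on the big scope (at matched working resolution).
print(4 * area_ratio)
```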
  21. In order to figure out the best offset setting for a given gain - you only need bias exposures. The procedure is rather simple:
1. set some offset
2. shoot a number of bias subs with the camera (say 16)
3. stack those subs using the minimum method
4. examine the statistics of the resulting stack. If the minimum pixel value is larger than 0 (or whatever the minimum value for the camera is) - you are done; if not, raise the offset and go to step 1.
Visually, the histogram can look "glued" to the left side - but that is just a visual impression; you really need statistics in order to know the values. If you have 0-65535 levels in pixel value and the histogram on screen is say 300-400px wide, how can you tell whether the histogram is at pixel value 56 or 72 or at 0? The visual histogram does not have enough resolution to provide feedback for that. Btw, setting the offset a bit "too high" is really not a big deal - it won't affect your image at all in the end. It will reduce your full well capacity a bit - but that is not a big deal either; you'll need to take short exposures anyway if you want to capture star cores that are usually blown out in regular exposures.
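Steps 2-4 of the procedure above, sketched with numpy. Synthetic bias frames with an assumed offset and read noise stand in for real subs; the threshold of 0 is the camera's clipping floor:

```python
import numpy as np

# 16 synthetic bias subs: offset of 100 ADU, 10 ADU of read noise,
# clipped at 0 the way a camera ADC clips.
rng = np.random.default_rng(2)
bias_subs = rng.normal(100, 10, size=(16, 200, 200))
bias_subs = np.clip(bias_subs, 0, None)

# Step 3: minimum stack. Step 4: inspect the statistics.
min_stack = bias_subs.min(axis=0)
print(min_stack.min())
# If this global minimum is still > 0, the offset is high enough;
# if it has hit 0, raise the offset and repeat from step 1.
```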
  22. I found the same results on my Canon 750d, and after a bit of research it turns out that Canon does "internal" dark subtraction of sorts and then adds a 2048 constant offset. Internal dark subtraction, as far as I understood, works like this: there are some 30 or so pixels in each row at the edge of the sensor that are masked off - covered with something so that they can't register light, but "exposed" together with the rest of the sensor, so they do gather dark current during a regular exposure. For each row these are measured, and their mean value is then subtracted from all other pixels in the row (the ones that have been properly exposed). This more or less removes dark current per row. The good thing about it is that it removes temperature dependence. The bad thing is that it does not remove the bias variation signal, and it introduces a small "per row" noise in each sub. Each of those masked pixels has some read noise, and "stacking" them reduces that read noise about x5 - so it is a very small amount of noise - but it is embedded equally across the whole row, which creates a very visually recognizable pattern. Here is an animation of several dark subs I took with my DSLR - stretched: These have been scaled and stretched of course, but the animation shows some features that are the same between each dark sub - that is the bias signal that is not removed - and one other interesting feature is also visible: horizontal banding that is the result of this internal dark calibration. Each sub has a different random per-line offset that is visible as a dark or light band on each row. Here is the master dark created out of these darks. This is actually the master bias signal, as it does not contain dark current - that has been removed by the internal calibration. As a comparison - this is the stretched master bias for the same camera: stretched a bit harder, as I did not pay attention to the level of stretch, but you can see that the two are in fact the same - they contain the bias signal.
From this, we can see that the often given advice is very sound and is in fact the best calibration procedure for Canon DSLRs (maybe others as well):
1. Don't use darks or attempt dark scaling - it won't work, as the dark current has already been removed
2. Calibrate with master bias as both darks and flat darks (don't just remove the constant 2048 if you want best results - there is an actual bias signal / signature that you want removed rather than showing in your image)
3. Dither, to additionally spread out the horizontal line issue from the internal calibration
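A toy model of the in-camera per-row subtraction described above, assuming 30 masked pixels per row as in the post (all other numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
rows, cols, masked = 100, 500, 30

# Each row accumulates its own dark current level.
dark_per_row = rng.uniform(10, 12, size=(rows, 1))

# Exposed pixels and masked edge pixels both see that dark current
# plus their own read noise.
exposed = dark_per_row + rng.normal(0, 3, size=(rows, cols))
masked_px = dark_per_row + rng.normal(0, 3, size=(rows, masked))

# Mean of 30 masked pixels: read noise reduced ~sqrt(30) ~ x5.5, but
# the residual is stamped identically on the whole row - the banding.
row_estimate = masked_px.mean(axis=1, keepdims=True)
calibrated = exposed - row_estimate + 2048  # Canon's constant offset

# Dark current is removed on average; only the 2048 pedestal remains.
print(calibrated.mean())
```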
  23. I completely agree with you. The OP needs a T2 adapter for Nikon F that has a declared 8.5mm of optical length. I only pointed out that I was not able to find such a T2 adapter. The one that has a specified optical length has 1mm of optical length. The other one that I was able to find does not specify optical length at all. Neither does the one you linked on AliExpress. I honestly don't want to recommend something that doesn't at least have proper specifications (it might well have 8.5mm of optical length - but how do we know, unless someone using the item comes forward and says that it is so).
  24. Yes, it makes sense for a T mount, as the T mount has a 55mm flange distance - but this is an M48 connection. The Baader MPCC has ~58mm with its M48 connection, and some other CCs have a larger distance with M48.