Everything posted by vlaiv

  1. That is probably so, but whatever BSI/FSI construction has been used is already incorporated into the QE figure - so there is no need to account for it again. Just compare QE and that is all you need to know.
  2. I wonder when we will see the first "plastic" lens telescope. We already have eyepieces that don't contain glass but rather some sort of plastic material / polymer, right?
  3. Sensor "illumination type", which is just a fancy term for sensor construction, does not add any extra efficiency. The complete efficiency - due to whatever feature of the sensor - is already accounted for in QE. Back/front illumination is nothing special that modifies QE or sensitivity of the sensor - it is just a type of construction. It's like having a 6" SCT and a 6" Newtonian and saying - but the 6" SCT must have larger aperture because it is an SCT. No - aperture size is given for both scopes and is independent of their construction. Back vs front just refers to the "order" of elements in the sensor itself - where the photo diode goes and where the metal wiring goes - is it at the front (facing the front of the sensor) or at the back of the sensor.
  4. Nice detail, but for some reason I don't like that shadow effect on the left side of the image. It perhaps signals some issue with the black point during processing (I might be wrong). Do you apply gamma 2.2 to your linear data?
  5. No. QE is quoted in general - regardless of what is done to the sensor to achieve it. 91% means, as you say, the fraction of recorded photon hits. This relates to photons hitting the sensor - regardless of whether they are detected at the "front" or the "back" of the silicon substrate. You shine 100 photons and out of those 91 will get caught by the sensor (on average of course - photon statistics and all). With back illuminated sensor technology it is just easier to get to 91%, that is all.
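
A quick numpy sketch of that photon-statistics remark - the 100 photons and 91% QE are just the example numbers from the post above:

```python
import numpy as np

rng = np.random.default_rng(0)

qe = 0.91          # quoted quantum efficiency
n_photons = 100    # photons arriving at the sensor per trial (example number)
n_trials = 100_000

# each arriving photon is detected with probability ~QE
detected = rng.binomial(n_photons, qe, size=n_trials)

print(detected.mean())   # ~91 detected photons on average
print(detected.std())    # spread from detection statistics, roughly 3
```
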
  6. There is no maximum theoretical magnification for any given scope. Don't agree? Stick in a shorter FL eyepiece and you'll get higher magnification - don't have a shorter FL eyepiece? Throw in a barlow lens. There is just a point after which there is, in principle, nothing to be gained from magnification, although people tend to use higher magnification than that because it is easier to view.

     Check out visual acuity: https://en.wikipedia.org/wiki/Visual_acuity - especially the table with the MAR column. It says that someone with 20/20 vision is able to resolve down to a 1 arc minute angle - they can't resolve a smaller angle than that. It also says that the actual angle you can resolve depends on your visual acuity. If you have 20/30 vision, you'll only be able to resolve 1.5' rather than 1', but if you have 20/10, you'll be able to resolve 0.5'.

     Now take any definition of the resolving power of a telescope - the Rayleigh criterion, the Dawes limit, or the one related to the Airy disk and spatial cutoff frequency - whatever you like - and you'll get a rather surprisingly small magnification number. Say we take the Rayleigh criterion with 500nm light and 100mm of aperture: 1.22 * 0.5µm / 100000µm = ~1.26". So for 100mm of aperture, the Rayleigh criterion says we should resolve features separated by 1.26 arc seconds. Say we are a 20/20 vision person, which means we can resolve 1', or 60". It only takes 60" / 1.26" = ~x47.6, or about x48 magnification, to see all there is to see - much smaller than what x50 per inch or similar "rule of thumb" criteria say. If we have worse vision than that and can only resolve 1.5' (90"), then we might need 90" / 1.26" = ~x72 magnification instead.

     Thing is - no one really likes to look at detail that is at the edge of their ability to resolve. We like to view it at about x2-x3 more magnification than the bare minimum; it is easier to see and we don't have to strain as much.

     So there you go: a 20/20 vision person will enjoy x100-x150 magnification with 100mm of aperture, a 20/30 vision person will enjoy x150-x200, while someone with very sharp vision (20/10) will enjoy x50-x75, beyond which the image will start looking softer. So you see - there is no "maximum useful" magnification, only a minimum magnification that resolves everything, which depends on your eyesight, and then a "comfortable" viewing magnification that is usually 2-3 times higher - again personal preference.
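
A minimal Python sketch of the arithmetic above, using the same example numbers (500nm, 100mm aperture, and the three acuity values); the function names are just for illustration:

```python
import math

def rayleigh_limit_arcsec(aperture_mm, wavelength_nm=500):
    """Rayleigh criterion: 1.22 * lambda / D, converted from radians to arcseconds."""
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return math.degrees(theta_rad) * 3600

def min_magnification(aperture_mm, eye_resolution_arcmin=1.0):
    """Magnification at which the scope's resolution limit matches the eye's."""
    return eye_resolution_arcmin * 60 / rayleigh_limit_arcsec(aperture_mm)

print(round(rayleigh_limit_arcsec(100), 2))   # ~1.26"
print(round(min_magnification(100, 1.0)))     # ~48  (20/20 vision)
print(round(min_magnification(100, 1.5)))     # ~72  (20/30 vision)
print(round(min_magnification(100, 0.5)))     # ~24  (20/10 vision)
```
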
  7. This is somewhat related to this thread, but I wanted to ask if anyone has any idea why this is so: I don't see an uncooled camera version of the 533, either color or mono. These sensors are supposed to be amp glow free and to have very uniform and low dark current - that is the ideal case for a non-cooled camera - yet we don't see them. Does anyone have any idea why? @Adam J ?
  8. I think you have a rather nice M31 there. No color for sure (I had to ditch it, as a duo band filter is not suited to this type of target), but the extracted luminance is nice.
  9. Yes, the signal is the same - it is the SNR in a given amount of time that differs. Want the same SNR? Then image for more time with the camera that has lower QE (or change some other parameter of the imaging setup - like aperture size or transparency or whatever).
  10. None of those things matters, as all of that is controlled by stacking and by the number of subs and their individual duration. It will produce better SNR - but the signal will be the same with both cameras. The signal is, say, 10 photons / second / pixel. You can't make that number better - you can only determine it with more precision - say 10 +/- 0.1 photons or 10 +/- 0.001. The second measurement has better precision - or, as we say in imaging, higher SNR. What 91% vs 60% QE allows you to do is hit your target SNR in less time. That is what I wrote above - the ASI2600 is a faster camera, and you said you are not interested in a faster camera as you can throw more time at the image.
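
A minimal sketch of that QE / time trade-off for a purely shot-noise-limited case - the 10 photons/second/pixel figure is the hypothetical number from the post above:

```python
import math

flux = 10.0   # photons / second / pixel arriving at the sensor (hypothetical)

def snr(qe, seconds, flux=flux):
    """Shot-noise-limited SNR: detected signal / sqrt(detected signal)."""
    detected = qe * flux * seconds
    return detected / math.sqrt(detected)

print(round(snr(0.91, 10 * 3600), 1))   # 10h at 91% QE
print(round(snr(0.60, 10 * 3600), 1))   # 10h at 60% QE - lower SNR in the same time
# reaching the same SNR needs time scaled by the QE ratio:
print(round(snr(0.60, 10 * 3600 * 0.91 / 0.60), 1))   # matches the 91% figure
```
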
  11. I think the problem is in the understanding of the terminology used. Flats don't remove light pollution - they correct the influence of the optical train on light pollution (as on any other light). I never said flats should remove or fix light pollution in any way other than flat correction.
  12. I'm sorry that you feel that way, but the assertion is not wrong, as all the light entering the aperture of the telescope is subject to vignetting / attenuation by the optical train. Flats correct for that, and if the sky were not subject to this, sky flats would not work. Have a nice day too.
  13. Flats correct light pollution as well. Say you have a 0.1% difference in the flat due to amp glow - which is possible, given that amp glow can be up to 2e with a flat signal of 2000e (12 bit camera, unity gain, ~3/4 histogram + some vignetting). You have 1000e of light pollution which you correct with flat values of 80% and 80.1% (the latter raised by amp glow), so that is 1000e / 0.8 = 1250e and 1000e / 0.801 = ~1248.44e. That is ~1.5e of difference left in the background signal when you remove the background (you can't remove it with background extraction, as it is neither linear nor a 2nd-3rd order polynomial). It is easily seen when the data is stretched.
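
A quick Python check of those numbers (values taken straight from the post above):

```python
sky = 1000.0            # e- of light pollution signal
flat_clean = 0.800      # normalized flat value without amp glow
flat_glow  = 0.801      # same flat value raised ~0.1% by amp glow residue

corrected_clean = sky / flat_clean   # 1250.00 e-
corrected_glow  = sky / flat_glow    # ~1248.44 e-

print(corrected_clean - corrected_glow)   # ~1.56 e- left imprinted on the background
```
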
  14. Whether it is significant or not actually depends on other factors. I've measured amp glow to be 1-2e higher than the surrounding signal. The ASI1600 is a 12 bit camera and people often use unity gain, so flats will contain about 3000e of signal. That is a variation of about 1/1500 in intensity if the glow is not removed. If one is shooting in a very light polluted area, the background sky signal can be quite high, and you are correcting that background sky signal with flats that carry this variation. It will create some variation in the corrected background signal - the more signal there is, the greater the variation in absolute value. We end up removing that background - but the variation will remain.
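
A small sketch of how that residual scales with sky level, to first order - the ~1/1500 flat variation is the figure from the post above, the sky levels are hypothetical:

```python
flat_variation = 2.0 / 3000.0     # ~1/1500 relative error in the flat due to amp glow

for sky in (500, 2000, 10000):    # e- of sky background per pixel (hypothetical levels)
    residual = sky * flat_variation   # first-order estimate of the imprint left in the sky
    print(f"sky {sky:>6} e-  ->  ~{residual:.1f} e- of pattern left after background removal")
```
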
  15. Not really sure what that means. Imagine you have two instruments that measure the height of something. One gives you a fast readout (a few milliseconds) and is precise down to 1/10th of a millimeter, while the other gives you a slower readout - you need to wait 0.2s - and is precise down to 1/5mm. Will one give you better data than the other? No - the data is what it is. In an image it depends on the sky, mount, telescope and all those factors - but the camera just records what light gets to the sensor. It is not responsible for the "quality" of the light hitting it. Both cameras will produce the same quality data given enough time (except for microlensing - which is a true measurement artifact present in the Panasonic sensor and not in the other).
  16. Once you calibrate the data, with both cameras you are left with 1) signal and 2) noise. The signal is the same in both cameras - if that were different, one of the cameras would be seriously flawed. Noise is characterized by two things - magnitude and shape. Magnitude is really not that important here for "quality of the data" - it relates to speed, or how much time you need to spend to lower the magnitude of the noise below a certain threshold. This leaves us with the "shape" of the noise, and hopefully that shape is random. The Panasonic sensor does have a feature that makes its noise less than truly random - it has telegraph-type noise, but with dithering that gets spread around and "drowned" in regular random noise. I haven't seen many other issues with that sensor. For the IMX571 I simply don't know, as I haven't handled one yet, but I do believe its noise is random enough. What I'm saying above is that both sensors are good and capable of producing equal quality data given enough time (the Panasonic obviously needing more).
  17. The main difference is in two things: 1. speed of capture and 2. absence of microlens artifacts.

      Speed of capture comes in two distinct "flavors":
      1a - higher QE. We can say that the ASI2600 has up to 50% higher QE than the ASI1600, so in 10 hours with the ASI2600 you'll collect as much signal as in 15h with the ASI1600.
      1b - size of sensor. This is not something you should really consider, because you won't be changing your scopes, but a larger sensor is a faster sensor because it can be paired with a bigger scope. In fact, this is something you might find interesting with your C11 - when you bin x2 or x3, you get a small pixel count in the final image with the ASI1600. Bin x3 will give you an image that is about 1600x1200, if I'm not mistaken. With the ASI2600 you'll get a larger image for the same bin factor.

      The second point is self explanatory. You say that speed is not that important - then it really comes down to the microlens artifacts: would you spend your money to get rid of them?
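
A small sketch of the speed and binning numbers above, assuming the published sensor resolutions (ASI1600: 4656x3520, ASI2600: 6248x4176) and the ~50% QE advantage quoted in the post:

```python
# time needed for equal signal scales inversely with QE
qe_ratio = 1.5                       # ASI2600 QE up to ~1.5x that of ASI1600 (figure from the post)
hours_asi2600 = 10
print(hours_asi2600 * qe_ratio)      # ~15h with the ASI1600 for the same signal

# pixel count left after software binning
def binned(width, height, factor):
    return width // factor, height // factor

print(binned(4656, 3520, 3))   # ASI1600 bin x3 -> (1552, 1173), roughly the 1600x1200 mentioned
print(binned(6248, 4176, 3))   # ASI2600 bin x3 -> (2082, 1392)
```
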
  18. If the raw images contain a first order (linear) gradient, then the stack will also contain a first order (linear) gradient, as the stack is formed by a linear process between subs (a weighted average, for example, is nothing more than a linear process - addition with a multiplicative constant). What best describes the gradient depends on the image and, of course, the type of gradient. Wide field images are more likely to have higher order gradients in them, as they contain more sky and it is likely that they have "part of the light dome" in them. A smaller field of view will be better approximated with a linear gradient, as even complex surfaces can be represented by a linear segment on small scales (think: round earth, but flat on scales of a few km). How well the background can be removed also depends on the type of object in the image. Star clusters, galaxies (that take up a smaller part of the FOV) and galaxy clusters are easy to deal with. Nebulosity that extends across the FOV is much harder, and there choosing a higher order polynomial for the fit can mess things up more than the usual linear one. I'd say the rule of thumb is: use linear unless you have a very wide field of view - more than 5-10° - in which case you might need a higher order approximation.
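
A minimal numpy sketch of fitting and subtracting a first order (planar) background, using a synthetic frame standing in for a set of selected background pixels:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic frame: flat sky + a linear gradient + noise
h, w = 200, 300
yy, xx = np.mgrid[0:h, 0:w]
image = 100 + 0.05 * xx + 0.02 * yy + rng.normal(0, 1, (h, w))

# fit background = a*x + b*y + c by least squares over the (assumed) background pixels
A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(xx.size)])
coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)

background = (A @ coeffs).reshape(h, w)
flattened = image - background

print(coeffs)            # ~[0.05, 0.02, 100] - the gradient and sky level are recovered
print(flattened.std())   # residual is just the noise, ~1
```
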
  19. https://www.firstlightoptics.com/ovl-eyepieces/panaview-2-eyepieces.html - good for the 120 Evostar, not so much for the Dob. In essence the same eyepiece, maybe a bit lighter: https://www.teleskop-express.de/shop/product_info.php/info/p957_TS-Optics-38-mm--2----70--Wide-Angle-Eyepiece.html

      Good in both F/6 and F/7.5 are the Aero ED clones - no longer available at FLO, but they can be sourced elsewhere - 35mm version: https://www.teleskop-express.de/shop/product_info.php/info/p2334_TS-Optics-35-mm-2--UFL-Eyepiece---69--Field-of-View---6-Element-Design.html and 40 mm from Lacerta: https://tavcso.hu/en/product/LA40ed

      Maybe this one (costing a bit more): https://www.teleskop-express.de/shop/product_info.php/info/p9549_Explore-Scientific-62--LER-Eyepiece-40-mm--argon-purged.html
  20. Well, that sorts out all my problems, thank you!
  21. It seems that I need to use the command line and the cd command to change the working directory to where I want it to be. That is good enough - but not quite intuitive.
  22. Ok, excellent - I finally figured it out. I have one more question, if you don't mind: how do I specify the destination folder / directory where I want the files to be located? (Btw, I like the way one can use a single FITS file as a sequence - it helps with file clutter.) The destination box won't let me type a path like D:\my_folder\test_sequence
  23. Ok, I seem to get it now. Not sure why I have to make a copy of the input files instead of just adding them (on Windows symlinks seem not to be working), but I can create a sequence, and then when I export things from the sequence it should give me normal values, right?
  24. I can't seem to find it. I just want to do the registration step using Siril. I have my own set of calibrated images and my own way of stacking them in the end. All I want to use Siril for, for the time being, is to register the images on a reference using Lanczos resampling - and when I do that, the resulting images are scaled 0-1.
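
Not Siril-specific, but a minimal astropy sketch of undoing that kind of 0-1 normalization on exported frames, assuming the original data were 16-bit ADU - the folder pattern and scale factor are assumptions, not Siril behaviour:

```python
import glob
import numpy as np
from astropy.io import fits

# placeholder pattern for the registered frames exported by the registration step
for path in glob.glob("registered/*.fits"):
    with fits.open(path) as hdul:
        data = hdul[0].data.astype(np.float32)
        # if the frame has been normalized to 0-1, scale it back to the 16-bit ADU range
        if data.max() <= 1.0:
            data *= 65535.0
        fits.writeto(path.replace(".fits", "_adu.fits"), data, hdul[0].header, overwrite=True)
```
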
  25. This is the first time I've seen it like that. I've seen a regular square grid at an angle, due to rotation and the use of bilinear interpolation, but never anything this warped.