Everything posted by vlaiv

  1. For planetary imaging you want a camera with the highest QE and the lowest read noise. Pixel size is not that important, as you can vary the amplification given by a barlow by changing the distance from the barlow element to the sensor - more distance, greater amplification. You say you have an F/5 scope and a x2 barlow. This gives an F/10 setup.

     A 2.4µm pixel camera needs ~F/9.4, a 2.9µm pixel camera needs ~F/11.4 and a 3.75µm pixel camera needs ~F/14.7 (see the sketch after this post for where those numbers come from). The first two are quite close to F/10, and placing the barlow element closer or further away will get you there. For a 3.75µm pixel camera you should really get a x3 barlow. Mind you - with a regular barlow it is easier to get higher magnification, as it is easier to move the sensor further away - just pull it out a bit or add a small extension. If you have a barlow with a removable element - then it is just a matter of getting the required distance with extenders.

     As far as guiding goes - you'll have no trouble guiding with a color camera. I do it all the time and it just works. Maybe mono cameras have a slight edge there - but I've used my ASI185 with an OAG and haven't had issues with it. If you plan to use an OAG - then a larger chip would be a waste of sensor area, because planets don't need a large chip and an OAG can't illuminate one. The OAG prism is 8mm and, depending on distance, the fully illuminated circle can be as small as 3-4mm.

     If you plan on imaging the moon and using a guide scope - then yes, look into a larger sensor, but be aware that your F/5 newtonian won't have a very large coma-free field. On an F/5 scope the coma-free zone is ~2.8mm in diameter. Adding a x2 barlow doubles that to 5.6mm, and every sensor in the table above has a larger diagonal. A larger sensor will help with guiding, as it will let you pick more stars (multi-star guiding with PHD2, and finding a suitable guide star in general).

     The ASI224 or ASI485 would be my choice, with the ASI485 as preference since it opens up more options (a larger FOV with a scope capable of it, or some DSO imaging with a fast lens, as it has a bigger sensor). With the ASI224 - consider that you might need a x3 barlow if your current one proves not too good at x3 amplification.
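     A minimal sketch of where those F-ratios come from, assuming the usual critical-sampling rule F = 2 x pixel size / wavelength. The choice of 510nm (green light) is my assumption, but it reproduces the figures above:

        # Critical-sampling F-ratio for planetary imaging: F = 2 * p / lambda.
        # Assumes lambda = 510nm (green) - this matches ~F/9.4, ~F/11.4, ~F/14.7 above.
        WAVELENGTH_UM = 0.51  # assumed wavelength in micrometers

        def ideal_f_ratio(pixel_size_um: float) -> float:
            """F-ratio at which a given pixel size critically samples the image."""
            return 2.0 * pixel_size_um / WAVELENGTH_UM

        for pixel in (2.4, 2.9, 3.75):
            print(f"{pixel}um pixel -> ~F/{ideal_f_ratio(pixel):.1f}")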
  2. @Ags Just ran into this topic. It's been some time since you posted. Have you managed to do something with this? I found a rather interesting single-hand bluetooth version of a gamepad. It is meant for VR glasses, but from what I've gathered it works as a BT controller in general: https://www.aliexpress.com/item/1005003237891833.html?spm=a2g0o.search0304.0.0.42191afemJoSb5&algo_pvid=9bba19c3-a562-4dae-8b75-272969f787cc&algo_exp_id=9bba19c3-a562-4dae-8b75-272969f787cc-14 From what I've gathered, it should not be too difficult to write an app that takes input from one of these and connects to the SynScan app on Android (iOS is a bit more difficult, as two apps can't communicate via the loopback adapter) to control it. I'm going to have a go at making such an app, unless you've managed to make it work somehow?
  3. Oh, I love this! Found another, more serious measurement of M51: the central part is up to mag 24.6, the brighter part of the tidal tails is mag 24.6-26.5, and the outer parts are fainter than mag 26.5 (marked on the image as three different zones of black-red-yellow-white gradients).
  4. Sure. Here it is: SNRCalc-english.ods Use LibreOffice to view it (maybe Excel works as well? I haven't tried it). There are two different calculations usually performed, and I'm not sure which one you are interested in. I'll briefly go over both and point out the important bits:

     1. Exposure length needed to swamp read noise
     2. Total exposure needed to reach a target SNR (this is what the above spreadsheet roughly calculates)

     Let's explain the first one. The only difference, in terms of SNR, between one long exposure and a stack of shorter exposures that add up to the same total time is read noise - or more precisely, how read noise relates to the other noise sources in the image. The other noise sources are dark current noise, LP noise and target shot noise. Usually dark current noise and target shot noise are quite low (cooled cameras and faint targets), but dark current can be significant if you are using a DSLR, as it has no cooling, and target shot noise can be significant if the target is rather bright (you don't really care in that case, as SNR will be good anyway). LP noise can also be significant, and is the major factor most of the time.

     In any case - you want your read noise to be 3 to 5 times lower than any other noise in your sub. Both LP noise and dark current noise depend on their respective signals - LP signal and dark current signal. Noise is equal to the square root of the signal (in electrons). Here is an example of how to calculate it for dark current. Say your camera has 2.5e of read noise and a dark current of 0.5e/s/px. 2.5e x 3 = 7.5e, so we want our dark current noise to be 7.5e or larger. This corresponds to a dark current signal of 56.25e (the square of the noise). Dark current builds up 0.5e every second, so we need 112.5s for it to build up to 56.25e. Based on dark current, I'd expose for a maximum of 2 minutes per single exposure (see the sketch after this post).

     A similar thing goes for the LP signal, but you need to either calculate it using the above spreadsheet - just input the SQM value for your sky (which can be measured with an SQM meter or found online at lightpollutionmap.info or a similar resource) - or measure it directly from your images. The latter is more precise, but requires you to know e/ADU for the particular ISO setting. You simply take your image, measure the background ADU value in a single exposure (in the channel of interest), convert it to electrons by multiplying with e/ADU, then take the square root - and you have your background noise. Just make sure your sub is properly calibrated (with darks, to remove the dark current signal).

     Now on to the second point / calculation: time to reach a target SNR. A very important point to understand is that there is no single SNR value in an image. There is no single SNR value for the target either - every pixel has its own SNR value. When we say "time to reach SNR", we need to make sure we take a representative signal strength for the image we want to make. You mentioned M27 and NGC7000 and gave mag 7.5 and mag 4 values. Those are integrated magnitude values - the total amount of light. We want surface brightness. A simple conversion would be to divide the total light by the surface area of the object - but that does not give you accurate results, because it assumes a uniform distribution of light, which is never the case. When talking about surface brightness, you need to estimate it - or have it measured precisely.

     I'll give you an example of how things differ in this respect. Let's take "A Justifiable Replacement for M51 Whirlpool Galaxy" as an example (just quoting what it says): it gives mag 8.1 as total brightness, and then mag 21.45 as surface brightness. The second one is derived from the first by dividing by the rough size of the object. Is that a good estimate? Not at all. Galaxies have really bright cores - in this case brighter than mag 15 - and depending on the type of galaxy, brightness can fall off as the fourth power of distance from the center. That means the outer parts of the galaxy are very, very faint.

     I'll give you two different measurements of M51 brightness. The first was taken by Roger Clark - it gives SQM in red, green and blue. It goes down to just SQM 23 due to the SNR of the image, and it only reaches the spiral arms, not the outer parts of the galaxy. The second one was actually done by me. You need at least SQM 24 for the spiral parts, and down to SQM 26-27 if you hope to capture the tidal tail of the galaxy. This is luminance brightness, by the way.

     So you see - you could falsely believe that mag 21.45 is the brightness you should use, but in reality you want to use mag 27 as your guide when imaging M51 if you want to capture the tidal tails, and mag 24 if you only want the spiral structure. Using integrated brightness and dividing by surface area will give you wrong results - as you've seen yourself. Another important point is what SNR you want to achieve. Aim at SNR >= 5 if you want detail (like in the spiral arms) and SNR >= 3 if you just want to capture nebulosity without much detail.
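     A minimal sketch of the "swamp the read noise" calculation from point 1 above, assuming noise is the square root of signal as stated; the function name and the 3x factor default are mine:

        def min_exposure_s(read_noise_e: float, rate_e_per_s: float, factor: float = 3.0) -> float:
            """Shortest exposure for which a signal's shot noise exceeds factor * read noise.
            Noise = sqrt(signal), so we need signal >= (factor * read_noise)^2."""
            target_noise = factor * read_noise_e        # e.g. 3 x 2.5e = 7.5e
            target_signal = target_noise ** 2           # 7.5^2 = 56.25e
            return target_signal / rate_e_per_s         # 56.25e / 0.5 e/s = 112.5s

        # Example from the post: 2.5e read noise, 0.5 e/s/px dark current.
        print(min_exposure_s(2.5, 0.5))  # -> 112.5 seconds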
  5. That won't do anything useful. The flat field should always be the same - regardless of the flat source used - as it is there to correct the system response. We don't align flat frames, so trying to dither them doesn't make much sense.
  6. Definitely try using ASCOM. I used the native driver at some point and found inconsistencies with offsets and things like that - that is why I asked which driver you used.
  7. I guess it is just a hot pixel of sorts. In theory, using a matching flat dark instead of bias should remove the hot pixel - unless it saturates, in which case it will just be 0 (max_value - max_value = 0). Using any sort of sigma / percentile rejection will not work on hot pixels in flats. It works on lights - but only if you dither. That way the hot pixel moves around (it stays in the same place with respect to the sensor, but moves around once you star-align the lights). When the hot pixel moves around, it gets stacked against otherwise normal pixels, and percentile rejection sees the odd one out and removes it. When you stack flats, the hot pixel is in the same spot every time. There is no odd pixel in the stack - all of them are hot in that place - and the algorithm can't tell whether the value is normal or abnormal (see the sketch after this post). You can use a hot pixel map to remove such pixels from flats - but if you dither and use sigma / percentile rejection on your lights, then you don't really need to bother. Those pixels will be wrongly calibrated, but they will be rejected from the stack because of their weird value (or at least they should be, if all is well - dithered and all).
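     A minimal numpy sketch of why dithering plus rejection removes hot pixels from lights but not flats. The MAD-based clipping function and the toy frames are mine - not taken from any particular stacking package:

        import numpy as np

        def sigma_clip_mean(stack, sigma=3.0):
            # Per-pixel mean over frames, rejecting values that deviate from the
            # per-pixel median by more than sigma robust standard deviations.
            med = np.median(stack, axis=0)
            mad = np.median(np.abs(stack - med), axis=0)
            keep = np.abs(stack - med) <= sigma * 1.4826 * mad
            return np.where(keep, stack, 0).sum(axis=0) / keep.sum(axis=0)

        rng = np.random.default_rng(0)

        # Dithered lights: after star alignment the hot pixel lands on a different
        # position in each frame, so in any one pixel column it is a lone outlier.
        lights = rng.normal(100, 5, size=(16, 16))
        for i in range(16):
            lights[i, i] = 4000.0            # wandering hot pixel
        print(sigma_clip_mean(lights)[:4])   # values near 100 - hot values rejected

        # Flats: the hot pixel sits in the same spot in every frame. The whole
        # column is "hot", nothing looks abnormal, and rejection cannot help.
        flats = rng.normal(20000, 100, size=(16, 16))
        flats[:, 3] = 60000.0
        print(sigma_clip_mean(flats)[3])     # still 60000 - kept, not rejected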
  8. That is ok - that is how CMOS darks with amp glow should look after calibration: some residual horizontal banding (which averages out over multiple darks) and some noise where the amp glow was subtracted. This one is very strange, by the way. I don't think it is a light leak - the dark looks like a proper dark without any added signal, except that it has a very high offset. The FITS header shows the same settings for it and the light. What driver did you use to capture these images - native or ASCOM?
  9. If you plan on using OSC with it - consider adding an Astronomik L3 filter into the mix, or simply get a triplet for imaging.
  10. I'm asking because I've seen some minor fringing on FPL-53 models as well when using OSC.
  11. What scope is that? I see that you have a 102ED F/7 in your signature, so I'm guessing it's that one - but which model is it? FPL-51 or FPL-53?
  12. So the actual mount does not have a hole to accept the CW bar? I've not heard of that before, but good to know that it can happen.
  13. Well, one of the great skills of processing is to push the data only as far as it will let you. If the image is too noisy, then you probably stretched a bit too far. There is nothing wrong with having a bit of noise in the image. Also - knowing how much noise can be removed without significant impact on image quality is another skill worth learning. In the end, you should do a masked denoise. Noise levels are intimately tied to signal levels in the image - so you should denoise only the fainter parts of the image. You do that by creating a layer mask, using the luminance of the image as the mask itself.

     Here is an example - this is the lum layer of NGC7331 from a couple of years ago. That is stretched as far as the data will let me. If I try to stretch a bit more, background noise starts to pop up. Can I deal with it? Well, I can denoise, sure, but look how much softer the image becomes. Let's mask things instead and apply denoising only to the background. I'm using an inverted intensity mask here, so where the mask is bright, the denoised version will show through. In the end, blended and with the black point restored, the result keeps the detail while the background stays smooth (a sketch of the blend follows below).

     You can also do the above mask trick with sharpening. You can really only sharpen high-SNR areas - if you sharpen the background, noise pops up. Do the inverse masking this time, so that the background comes from the smooth image and the detail comes from the sharpened image. There you go - stars and detail pop up a bit, and noise and background are held at bay.
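     A minimal numpy/scipy sketch of that luminance-masked blend. The Gaussian blur stands in for whatever denoiser you actually use, and the min/max normalization of the mask is my choice:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def masked_denoise(lum, blur_sigma=2.0):
            # Stand-in denoiser: a simple Gaussian blur of the whole frame.
            denoised = gaussian_filter(lum, blur_sigma)
            # Luminance mask in [0, 1]: bright (high SNR) areas -> 1, background -> 0.
            mask = (lum - lum.min()) / (lum.max() - lum.min())
            # Background (mask ~ 0) takes the denoised pixel; bright areas keep detail.
            return mask * lum + (1.0 - mask) * denoised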
  14. I actually like it quite a lot. There are a couple of things that would normally bother me (like the background color variance and color noise) - but they seem to work quite well in this image and don't bother me nearly as much.
  15. Here we have to be careful about what we mean when we say OSC binning. OSC binning is possible in every sense of the word binning - but it is a bit different to mono binning. We expect two major things to happen when we bin:

     1. we expect resolution / sampling rate to go down
     2. we expect SNR to go up

     Both of the above happen predictably for mono data - if we bin with a certain factor N (x2, x3, x4 ...), then resolution goes down by factor N and SNR improves by factor N. For OSC data it is not quite like that, because OSC data is "sparse" data to begin with. We think that we sample at a certain resolution / sampling rate, and that our sensor has "that many pixels" - but with OSC cameras that is not quite true. Only 1/4 of the total pixels are red, 1/4 are blue and 1/2 are green (for an RGGB type Bayer matrix - there are other types that don't use RGB or use it differently, but all astro OSC cameras use some sort of RGB with two green pixels). So for OSC, things are as follows:

     1. If we use interpolation debayering - the kind that gives us the "regular" resolution / pixel count of the sensor - and we bin, the following will happen:
     - resolution / sampling rate will go down as N
     - SNR won't improve as N. This is because SNR improvement expects "pure" (uncorrelated) pixels, but when we interpolate, we introduce pixel correlation - 3/4 of the red pixels are in fact just derived from that 1/4, so they are not valid "measurements". This is a bit like copying some of your subs and then stacking, expecting SNR to improve - that won't happen, as we don't have new data, just several copies of the old/existing data.

     2. If we use split / super pixel debayering - the kind that gives us only half the resolution of the sensor (or 1/4 of the pixel count):
     - resolution / sampling rate will go down as 2*N - this is because we "acknowledge" that our sensor is in fact "sparse" and is already sampling at half the resolution for each color (here we treat green as two "colors" - G1 and G2 - but we stack them onto the same stack, so green will be slightly better than the other two colors in terms of SNR - which is good, as it carries the most luminance information)
     - SNR improvement will be as N

     With the above, we can see that super pixel debayering is not really 100% like binning (see the sketch after this post). It does reduce the sampling rate, but it does not improve SNR over interpolation debayering as much as binning does. In fact, it only improves the green channel, as it "stacks" the two green components, while red and blue remain as they are.

     You really should not get too hung up on bit count - it is really not important if you handle your data properly. Many people use the ASI1600 and produce excellent results - and it is only a 12bit camera. Each time we stack, we increase the bit depth of the final stack, so even a 12bit camera can easily produce a 20+ bit precision image.
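     A minimal numpy sketch of split / super pixel debayering for an RGGB sensor. The channel layout and the choice to average the two greens are my assumptions about a typical implementation:

        import numpy as np

        def superpixel_debayer(raw):
            # RGGB Bayer pattern: each 2x2 cell is [[R, G1], [G2, B]].
            # Output channels are half resolution in each axis - we "acknowledge"
            # the sparse sampling instead of interpolating missing values.
            r  = raw[0::2, 0::2]
            g1 = raw[0::2, 1::2]
            g2 = raw[1::2, 0::2]
            b  = raw[1::2, 1::2]
            g = (g1 + g2) / 2.0  # "stacking" the two greens - green gets the SNR edge
            return r, g, b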
  16. What seems to be the problem there? If you are using an IR pass filter, you should treat your data as monochromatic. If your IR pass filter passes everything above, say, 800nm, then all three colors have roughly the same response there - your image will in effect be monochromatic. That also means that you won't be able to get any color from that image. Did you resize your images? Neptune seems too small in comparison to Uranus. It should be larger than half of Uranus - but in the above images it looks like it is ~1/3 of it by diameter (according to Wiki, Uranus' angular diameter ranges over 3.3″ – 4.1″ and Neptune's over 2.2″ – 2.4″ - so even in the worst case, Neptune's apparent diameter will be larger than half that of Uranus; a quick check follows below).
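     A quick check of that size ratio, using the ranges quoted above:

        uranus = (3.3, 4.1)     # angular diameter range, arcseconds
        neptune = (2.2, 2.4)
        worst_case = neptune[0] / uranus[1]   # smallest Neptune vs largest Uranus
        print(f"worst-case diameter ratio: {worst_case:.2f}")  # ~0.54, still > 0.5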
  17. Those are just denoising artifacts. Try processing your image without the denoising algorithm applied. In a busy star field, denoising can blur out the faintest of stars, and if they happen to be ordered in chains, that starts to look like a strand of light. You might be slightly oversampling as well.
  18. That should not be too difficult with a 10" scope. How bad is your LP? I've managed to detect it from a red/white zone border (SQM 18.5) with an 8" scope. Just make sure it is positioned right and transparency is good.
  19. I think the light loss is quite strong - it could be considered a NB filter. I think it will be good for achros because it is centered at 540nm. Most achros are optimized for that wavelength, so there is not much spherical aberration, and the filter is narrow enough that CA is not an issue at all. I used it visually with a 102mm F/5 achro on the moon and the view was razor sharp. It gives better resolution than, say, the Ha narrowband often used for lunar imaging, because of the shorter wavelength (see the sketch after this post). An alternative is to use an OIII NB filter. It is also good - at 500nm, again in the zone of good correction, and the wavelength gives good resolution, but it is further into the blue part of the spectrum, where the atmosphere causes more disturbance. Solar Continuum also has a wider FWHM - which means it passes more light.
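     A minimal sketch of why the shorter wavelength helps resolution, using the Rayleigh criterion θ = 1.22·λ/D; the 102mm aperture is the scope mentioned above:

        import math

        def rayleigh_limit_arcsec(wavelength_nm, aperture_mm):
            # Rayleigh criterion: theta = 1.22 * lambda / D (radians), in arcseconds.
            theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
            return math.degrees(theta_rad) * 3600

        for name, wl in (("Solar Continuum 540nm", 540),
                         ("OIII 500nm", 500),
                         ("Ha 656nm", 656)):
            print(f'{name}: {rayleigh_limit_arcsec(wl, 102):.2f}"')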
  20. You should try Solar Continuum with achromats for the moon, if you have one. I think that is the best filter for imaging the moon with achromats.
  21. Seen that, but I still don't have the funds for it - that purchase is going to have to wait another year or two.
  22. It is an IMX290 based camera - so 2.9µm pixel size and an ideal F/ratio of 11.4, which is very close to F/12 on that Maksutov. @Mr niall I would just check the gain settings for that camera. I'm not sure what the regular range of gain values is, but you want higher gain to lower the read noise. The attached settings also show that you used white balance and gamma settings? It is best to leave all of those at default values - you'll fix the color in post-processing. Maybe also uncheck the sharpen option in AS!3.
  23. I don't really know how they perform. Based on the specs they should perform fine, but we can't be sure without first-hand experience.