Everything posted by vlaiv

1. I have no idea, but I suspect a low bit count in the data. Did you by any chance take a PNG file that was posted online and is only 8 bits per pixel? That could explain the artifact. Maybe convert it to 32-bit float prior to DBE?
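As an illustration - a minimal NumPy sketch of that conversion, assuming the PNG has already been loaded as an 8-bit array (the loading step itself is omitted):

```python
import numpy as np

# Hypothetical 8-bit image data, as it would come from an online PNG.
img8 = np.array([[0, 64],
                 [128, 255]], dtype=np.uint8)

# Convert to 32-bit float, normalised to the 0..1 range, before DBE.
# This does not add real precision, but it prevents further
# quantisation during background extraction and stretching.
img32 = img8.astype(np.float32) / 255.0
```

The conversion won't recover detail that was lost in the 8-bit export, but it stops the artifact from getting worse downstream.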
2. I did all of them on my HEQ5. I figured they couldn't be of the highest quality, so I went to a local shop and told the guy working there - give me the best of these. He just replied, "You want SKF ...." and gave me everything I needed to replace each of them.
3. Hope you'll be OK! All OSC sensors behave the same - they have a Bayer matrix and in principle sample at half the rate that the pixel size would suggest. This is not exclusive to dedicated astronomy cameras. Although a single sub will sample at a rate corresponding to double the pixel size, a stack won't quite behave like that - in fact it depends on what sort of debayering one applies. Probably the best approach would be the Bayer drizzle algorithm or upsampled integration. I don't think the latter is available as a "well known algorithm" - but you could easily replicate it for testing purposes. As long as you can do split debayering (which uses neither interpolation nor super pixel mode, but instead produces a smaller image for each channel - and twice as many subs for green, because there are two green pixels for each red and blue) - you can resample your images to a larger size using Lanczos-3 resampling and then integrate those. That will be the closest thing to upsampled integration. Of course, Bayer drizzle is an alternative to be tried.
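The split-debayer step can be sketched roughly like this (NumPy, assuming an RGGB Bayer layout - other patterns need the offsets adjusted; the nearest-neighbour upsample at the end is only a stand-in for the Lanczos-3 resampling described above):

```python
import numpy as np

def split_debayer(raw):
    """Split an undebayered RGGB frame into half-size channel images.

    No interpolation and no super-pixel averaging: each Bayer site
    becomes one pixel of its channel image. Green yields two frames
    because there are two green sites per 2x2 cell.
    """
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, g1, g2, b

raw = np.arange(64, dtype=np.float32).reshape(8, 8)
r, g1, g2, b = split_debayer(raw)

# Upsample each channel x2 before integration. A real workflow would
# use Lanczos-3 here; nearest-neighbour is shown only to keep the
# sketch dependency-free.
r_up = np.repeat(np.repeat(r, 2, axis=0), 2, axis=1)
```

Integrating the upsampled channel frames (with two green frames per sub) then approximates upsampled integration.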
4. While I'm against such binning of CMOS data in drivers - the data is perfectly fine as long as you have a stacking application that can handle it. You'll need your calibration data binned as well, and you can continue shooting binned if you want to add more data. There will be little to no difference in final image quality if you bin your color data, as the human eye is more sensitive to detail in luminance than to changes in color - this means that if luminance is shot at bin x1, it will be sharp (provided everything else leads to sharp data - seeing, tracking, guiding, sampling rate, etc.).
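For reference, binning in software after capture gives an equivalent result while keeping the raw data intact - a minimal sketch of x2 average binning in NumPy:

```python
import numpy as np

def bin2x2(img):
    """Software 2x2 average binning (image dimensions assumed even).

    Equivalent in SNR terms to the average binning some drivers
    offer, but done after capture, so the unbinned raws survive.
    """
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

frame = np.arange(24, dtype=np.float32).reshape(4, 6)
binned = bin2x2(frame)
```

Each output pixel is the mean of one 2x2 block, so the image shrinks by half in each dimension and per-pixel noise drops accordingly.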
5. In this particular case, the frame that does not get stacked is out of focus. Here is the same section from both files at 1:1 zoom (one screen pixel per image pixel, i.e. 100% zoom). Notice the star size in the left image - it is quite acceptable - while on the right it has at least x3 larger diameter, due to poor focus (it could be other things - like particularly poor seeing or issues with tracking / guiding - but my money is on poor focus).
6. I advocate using SER because: a) It is a lighter-weight format - it uses just 178 additional bytes for the header and the rest is raw frame data, with an optional block of timestamps appended at the end. AVI, on the other hand, is an audio/video interleaved container format for both audio and video data, capable of storing a variety of pixel and compression formats. From this it is obvious that SER will take less space for the same pixel format. b) Given that AVI is what it is, it can contain any number of video formats, including lossy video compression. Although experienced planetary imagers will know to avoid that and use raw data even if it is recorded in AVI, many other people won't know that and will happily choose the AVI format. In doing so, they run the risk of using formats like YUV, which has chroma subsampling, or even compressed formats like MJPEG that introduce artifacts into the recorded data. For the benefit of the general public, instead of a convoluted discussion on pixel formats and raw vs lossy compressed images, I just advocate SER to everyone. That way people avoid the pitfalls associated with AVI. Have a look at http://www.grischa-hahn.homepage.t-online.de/astro/ser/ for a description of SER. It is a lightweight format that is particularly suitable for planetary imaging. In no way does it add overhead that will slow down capture - on the contrary, it will be faster than AVI.
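As a rough illustration of how light the format is, here is a sketch that parses the fixed 178-byte SER header (field layout taken from the spec linked above; treat it as a reading aid, not a complete reader):

```python
import struct

SER_HEADER_SIZE = 178  # fixed-size header per the SER specification

def parse_ser_header(hdr: bytes):
    """Parse the fixed 178-byte SER header.

    Layout per the spec: 14-byte FileID ("LUCAM-RECORDER"), seven
    32-bit ints (LuID, ColorID, LittleEndian, ImageWidth, ImageHeight,
    PixelDepth, FrameCount), three 40-byte strings (Observer,
    Instrument, Telescope) and two 64-bit timestamps. Raw frame data
    follows immediately; optional timestamps go at the end of file.
    """
    assert len(hdr) >= SER_HEADER_SIZE
    file_id = hdr[0:14].decode('latin-1')
    (lu_id, color_id, little_endian,
     width, height, depth, frame_count) = struct.unpack('<7i', hdr[14:42])
    return {'file_id': file_id, 'width': width, 'height': height,
            'pixel_depth': depth, 'frame_count': frame_count}
```

Everything after those 178 bytes is just uncompressed pixel data, frame after frame - which is why capture is so cheap.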
7. Don't capture in 8 bit unless you know what you are doing. Use SER. This paper discusses issues when stacking with fixed-precision arithmetic. Avoid stacking with fixed-precision arithmetic - use floating point for the result of stacking whenever possible. It does not talk about what happens when you use an 8-bit capture format. For example, the ASI174 has no native 8-bit ADC - it uses the 10-bit ADC when doing 8-bit capture, and the 8 bits produced are the MSBs of those 10 bits. You need to adjust your gain so that there is no data loss due to this clipping of the LSBs. For the 2 least significant bits not to matter, you need to adjust your gain to 0.25 e/ADU. Since the unity gain value is 189, you need to add 61 twice to get the 0.25 e/ADU value - this means a gain of 311 (in units of 0.1 dB). You also need to examine the bias at these settings, check for clipping to the left, and adjust the offset accordingly.
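The gain arithmetic can be checked with a small helper (the unity-gain setting of 189 and the 0.1 dB step size are the ASI174 figures from the post - verify against your own camera's gain chart):

```python
def e_per_adu(gain_setting, unity_setting=189, unity_e_adu=1.0):
    """e/ADU for a ZWO-style gain setting expressed in 0.1 dB steps.

    Each +6.02 dB of gain (about 60 steps) halves the e/ADU value,
    so two halvings from unity take the gain from 1.0 to ~0.25 e/ADU.
    """
    db = (gain_setting - unity_setting) / 10.0
    return unity_e_adu / (10.0 ** (db / 20.0))
```

At a setting of 311 this comes out at roughly 0.25 e/ADU, so one electron spans four 10-bit ADU and dropping the two LSBs in 8-bit mode loses no signal.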
8. Here is what the general workflow should look like - but I don't use PI, so I can't give specific details on how to do each step:
1. Load the raw subs and convert them to 32-bit mono images.
2. Load the calibration subs (darks, flats, flat darks).
3. Do calibration as you would for mono images. Bear in mind that your calibration subs should also be mono - created the same way: load the raw subs and treat them as mono when making masters.
4. Use such calibrated but undebayered subs in the Bayer drizzle stacking method.
Regular drizzle works by taking each pixel and "reducing" its size - thus creating "empty" space around it. It then aligns such pixels to the output image and places them on it (drizzles pixels over the output image). This is in principle what any resampling algorithm does when you change resolution - except it reduces pixels even further, to a single point (in mathematical terms) - it just does not drizzle such points onto the output image, as that would be pointless (pun?) since points have no size. It works in the opposite direction - it calculates the expected value between point samples by applying the reverse transform from the output image (it takes the coordinates of an output image pixel and calculates where it should lie on the original image). In any case, regular drizzle won't work, or it will produce lower SNR than the resampled integration explained above. However, Bayer drizzle will work - since the pixels are already smaller, you don't need to add artificial space between them; it's already there. The thing is, Bayer drizzle won't produce a larger image. It will produce an image with the same pixel count as regular debayering methods - only marginally sharper (if there is undersampling with the Bayer matrix in the first place).
9. Do you mind posting one of the actual FITS files that DSS refuses to stack?
10. OK, just realized that you are using a color camera. Try doing this: That will make "tighter" stars. DSS does not like it when stars are large - and that happens when you have oversampled data. Super pixel mode will make them tighter as it reduces the sampling rate (or rather applies the proper sampling rate for an OSC sensor - still high at 1.16"/px for a 4" scope, but better than 0.56"/px).
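Super pixel mode itself is simple enough to sketch (NumPy, assuming an RGGB layout - other Bayer patterns need the offsets adjusted):

```python
import numpy as np

def super_pixel_debayer(raw):
    """Super-pixel debayer of an RGGB frame (layout assumed).

    Each 2x2 Bayer cell collapses into one RGB pixel: R and B are
    taken directly, the two greens are averaged. The output has half
    the linear resolution, i.e. double the "/px sampling rate -
    which is the native rate of an OSC sensor anyway.
    """
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])

raw = np.array([[10, 20],
                [30, 40]], dtype=np.float32)
rgb = super_pixel_debayer(raw)
```

Because no interpolation happens, stars end up roughly half the diameter in pixels - which is exactly what helps DSS find them.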
11. Do you get any info back from DSS as to why the subs were rejected? It could complain that not enough stars were found for registration, but it can also complain about other things - I know it complains when calibration frames are mismatched and similar.
12. If you want to use Bayer drizzle, you should not debayer your subs first. Not sure what your workflow is, but Bayer drizzle requires calibrated, undebayered mono raw subs (from an OSC camera, obviously).
13. I like to examine images in full - that involves right-clicking and opening the image in a new tab in my Firefox browser. There you can see the actual title of the image and zoom to 100%, or 1:1. The title of this image is NGC4565_drizzle_integration_crop_DBE_curves_PMCC_crop.jpg. That kind of suggests what was done to it - among other things, drizzle was applied. Did you by any chance do Bayer drizzle - as that is something that would work? I'm not sure if PI has that algorithm. It uses drizzle instead of interpolation to debayer the image, and since it won't try to reconstruct detail smaller than a single pixel - just fill in missing data - it actually works.
14. You don't happen to have a comparison? I wonder because I maintain that drizzling does not improve anything - it just hurts people's data / end result.
15. Nice capture. I see that you drizzled, according to the title of the image. Why is that?
16. Not sure if you can do what you imagined on such a budget. You could, however, do both on a slightly bigger budget. Purchase a planetary camera for ~ £200 - like this one: https://www.firstlightoptics.com/zwo-cameras/zwo-asi224mc-usb-3-colour-camera.html which is, btw, probably one of the best planetary cameras available - it has ~ 0.8e read noise, 75% QE and very fast frame rates - over 200 fps. Then look for a second-hand Canon 450D or similar for deep sky. You can use the ASI224 above with a small finder scope as a guide camera - so you can have guiding with minimal investment (you probably have a finder - you only need an adapter to mount the ASI224 on it).
17. I would say comparable to a modded DSLR? The uncooled ASI294 is £718, which is OK for planetary, but I would seriously recommend you go for the cooled model for a dual role. The cooled ASI294MC-Pro is £1000.
18. I don't think that is particularly true - the last bit. Let's take the ASI294MC-Pro. It's not a mono camera + filters, but many people are rather happy doing DSO AP with this chip - it certainly beats a DSLR. It is fairly large as well - 23.2mm diagonal. A good camera for DSO imaging. Let's see how good it is for planetary: read noise is very decent at 300-350 gain - about 1.2-1.3e. QE is to be determined, but I think it is about 75-80%. Frame rates are not stellar, but fairly decent - at 320x240 ROI, you can almost go full speed at 6ms exposure.
19. There are actually a couple of them that would be good in a dual role; I'll list them in descending order: ASI294, ASI533, ASI183 (color model, or even mono, but I prefer the color model for planetary imaging). For DSO astrophotography you want a larger sensor and a cooled model (or rather a set-point temperature model). Good QE and low read noise are shared requirements for both types of imaging. Pixel size is not that important - you'll increase focal length with barlows for planetary, and you should match pixel size primarily for DSO AP. For planetary you want fast download rates and ROI functionality.
20. It is partly integration time, but also something else - not often mentioned, and spoken of even less often: sharpness, or rather potential sharpness / resolution. If you look at images made with 140-150mm of aperture, including the excellent example just posted, you'll notice that none of them really goes lower than 1"/px. On the other hand, the scopes are capable of much more, and as planetary imagers know - if you have good SNR you can get sharper results than can be seen with the naked eye in the best of seeing conditions. How come? There are tools we can use to reverse the impact of seeing and aperture (up to a limit) - and it depends on SNR just how much "frequency restoration" (a fancy name for a special kind of sharpening that reverses the impact of aperture and seeing) we can do. Also, larger scopes have less aperture impact - which compounds with atmospheric influence (in very convoluted ways). A large scope will provide the needed SNR in the same imaging time as a smaller scope, and it will give you the potential for a sharper image, both due to aperture and due to sharpening (if properly done).
21. Could be that I'm wrong: And the pictures show the same - USB 3.0 is color-coded blue, and only the right mount shows blue ports.
22. Well - for all those who don't like guide scopes and prefer an OAG solution, there is now a new and improved version - still featuring USB 2.0 connections.
23. Better yet, in PI: integer resample, x2, average.
24. Well, this gave me quite a headache. Not sure if I made an improvement over your original processing - maybe a bit of color shows now. Lack of calibration was quite a problem - here is what the backgrounds look like in the individual channels: As you see, there is a pattern evident, and I'm guessing it is bias - so dark calibration should remove it. I did some "trickery" and flattened the background. However, the red channel is too weak and very noisy, and it shows. Here is the color composition, stretched: I used the green channel as luminance (I like the LRGB workflow better, as it gives me more control over denoising and stretching), so here is the luminance channel: Here I tried to minimize background noise and still show the galaxy. Together - the above color and this luminance combined: Both galaxy and stars show some color, but there is just too much noise in red, so it dominates the background. Like I said, I'm not sure if the above is any improvement on your original processing. In any case, if you want to have a go at the cleaned-up and binned data, here it is: m51-c_00.fits m51-c_01.fits m51-c_02.fits