Everything posted by wimvb

  1. This is the super pixel deBayering I described earlier. It is an option during image calibration in PixInsight. The only advantages this process has are that it doubles the imaging scale and decreases file size. Only in the green channel do you get a slight increase in SNR. There is no improvement in detail, other than perceived.
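Not part of the post above, but a minimal sketch of what super pixel deBayering does to the raw data, assuming an RGGB pattern and a NumPy array of raw pixel values; the function name and the fake frame are illustrative only:

```python
import numpy as np

def superpixel_debayer(raw):
    """Collapse each 2x2 RGGB cell into one colour pixel (no interpolation).

    The output is half the width and height of the input, so the pixel
    scale in arcsec/pixel doubles and the file size shrinks.
    Assumes an RGGB pattern and even image dimensions.
    """
    r  = raw[0::2, 0::2]                 # top-left of each 2x2 cell
    g1 = raw[0::2, 1::2]                 # top-right
    g2 = raw[1::2, 0::2]                 # bottom-left
    b  = raw[1::2, 1::2]                 # bottom-right
    g  = 0.5 * (g1 + g2)                 # averaging the two G samples gives the
                                         # slight SNR gain in green only
    return np.dstack([r, g, b])          # (H/2, W/2, 3) colour image

# Example on a fake 4x4 raw frame
raw = np.arange(16, dtype=float).reshape(4, 4)
print(superpixel_debayer(raw).shape)     # (2, 2, 3)
```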
  2. A great image already, Rodd. If light pollution isn't too bad, go for the luminance. Otherwise shoot more blue and create a synthetic luminance. Good RGB filters tend to limit LP, because of a gap between the R and G filters at the Na and Hg emission lines.
  3. - crop to remove a narrow black border
     - DBE using division as correction method
     - background neutralisation
     - HSV repair script
     - cloned image to create two identical linear images, A and B
     - image A: histogram transformation in several steps, curves transformation to boost colour saturation, created a mask based on the red channel
     - image B: arcsinh stretch to retain colours in the stars, histogram transformation
     - Then I blended image B into image A using a weak star mask (brightest area in the mask kept down to 0.25). I used PixelMath for this (sketched below).
     - colour boost with the previously created mask
     - DBE to get rid of a persistent gradient
     - resample to 50%
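The blend step in the list above is the one that maps naturally onto a formula. Here is a minimal NumPy sketch of the idea, not the actual PixelMath expression that was used; the 0.25 clip comes from the post, everything else (names, shapes) is illustrative:

```python
import numpy as np

def blend_star_colours(image_a, image_b, star_mask, max_weight=0.25):
    """Blend image B (arcsinh-stretched, star colours intact) into image A.

    star_mask is a 0..1 map that is bright on stars; clipping it at
    max_weight keeps the blend weak, as described in the post above.
    """
    w = np.clip(star_mask, 0.0, max_weight)
    if image_a.ndim == 3:                      # broadcast a mono mask over RGB
        w = w[..., None]
    return image_a * (1.0 - w) + image_b * w
```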
  4. Pixels on sensors only detect the light that falls on them. They don't care about the colour of that light. The colour information in an OSC (one shot colour camera) is contained in the Bayer matrix. On top of each pixel sits a dot of dye (red, green or blue) that absorbs all the other colours. If you put another filter in front of the camera (eg a red Ha filter), virtually no light will reach the pixels that have a green or blue dye dot on top of them. Only the "red" pixels will register light. In the deBayering process, the information from the red pixels is used as colour information for the green and blue pixels. You can make B/W images with a colour camera by just ignoring any colour information. The problem is that if you don't deBayer, the Bayer pattern will show through even in a mono image. This is because the pixels have different sensitivities for the three colours. If you were to take a B/W image of a neutral gray wall, the green pixels in your colour camera would register more signal, simply because the sensor is more sensitive to green light than to red and blue. That's why a raw image from a colour camera looks pixelated. Here's an extreme crop of a raw image from a dslr that has not been deBayered. It's B/W and you can clearly see that adjacent pixels have different intensities, even though the subject had much less detail.
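A tiny simulation of the grey-wall example from the post above. The relative sensitivities are made-up numbers, only meant to show why an un-deBayered frame of a flat subject still shows a checker pattern:

```python
import numpy as np

# A uniform grey wall "seen" through an RGGB Bayer matrix. The sensitivity
# values below are illustrative, not measured: green-filtered pixels end up
# brighter than red/blue ones, so the raw frame looks pixelated even though
# the scene is featureless.
sensitivity = {"R": 0.6, "G": 1.0, "B": 0.5}

pattern = np.array([["R", "G"],
                    ["G", "B"]])
wall = np.full((4, 4), 1000.0)                        # flat, featureless signal
bayer = np.array([[sensitivity[pattern[y % 2, x % 2]]
                   for x in range(4)] for y in range(4)])
raw = wall * bayer                                    # what the sensor records
print(raw)   # adjacent pixels differ although the subject does not
```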
  5. Interesting. What was the output file format? In other words, did the camera do the deBayering?
  6. I had a go at your original data with PixInsight. The process I used enhanced reflection halos which I haven't corrected. They can be reduced with an appropriate star mask.
  7. In principle, super pixel deBayering could be done in the camera, it just needs a processor to do so. If the calibration frames are treated the same way, then it would still be possible to calibrate and stack as normal. (This is equivalent to processing (lossless) jpeg images from a dslr.) But in reality, the sensitivity in the colour channels is different, and the monochrome image you get from binning RGGB data will be weighted by the transmission characteristics of the dye that is used in the Bayer matrix, as well as the QE of the pixels. If 3x3 binning is used in an OSC camera, these colour differences may actually make it into the final monochrome image, since some binned pixels will have more green, while others will have more red or more blue, depending on the position on the sensor (see the sketch below this post). I would want to see the result before I'd be convinced that this actually works, as this kind of binning could very well lead to Moiré patterns. But maybe we're digressing from the original post. I probably should have clarified that, rather than just imply it.
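To see why 3x3 binning on an RGGB sensor mixes the channels unevenly, here is a small, self-contained count. The RGGB pattern is standard; the patch size and block positions are just for illustration:

```python
import numpy as np

# One period of the RGGB Bayer pattern, tiled over a small sensor patch.
bayer = np.array([["R", "G"],
                  ["G", "B"]])
sensor = np.tile(bayer, (3, 3))          # 6x6 patch of the colour filter array

# Count the colour filters that fall inside two different 3x3 "binned" pixels.
for y0, x0 in [(0, 0), (0, 3)]:
    block = sensor[y0:y0 + 3, x0:x0 + 3].ravel()
    counts = {c: int(np.sum(block == c)) for c in "RGB"}
    print((y0, x0), counts)
# (0, 0) {'R': 4, 'G': 4, 'B': 1}
# (0, 3) {'R': 2, 'G': 5, 'B': 2}
```

The two binned pixels weight the colours differently, which is the position-dependent imbalance the post describes.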
  8. You can use this image as L in an LRGB process. Just stretch the original image using arcsinh stretch and boost colour. Then blur (mlt - 2 layers, or convolution) and apply LRGB combination.
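This is not PixInsight's ArcsinhStretch implementation, just the general arcsinh-stretch idea mentioned in the post above, sketched in NumPy; the luminance weights and the stretch factor are made up:

```python
import numpy as np

def arcsinh_stretch(rgb, stretch=100.0):
    """Arcsinh stretch that scales all three channels by the same factor,
    so colour ratios (e.g. in star cores) are preserved.

    rgb: float array in [0, 1], shape (H, W, 3). The weights and stretch
    factor below are illustrative, not PixInsight's defaults.
    """
    lum = rgb @ np.array([0.2, 0.6, 0.2])             # simple luminance proxy
    eps = 1e-12
    gain = np.arcsinh(stretch * lum) / (np.arcsinh(stretch) * (lum + eps))
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```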
  9. Actually... You can't bin a colour camera in hardware due to the Bayer matrix. But you can do it in software during the calibration process. This is called super pixel deBayering, and the colour information of one 2x2 pixel group (RGGB) is used to create one colour pixel without interpolation. The only advantage that I know of this technique is to lower the resolution of an otherwise oversampled image, giving a more realistic pixel scale and smaller files to work with.
  10. The cc to sensor distance should be 55 mm and is achieved by the t-ring that attaches it to the dslr. For most dslrs, the sensor to flange distance is 45 mm, and the thickness of the t-ring is 10 mm, giving the correct distance. This distance is not as critical as eg a field flattener's. Unless you have more rings/filters in your imaging train, you should be fine. The colour cast may be due to light pollution, exaggerated by the image conversion software. My favourite raw conversion software is RawTherapee, which is free and available for Windows, Mac and Linux. On Sky-Watcher telescopes with the standard focuser, locking focus can lead to focus shift. One way around this is to use a motorised focuser, where the holding torque of the motor will keep the draw tube in place. Hope this helps.
  11. The diffraction spikes are still single. They will split if focus is (too far) off, so focus is still decent. Maybe it's just poor seeing or guiding? Nice image anyway.
  12. Very nice indeed. I agree with @tooth_dr, you could turn noise reduction down a notch. You also seem to have lost sharpness in the very heart of the heart. But all in all, a great image. Just for fun, have you tried the kappa-sigma noise reduction in MLT in PixInsight? It's very easy and gives very good results when used on the linear image. Use it with either a mask or the built in linear mask of MLT.
  13. Imo there is no point of diminishing returns. The more time you spend on a target, the better results you should expect in terms of signal to noise. If 4 times as much integration time gives you double the signal to noise ratio, it will always do that, no matter what SNR you start with. That's as far as theory goes (see the note below this post). I think that in the end it's the astrophotographer's patience and perseverance that determine the point of "this is it for me". Despite Jerry Lodriguss' article, the time people spend on a target does not depend much on the Bortle value of their skies. I could be wrong of course. In other words, spend as much time on a target as you need to get to a result you're pleased with, and forget about theory.
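The square-root scaling behind the "4 times as much time gives double the SNR" statement, in a few lines; this assumes the shot-noise-limited case only, and the numbers are arbitrary:

```python
import math

# In the shot-noise-limited case, signal grows linearly with integration
# time t while noise grows as sqrt(t), so SNR scales as sqrt(t).
def snr(t, rate=1.0):
    signal = rate * t
    noise = math.sqrt(rate * t)
    return signal / noise            # = sqrt(rate * t)

print(snr(4.0) / snr(1.0))           # 2.0: four times the time, twice the SNR
```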
  14. That's getting along very nicely. Will you add another 12 hrs, or continue on your Sadr mosaic? Add colour to that perhaps?
  15. This article may be of interest. https://www.skyandtelescope.com/astronomy-blogs/astrophotography-benefits-dark-skies/
  16. I was afraid of that. I found out the hard way that my eq3 needs the chinese gunk (aka lubrication, aka grease) to operate smoothly. The ASIAir also supports (some?) dslrs, but operation may not always be smooth, afaik. You can run just guiding from a Raspberry Pi. Both PHD and Lin-guider are available. PHD seems to behave slightly better if you install INDI (indilib.org). INDI is an open astronomy tools protocol that runs on the Linux OS. It works with several client software packages, such as Ekos/KStars, Cartes du Ciel/CCDciel or PixInsight. INDI and Ekos/KStars can run on a single Raspberry Pi, together with guiding software. You can use a remote desktop program to access the Pi, as I wrote before. You can also run the client program remotely on a Windows computer. Commercial solutions, all based on the Raspberry Pi, are StellarMate, ASIAir, and AtikBase.
  17. You can control your entire setup from an RPi, and control that with Remote Desktop from your house. That's how I do it. Cost is an RPi, housing, sd card and eqdir cable. Plus investment in time to set it up. PHD also runs on a Raspberry Pi.
  18. The eq35 actually has a slightly higher load capacity than the eq5, but depending on its internals, its tracking may be poorer. I have tuned my own eq3 pro and a heq5 mount. The heq5 has ball bearings on its axes, where the eq3 doesn't. I don't know about the eq5, but since the eq35 is based on the eq3, it may lack those bearings as well, which will affect tracking. The price difference isn't that large, so it would be wise to investigate these differences further.
  19. Not for long, I'm afraid. It all depends on your budget, but I've said it before, and I maintain that the eq3 (pro) is a very nice mount for a light weight, short focal length setup. Since your imaging scale (how much of the night sky each pixel covers, measured in arcseconds/pixel) is probably quite large with your intended scope, it may make sense to go for the smaller mount and add a guiding package (eg finder guider + zwo asi120) now, with the knowledge that you won't be able to chase the smaller galaxies. For that, you will later need to replace your entire setup. But you will get better results on wide field targets sooner. And you can keep the light weight setup as a grab and go solution to take with you to star parties and on vacation. With guiding also comes the need for a computer next to the mount.
  20. That latest version is a lot better as a b&w image, and with your signature border. 👍 With the rgb you may have to tone it down, but try to keep the cloudiness in the right hand side panel.
  21. My guess is that it's the onset of microlens diffraction. If you image brighter stars it will show more clearly.
  22. Does the camera have a mechanical shutter that can cause the banding?
  23. The original guide graph shows a much stronger dec drift than the backlash measured by the guiding assistant. Nevertheless, first order of business would be to reduce that backlash.
  24. "That must have affected the Oiii data a lot." The Oiii data probably, but the Sii is deep red, and shouldn't have been affected any more than the Ha. Both Oiii and Sii have an average pixel value of about 0.2 (scale from 0 to 1), while the unstretched Ha was much lower. What do the single Oiii and Sii subs look like, compared to their masters?
  25. Here's my attempt using PixInsight. The Oiii and Sii seemed already stretched.
      - alignment of the channels
      - Ha stretch (applied screen transfer function as permanent stretch)
      - SHO combination
      - SCNR green
      - saturation boost
      - noise reduction
      - star reduction
      - resample