Everything posted by wimvb

  1. With a "classical" narrow band filter on an OSC camera, you only use a quarter of the pixels during Ha capture, and the other 3/4 during OIII capture. With a dual band filter you always use all the pixels. If you image 1 hr of Ha and 1 hr of OIII with an OSC camera, you need 2 hrs of imaging time for 1 hr of data. With a dual band filter, you get 2 hrs of data in those 2 hours. (A quick sanity check of this arithmetic is sketched below.)
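A minimal Python sketch of that pixel-utilisation arithmetic, assuming an RGGB Bayer pattern where Ha registers only on the red pixels (1/4 of the sensor) and OIII on the green and blue pixels (the other 3/4). The fractions and session lengths are just the example numbers from the post, not measured values.

```python
# Pixel-hours of usable signal in a 2 hour OSC session.
# Assumption: RGGB Bayer, Ha -> red pixels (1/4), OIII -> green+blue (3/4).
hours_ha, hours_oiii = 1.0, 1.0

# Classical NB filters, swapped one at a time: a pixel only collects
# its own emission line while the matching filter is in place.
classical = hours_ha * 0.25 + hours_oiii * 0.75    # = 1.0 pixel-hour

# Dual band filter: every pixel collects one of the two lines
# for the whole session.
dual_band = (hours_ha + hours_oiii) * 1.0          # = 2.0 pixel-hours

print(f"classical: {classical:.1f} h of data, dual band: {dual_band:.1f} h")
```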
  2. I think that it very much depends on what you wish to achieve. With the European weather being the way it is, you really have to plan photo sessions. That extra night when you had thought to also collect OIII for an object may never materialise during a season. OSC with dual band may just be simpler than mono and a filter wheel. Only you can make that decision. I find that the development we have seen in astro cameras forces us to re-evaluate old truths. What was an obvious choice yesteryear is no longer so. Just a few years ago, coming from a DSLR, a mono camera seemed the obvious way to go for me. But my next camera will be an OSC.
  3. Mono with NB filters seems a straightforward choice, but don't disregard OSC with a dual band filter without at least looking into it. OSC cameras are sensitive enough to allow thinking outside the box; I've seen good images that were taken with OSC and dual band (Ha and OIII) filters. Direct comparison is always difficult because of local light pollution conditions.
  4. FYI, I just got my guiding RMS down from just below 1” to 0.5” (and at times down to 0.44”) by very carefully adjusting Dec backlash. Previously I had improved RA by adjusting the belt tension. Just saying that a lot can be done to improve tracking without the need to disassemble the mount. But if you do need to take your mount apart, it's good to have a guide like this.
  5. If this is a diffraction pattern, then larger pixels mean closer spacing of the light dots, and smaller pixels mean wider spacing. (The grating relation behind this is sketched below.)
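For what it's worth, here is the grating relation behind that claim as a rough sketch: if the sensor's pixel grid acts as a reflection grating with pitch d equal to the pixel size, the m-th dot sits at sin(θ) = mλ/d, so a larger pitch puts the dots at smaller angles, i.e. closer together. The wavelength and pixel sizes below are illustrative example values only.

```python
import numpy as np

# First-order diffraction angle for a grating of pitch d (= pixel size):
# sin(theta) = lambda / d. Larger pixels -> smaller angle -> closer dots.
wavelength_nm = 650.0                        # a typical red laser (assumed)
for pixel_um in (2.4, 4.63):                 # two example pixel pitches
    d_nm = pixel_um * 1000.0
    theta1 = np.degrees(np.arcsin(wavelength_nm / d_nm))
    print(f"{pixel_um} um pixels: first dot at {theta1:.1f} deg")
```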
  6. Whatever is fixed to the front cover window/plate will not tilt when you adjust the sensor. Ideally only the sensor will tilt, so it will be the only reflection changing. If everything is perfectly aligned and parallel, all reflections will end up on top of each other, and into the hole of the collimator. It should work in theory. The question is: does practice agree with my theory? If the sensor cover slip is not parallel, it will still move together with the sensor, because it is fixed to it. You have the same problem with figuring out which reflection is which, independent of the angle of incidence.
  7. I use the INDI platform (similar to ASCOM but for Linux) and Ekos/KStars for image capture, and PixInsight for stacking and processing. Ekos/KStars runs on a Raspberry Pi and is free software. It has routines for pointing (an align module with plate solving), focus, image capture, and guiding (but it can also use PHD), as well as a scheduler for automatic observatory control. https://www.indilib.org
  8. I think that Göran meant the S/N up, or the noise down. It seems to me that you have a hint of galactic dust in the background. What was your total integration time?
  9. Autofocus routines in astrophotography use the width of a point source (a star) to set focus; they try to minimise this width. If you want to use autofocus during the day, you can perhaps do so on a reflection of the sun, or on a very small light source at a considerable distance from the telescope. I've never heard of anyone trying this, though. (A toy sketch of the minimise-the-width idea follows below.)
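To illustrate the minimise-the-width idea, here is a toy Python sketch: simulate a star whose width grows away from best focus, measure it at each focuser position (here with a half-flux radius, one common width metric), and pick the position with the smallest measurement. Everything here is simulated and illustrative; a real routine would pull frames from the camera at each focuser step.

```python
import numpy as np

def star_image(fwhm, size=41, flux=10000.0, noise=5.0):
    """Simulate a Gaussian star of the given FWHM plus read noise."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size] - size // 2
    img = flux * np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return img + np.random.normal(0.0, noise, img.shape)

def hfr(img):
    """Half-flux radius: flux-weighted mean distance from the centroid."""
    img = np.clip(img, 0.0, None)
    y, x = np.indices(img.shape)
    total = img.sum()
    cy, cx = (img * y).sum() / total, (img * x).sum() / total
    return (img * np.hypot(y - cy, x - cx)).sum() / total

# Sweep the (simulated) focuser: star width grows away from best focus.
positions = np.arange(1000, 2001, 100)
true_best = 1500
widths = [hfr(star_image(2.0 + 0.01 * abs(p - true_best))) for p in positions]
print("best focus near position", positions[int(np.argmin(widths))])
```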
  10. I wonder if a well-collimated laser collimator would work to adjust the sensor tilt. It might need a considerable extension to work. But the idea is the same as in ordinary collimation: get a reflected light beam at the correct location, and have it stay there when rotating the optical elements.
  11. Where can I get shares in IKEA?
  12. You could try combining R and Ha in the linear stage. Just linear fit Ha to R. (A rough stand-in for that fit is sketched below.)
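If you're curious what that fit amounts to, here is a simplified numpy stand-in for PixInsight's LinearFit: find a and b so that a·Ha + b best matches R in a least-squares sense, then rescale Ha. PI's implementation is more robust than this; the arrays below are synthetic placeholders.

```python
import numpy as np

def linear_fit(reference, target):
    """Return target rescaled as a*target + b, least-squares matched
    to reference (the core idea of a linear fit between channels)."""
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a * target + b

# Synthetic example: a mis-scaled Ha channel fitted to R.
rng = np.random.default_rng(0)
R = rng.random((100, 100))
Ha = 0.3 * R + 0.05 + rng.normal(0.0, 0.01, R.shape)
Ha_fitted = linear_fit(R, Ha)        # now on the same scale as R
```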
  13. Thanks, Olly. Once you know the principle behind a process, the funny names conjured up by the Spanish Inquisition also start to make sense, don’t they? 😉
  14. Here's a very quick try:
      • Set equal weights to R, G and B in RGBWorkingSpace
      • Extracted R, G, and B from M31_LRGB
      • LinearFit of HA to R
      • Cloned HA and took away layers 1 - 7 in MLT, so that only large scale structure was left
      • Did the same with R (removed layers 1 - 7)
      • PixelMath: R + (HA - HA_clone) (this combines the large scale structures of R with the small scale structures of Ha)
      • Named the new image HaR
      • ChannelCombination of HaR with the original G and B
      In retrospect, I should have masked off M32, M110 and the stars, because they got a blue cast. (A rough numpy version of the MLT + PixelMath steps is sketched below.)
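As a very rough numpy stand-in for the MLT + PixelMath steps above, assuming R and HA are registered 2-D float arrays: a heavy Gaussian blur plays the role of "remove layers 1 - 7, keep only large scale structure". The blur radius and the synthetic arrays are illustrative only, not PixInsight's actual wavelet machinery.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def large_scale(img, sigma=64):
    """Keep only large scale structure, as removing small MLT layers does."""
    return gaussian_filter(img, sigma)

rng = np.random.default_rng(1)
R = rng.random((256, 256))           # placeholder for the extracted R
HA = rng.random((256, 256))          # placeholder for the fitted Ha

HA_clone = large_scale(HA)           # "HA with layers 1 - 7 removed"
HaR = R + (HA - HA_clone)            # the PixelMath expression above
```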
  15. Only to get a clear view of anything that you wish wouldn't be there. Never in the calibration process. The same goes for stretching flats. You only do it to evaluate them.
  16. Just try it. Take a colour image and apply MMT to chrominance in one preview, and color saturation in another. Look at what happens to stars vs the main target. Try with a negative bias on layer 1. In chrominance you can often use a larger bias than in luminance.
  17. The ASI183 (MC) has rather small pixels, and also a lower full well than the 533. This means that the 533 has a higher dynamic range: you either have fewer bloated stars or more faint detail in each sub, depending on which exposure time you set. The 533 also has a square sensor, so you never have to rotate from landscape to portrait. Personally, those two advantages alone would make me prefer the 533. Btw, if you go for the 183MM, you can bin the small pixels 2x2 and (theoretically) get an increase in full well and dynamic range (see the back-of-the-envelope sketch below). But then you need filters. The OSC version of this camera doesn't have that advantage.
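Back-of-the-envelope for that binning claim, under the assumption of software binning: summing 4 pixels multiplies the full well by 4, while the read noise of the 4 reads adds in quadrature, i.e. doubles, so the dynamic range roughly doubles. The electron numbers below are rough assumed values for a small-pixel CMOS sensor, not datasheet figures.

```python
# Assumed, illustrative numbers (not datasheet-exact):
full_well_e = 15000.0    # e- full well per small pixel (assumed)
read_noise_e = 1.6       # e- RMS read noise per pixel (assumed)

dr_bin1 = full_well_e / read_noise_e
# 2x2 software bin: signal capacity x4, read noise x sqrt(4) = x2.
dr_bin2 = (4 * full_well_e) / (2 * read_noise_e)

print(f"bin 1x1 DR ~ {dr_bin1:.0f}:1, bin 2x2 DR ~ {dr_bin2:.0f}:1")
```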
  18. Some time ago I mentioned that I use Multiscale Median Transform (MMT) in PixInsight to enhance local contrast and lift details. At that time I wrote that I would try to put together a write-up of my procedure. A few days ago, I mentioned MMT again, and after a reply by @ollypenrice, I wrote that I would put something together this weekend. Now the weekend is coming to a close, so I'd better keep my promise. So, here goes.

When doing image processing in PixInsight, you have the possibility to work with wavelets, or levels of detail. PI has methods to divide an image into layers of detail. (Note that layers in PI have nothing to do with layers in PS or GIMP.) Any signal variation on a single pixel scale goes into layer 1. Any signal variation that is "two pixels wide" goes into layer 2; four pixels wide into layer 3, etc. Since most of the variation on a single pixel scale is due to noise, layer 1 will almost always consist of just noise. Even layer 2 will contain a lot of noise, but also the smallest stars.

PixInsight has three processes that work with wavelet layers: HDRMultiscaleTransform, MultiscaleLinearTransform, and MultiscaleMedianTransform. The first is normally used to control contrast in very bright regions, such as the core of a galaxy. The other two are used for noise reduction. Here I will show how Multiscale Median Transform (MMT) can be used to enhance details and local contrast. (MLT can also be used to enhance local contrast, but as a side effect it can produce dark rings around highlights such as stars. MMT doesn't have this side effect, and that's why I prefer it.)

There is another process in PI that also enhances contrast (and which indirectly also works with layers): LocalHistogramEqualization (LHE). I find that this process most often increases the noise in an image, because it works on all layers up to a set limit. In MMT, you can determine which layers (which level of detail) the process should work on, so MMT allows much finer control of the contrast enhancement.

So, let's start. As an example I take a crop of one of my images, the central region of M94. This is not a particularly good image, but it will do. I have already processed it to the nonlinear stage, including deconvolution and stretching. For the sake of clarity, I will somewhat overdo the process: the aim here is not to get the best image, but to show the method. At this stage, there is detail in the core, but it is too bright and has lost contrast. I also want to lift contrast in the outer regions of the galaxy to better show the spiral structure. I will do this in two steps.

In step 1 I enhance the outer regions. There is a hint of a spiral structure, but overall this region is flat. To protect the very edge of the image and the core of the galaxy, I create a range mask that protects these regions, as well as the stars. Next I apply MMT. To lift detail, I set the bias value in the layers I wish to enhance. Layers 4 through 6 contain the signal that I want to lift. This will brighten the image, so I compensate by giving the residual (the background of the image) a small negative bias. I found the bias values, and which layers to apply them to, by testing on a preview.

For step 2 I create a new mask, which only exposes the core. Again I apply MMT, now with different settings, again determined by experimenting on a preview. This is the result.

Some final remarks:
      • When you use MMT to enhance contrast, you need to at least mask the stars. MMT will brighten stars as well as the main object. The star mask should cover the stars and any halos.
      • The first time you apply MMT to an image, even to a preview, it will calculate the wavelet decomposition of the entire image, regardless of the size of the preview. This means that MMT will appear very slow. But it only does this once; if you re-apply MMT to a preview, it executes much faster.
      • In this example I applied MMT to the luminance of the image. The image was already monochrome, so it would be applied to luminance anyway. If you have a colour image and want to enhance detail or contrast, you should set luminance as the target.
      • MMT can also be used to enhance local colour saturation. If you, for example, wish to enhance the colour of bright young stars in a galaxy, you determine which pixel scales these have, set the bias of the layers you wish to enhance, and apply MMT to the chrominance of the image. Unlike the Colour Saturation process of PI, this allows you to target small regions for colour saturation.
A toy sketch of the layer-plus-bias idea, in code, follows below.
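To make the layer-plus-bias idea concrete, here is a toy numpy/scipy sketch of an MMT-like decomposition: median filters of growing size split the image into detail layers, and a positive bias lifts a layer before the layers are summed back. This is only the concept, not PixInsight's actual algorithm; the mask step is omitted, and all names and values are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def mmt_layers(img, n_layers=6):
    """Split img into detail layers plus a large-scale residual,
    using median filters of growing size (an MMT-like decomposition)."""
    layers, smooth = [], img
    for k in range(n_layers):
        smoother = median_filter(smooth, size=2 ** (k + 1) + 1)
        layers.append(smooth - smoother)    # detail at scale ~2**k px
        smooth = smoother
    return layers, smooth                   # smooth = the residual

def mmt_enhance(img, bias):
    """Recombine the layers, scaling layer k by (1 + bias[k])."""
    layers, residual = mmt_layers(img, n_layers=len(bias))
    return residual + sum((1 + b) * l for b, l in zip(bias, layers))

img = np.random.default_rng(2).random((128, 128))
# Lift layers 4 - 6; zero bias leaves a layer unchanged, so the noisy
# small-scale layers 1 - 3 are left alone:
enhanced = mmt_enhance(img, bias=[0, 0, 0, 0.3, 0.3, 0.2])
```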
  19. +1 for the binning. Unless you really need the smaller pixels, leave/set the camera to bin 2x2 with 4.63 um pixels. There is no real benefit to using bin 1, and the large files eat hard disk space. I have the ZWO version of this camera, and at bin 2x2 the files are manageable on a five year old laptop. The 90+ MB files at bin 1x1 take a lot longer to process.
  20. Any further comment on the term would be more appropriate in the "bad wordplay jokes" thread on this forum. I'll refrain.
  21. Ah, now I know what a photon hoover is. I’d better not use this phrase on this family friendly forum again. Sorry f/2 owners if I’ve offended you in the past. I didn’t know. (😉😉)
  22. This is how I fasten my SBC and USB hub to the aluminium plate. The USB hub is needed with my Rock64 SBC, but not with the Raspberry Pi.
  23. I have my SBC and a power distribution box on a 5 mm thick aluminium plate (10 cm wide) on top of my scope (an MN190, with a separate plate for the TS70EDQ). The plate stiffens the construction and serves as a platform for control hardware. I don't know if the Pegasus has anything to attach it with.