
wimvb (Members) · 8,949 posts · 7 days won
Everything posted by wimvb

  1. At about this point in the discussion, I'd recommend you get a copy of "The Evolution of Physics", by Leopold Infeld and Albert Einstein. A highly readable book. Paper copies are still available, but it's also available online. https://archive.org/details/evolutionofphysi033254mbp/mode/2up
  2. Absolutely not. Each pixel collects light from a larger section of the sky when you use a reducer. A focal reducer increases the pixel scale (arcseconds per pixel) by reducing the focal length, as the name says. There's no magic bullet here; the only way to get a larger aperture is to buy it.
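The relationship above follows from the standard pixel-scale formula, scale ≈ 206.265 × pixel size (µm) / focal length (mm). A small sketch, with illustrative numbers only (an assumed 600 mm focal length, 3.76 µm pixels, and a hypothetical 0.8x reducer; none of these figures come from the thread):

```python
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcseconds per pixel (206.265 = arcsec per radian / 1000)."""
    return 206.265 * pixel_um / focal_mm

# Hypothetical setup: 600 mm scope, 3.76 um pixels, 0.8x focal reducer.
native = pixel_scale(3.76, 600)          # no reducer
reduced = pixel_scale(3.76, 600 * 0.8)   # reducer shortens the focal length

print(f"native:  {native:.2f} arcsec/px")
print(f"reduced: {reduced:.2f} arcsec/px")  # larger value = coarser sampling
```

The reducer makes the number of arcseconds per pixel larger, i.e. each pixel sees more sky, which is exactly why it brightens the image without adding aperture.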
  3. Perhaps. But at the same time, the comparison isn't completely off. Both instruments allow you to create great sound, but the larger one gives you more control over every aspect of the process. This is daunting at first, and the learning phase will be longer. But once you learn to use all the controls, you can achieve great results. One valid point against pixinsight used to be the lack of documentation. But imo, this point has lost its validity. I think you should try again. 🙂
  4. Just take one step at a time. Remember: the primary can't be off by much because of the way it sits in its cell. If in doubt, take images of how it looks and post here. Good luck.
  5. Thanks for the video link. The MN190 differs from the Comet Hunter in how the secondary mirror is kept in place. There are no hex screws to hold the retainer ring. I also would never put the scope vertical during collimation. It's all too easy for the secondary to come loose and fall onto the primary. If you search for MN190 collimation here on SGL, you'll come across one such incident.
  6. The MN190 differs from standard Newtonians in that the distance between the corrector plate and the primary mirror is critical and should not be changed. That's why the primary mirror doesn't have springs, only rubber rings between it and the cell. Also, you shouldn't move the secondary mirror up or down the tube.
     As a first step, make sure the secondary is rotated correctly. There is a small circle on the secondary mirror that should be centered under the focuser draw tube. With my scope it's slightly off along the axis of the tube; this doesn't affect star shape, but it increases vignetting.
     Second, align the secondary. Don't touch the central screw which holds the secondary in place. You should only need to adjust two of the three screws that tilt the secondary.
     Finally, adjust the primary mirror. Again, only two of the three adjustment screws need to be used. Because of the lack of springs, you can't move the primary much, so make small adjustments.
     As for tools, I use a barlowed laser for primary adjustment and a cheshire for secondary adjustment. Finish with a star test. Hope this helps.
  7. In the end, deconvolution didn't make much difference. Btw, here's your star profile in the unstretched image. It's a bit wider and softer than what I'm used to. It looks oversampled, but with a SW 80ED and 5.4 um pixels it shouldn't be this soft, imo. I always thought that RA oscillation in PHD was a measure of seeing, but I never really looked further into it.
  8. It's always a relief when things get a simple explanation and can easily be resolved. Good luck with your gear.
  9. After I posted my version, I went back to the linear stage and applied deconvolution. That improved the stars and added a little sharpness to some of the finer details. I haven't finished processing the image yet, as it was getting too late. I will post it late this evening.
  10. I found the stars in this image definitely not undersampled. In fact, they had a softer profile than what I usually have, almost as if focus was slightly off. But this can also be due to bad seeing and large guiding rms.
  11. My take on your data. Processed in PixInsight:
      • Dynamic crop to remove most of the vignetting and stacking artefacts.
      • DBE to flatten the background. I used Sara Wager's image as a reference.
      • 5 x histogram transformation with the gray pointer at 0.25. Instead of doing one bold stretch, I use several milder stretches; this gives me better control. I usually try to get the peak of the histogram past 0.1 (26 in PS), bringing in the black point when needed but avoiding clipping at all cost.
      • Multiscale Median Transform with a star mask applied to keep the stars from bloating. I used noise reduction on layers 1-3, which reduces noise on structures up to 4 pixels large. On larger structures I used a bias, which increases contrast. Like the local contrast enhancement that Olly wrote about, this darkens the darker areas and brightens the brightest areas, so I also added a bias to the residual layer (the whole image), brightening it a little. This way I avoided clipping the data.
      • Star reduction.
      • Noise reduction.
      • Curves transformation.
      The last image is the state just before local contrast enhancement with MMT (only stretched with histogram transformation; levels in PS).
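The "several milder stretches" step can be sketched with the midtones transfer function that PixInsight's HistogramTransformation is built on, MTF(m, x) = ((m − 1)·x) / ((2m − 1)·x − m), where m is the gray-point (m = 0.5 is the identity, m < 0.5 brightens). A minimal sketch, not the actual PixInsight process:

```python
import numpy as np

def mtf(m: float, x: np.ndarray) -> np.ndarray:
    """Midtones transfer function: maps [0, 1] -> [0, 1], fixing 0 and 1.
    m < 0.5 brightens the midtones, m = 0.5 is the identity."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Stand-in for normalized linear pixel data.
x = np.linspace(0.0, 1.0, 11)

# Five mild stretches with the gray pointer at 0.25, as in the post.
stretched = x.copy()
for _ in range(5):
    stretched = mtf(0.25, stretched)
```

Each pass lifts the faint end while leaving black and white points fixed, which is why repeated gentle stretches give finer control than one aggressive one.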
  12. Dark corners in your image. I'd say your flats are undercorrecting.
  13. Regarding local contrast, here's the same target captured and processed by Sara Wager: https://www.swagastro.com/ngc7822.html Btw, @Rustang, to get the most out of your data, you should apply flats.
  14. In PixInsight, the image opens fine with normal stars.
  15. I agree with vlaiv. I just downloaded the image and opened it in the Windows image viewer (which doesn't stretch). Some star cores are just "rings". @Rustang, how did you stack the images?
  16. That edge at the bottom looks suspiciously similar to a reflection off the oag. Pull the oag stem a little further out and see if it changes. Also, what filter size do you use? Unmounted filters may have edge reflections that can mess up flats.
  17. While local histogram equalization (lhe) increases local contrast, it unfortunately increases noise as well. I've replaced it with multiscale median transformation (mmt) in my workflow. I find that mmt gives me much more control over where I want to add contrast; either in very small scales to enhance details, or in larger scales to enhance local contrast. Either way, it doesn't increase noise nearly as much as lhe. I should probably do a write up of it to show what I mean.
  18. Brighter nights act like light pollution; they mean that you may have to decrease your exposure time from normal (but probably not by much). They also add more noise. The added light is subtracted by bringing in the black point. The associated noise is best reduced by increased integration time, or careful noise reduction.
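The reason longer integration beats the added noise is the usual shot-noise argument: signal grows linearly with total integration time while the noise grows as its square root, so SNR improves as the square root of total time. A tiny sketch of that scaling (the numbers are illustrative, not from the thread):

```python
import math

def snr_gain(t_new: float, t_old: float) -> float:
    """Relative SNR improvement from increasing total integration time,
    assuming shot-noise-limited data: SNR scales as sqrt(t)."""
    return math.sqrt(t_new / t_old)

# Doubling total integration time buys roughly a 41 % SNR improvement;
# quadrupling it doubles the SNR.
print(snr_gain(2.0, 1.0))
print(snr_gain(4.0, 1.0))
```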
  19. And the last image (DBE + black point) with an added S-curve, lowering the histogram peak to 0.075 and leaving the higher range unaltered. This isolates the main target a bit better, and gives a bit more drama. But again, it all depends on where you want to take the data.
  20. I loaded the jpeg into PixInsight and had a look at the histogram. The lower histogram is your image; the upper is with the black point taken in a bit. You have a large amount of unused dynamic range in your image. The black point can be brought in to 0.12 without clipping any data, which means that you throw away about 1/8 of your dynamic range. In a B/W image, that usually is not a good thing, but if you combine this image with other images to get a colour image, that may be ok. It depends on what you want the final image to look like. Here are two examples of easy things to do:
      1. just the black point adjusted to below clipping, no further stretching (S-curves or anything similar)
      2. removed a possible gradient, and adjusted the black point
      If you want to further enhance this image, you can start experimenting with local contrast enhancements or high dynamic range processing. That's all a matter of taste. If this is going to be part of a colour image, I wouldn't do that though.
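Bringing in the black point to b simply rescales the remaining range [b, 1] back to [0, 1]; with b = 0.12, the lowest ~1/8 of the range, which carried no data, is discarded. A minimal sketch of that operation (sample values are made up for illustration):

```python
import numpy as np

def set_black_point(img: np.ndarray, b: float) -> np.ndarray:
    """Rescale so that input value b maps to 0 and 1 stays at 1.
    Anything below b is clipped to black."""
    return np.clip((img - b) / (1.0 - b), 0.0, 1.0)

# Hypothetical normalized pixel values: at the old black point, mid-range, white.
sample = np.array([0.12, 0.30, 1.0])
out = set_black_point(sample, 0.12)
# 0.12 maps to 0, 1.0 stays at 1.0, 0.30 maps to about 0.2045.
print(out)
```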
  21. I edited my earlier post and inserted the configuration images for the diffraction calculations. As you can see, I exaggerated the thick spider and the distance between the double thin spiders.
  22. I'll see if I can dig up the configuration image that Maskulator used to calculate the diffraction patterns. But the thick vanes that I used were considerably thicker than 10 mm, as was the spacing between the double vanes. Addendum: I added the configuration images to my earlier post with the diffraction patterns.
  23. For you perhaps, but certainly not for me 😁. And yes, the colours are so far apart that there is no gain in resolution. Thanks for adding this, Vlaiv.
  24. As far as I know, the 294MC has four pixels under each element of the Bayer matrix. This means that the extended Bayer matrix is
      R R G G
      R R G G
      G G B B
      G G B B
      As long as the camera is used in bin 2x2, this works just fine. But in bin 1x1, there is no program that I am aware of that can properly deBayer the image files. So even if you "unbin" it, there's no way you'd get a colour image out of it.
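A small sketch of why bin 2x2 works on such a quad-Bayer layout: averaging each 2x2 block of same-coloured pixels collapses the extended mosaic back into one standard RGGB Bayer cell, which any deBayer routine understands. The 4x4 frame below is a made-up stand-in, with one value per colour just to make the layout visible:

```python
import numpy as np

# Hypothetical quad-Bayer raw frame: 1 = R, 2 = G, 3 = G, 4 = B.
quad = np.array([
    [1, 1, 2, 2],   # R R G G
    [1, 1, 2, 2],   # R R G G
    [3, 3, 4, 4],   # G G B B
    [3, 3, 4, 4],   # G G B B
], dtype=float)

def bin2x2(raw: np.ndarray) -> np.ndarray:
    """Software 2x2 binning: average each non-overlapping 2x2 block."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

binned = bin2x2(quad)
# Result is a single standard Bayer cell: [[R, G], [G, B]].
print(binned)
```

In bin 1x1 the same frame would be fed to a deBayer routine expecting R and B in adjacent columns, which is why the colours come out wrong.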