Everything posted by ollypenrice

  1. Before using a flat I always look at it. We have a pretty good idea of what flats should look like and, occasionally and for whatever reason, one doesn't look like that. Be very suspicious of such flats because they are almost certainly wrong. A credible flat shows some vignetting, a lot or a little depending on the scope/sensor combination: a roughly circular brightening towards the middle with darker corners. You can also expect a few dust bunnies, which look like darker doughnut shapes. Remember that an unstretched flat will look very flat indeed. That's normal. Even in an extreme case of vignetting, the 'dark' corners will have 75% of the illumination of the 'bright' centre, which is not a glaring difference. When we stretch a flat, though, the difference does become glaring, and that's why we need flats: unlike daytime photographers, we stretch our data, and stretching an image makes its uneven illumination just as glaring as stretching the flat does. Make it a habit to look at your flats and get a feel for what looks credible. Olly
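     The point about stretching making a modest fall-off glaring can be sketched numerically. This is a minimal illustration, assuming a synthetic 25% vignette and a generic asinh stretch as a stand-in for any screen stretch; it is not taken from any particular capture or processing package:

     ```python
     import numpy as np

     # Synthetic sky background with extreme vignetting: the far corners
     # receive 75% of the centre illumination (values normalised, 0-1).
     h, w = 200, 200
     y, x = np.mgrid[0:h, 0:w]
     r = np.hypot(y - h / 2, x - w / 2) / np.hypot(h / 2, w / 2)
     vignette = 1.0 - 0.25 * r**2          # centre 1.0, far corner 0.75

     sky = 0.02 * vignette                 # a faint, unstretched sky level

     # Before stretching, the corner-to-centre difference is tiny.
     linear_diff = sky.max() - sky.min()   # ~0.005 of full scale: invisible

     # A generic asinh stretch lifts the faint sky hard, and the same
     # fall-off becomes a much larger step in on-screen brightness.
     stretch = np.arcsinh(500 * sky) / np.arcsinh(500)
     stretched_diff = stretch.max() - stretch.min()

     print(linear_diff, stretched_diff)    # stretched step is several times larger
     ```

     The same arithmetic applies to the flat itself once it is stretched for inspection, which is why an uncorrected gradient that looks harmless in the linear data ruins the stretched image.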
  2. You're right. There is something nice about going deep in RGB rather than taking the Ha shortcut to deep nebulosity. The dusty stuff in the scene gets under-exposed and overwhelmed by the emission. Olly
  3. Talking about extremes, a friend about 20 miles away uses a 31mm Nagler for planetary observing. I have to stand on a box to reach that eyepiece... Olly
  4. I've long taken the view that three EPs will do for one scope. Give me three good ones, though. In my long FL scope (14 inch SCT) I use only two, though I'd like a 40mm as well, more for the exit pupil than the field of view. The other thing is that, while you are finessing over which of your collection to try next, you are not doing what you should be doing, which is concentrating on the object. Olly
  5. I think that the nice thing about BlurX is that it sharpens mostly on the finest scales and still allows the imager to give a soft-touch overall look. Unsharp masking inevitably introduces the hard look in my experience. Olly
  6. Indeed, but in my case it was just a way of boosting the single sub S/N ratio for my experiments. I'm just impatient to get hold of a stack! Olly
  7. Very deep indeed on the VdBs. We don't often see this one, the Heart and Soul grabbing all the attention, but it's a nice region. The Stock cluster is lovely. Olly
  8. Lovely atmospheric rendition. Good core, too. It's a very bright one and hard to handle. Olly
  9. This is a great test. I use it to demonstrate the purpose of flats when helping beginners.
  10. Great image but one small point: round stars do not indicate accurate tracking. They are consistent with accurate tracking but so is having equivalent guiding errors on both axes, even quite significant errors. Olly
  11. The L extraction is a good idea. I tried something similar with partial success. (Placed a BlurX image on top of the standard in blend mode luminance.) Olly
  12. I used to be wedded to mono and still would be if talking about CCD cameras. Mono is not slower, it is faster, because the luminance run captures all colours on all pixels, which an OSC cannot do. Mono also captures NB data on all pixels, which OSC cannot do. However, CMOS OSC cameras are an awful lot more convincing than CCD OSC. I'm currently using two 2600 OSC cameras with great enjoyment. The dual or tri-band filters have also narrowed the gap considerably, though we don't have any here as yet. I think it's a much harder decision than it used to be, or maybe it's an easier one because the mono advantage is now much reduced and both options are great. Olly
  13. BLUR XTERMINATOR QUESTION. Has anyone tried BlurXT deconvolution on Samyang data? I only have a single sub so far and it doesn't produce a workable improvement, though it seems to be trying to do so. I think it might work on a stack. If you have a stack but don't have BlurXT (which needs Pixinsight as well) and would like to try it, Dropbox me the linear stack and I'll run it and send it back. PM me. The holy grail would be that it would fix corners with the lens wide open. While we're on BlurXT, there's a PI script available here. https://www.skypixels.at/pixinsight_scripts.html It installs in Script - Render and allows you to derive a PSF for your image and insert it manually into BlurXT instead of using the auto PSF. Some say this improves the result. Olly
  14. In case you haven't watched one of the videos in this thread, there's a PI script available here: https://www.skypixels.at/pixinsight_scripts.html Once installed (it appears in Script - Render - PSFImage) it will give you a PSF for your particular image, allowing you to override the auto PSF on BlurXT. Some commentators are suggesting that it outperforms the present auto PSF. I think maybe it did on the image I'm exploring at the moment, our single sub Samyang first light. My feeling is that you can't really expect it to work on a single sub, though it worked a bit better when the sub had been noise reduced. Olly
  15. No it doesn't! Oh all right, maybe it does. I can offer these. They are heavy crops of what is, for a TEC 140 (1 metre FL) at 0.9"PP, a small target. The first is the original, sharpened in Ps using iterations of unsharp masking on different layers. The second was sharpened first with BlurXT and then with a single iteration of unsharp mask. It has also been through StarXterminator for star reduction, so bear that in mind. My feeling is that the second has more delicacy and finesse. In the first, the main dust lane has more impact but that's because it has been artificially widened and darkened by USM. The BXT image has the dust lane in better agreement with R Jay GaBany's large telescope rendition here. https://www.cosmotography.com/images/wide_ngc891_2010.html My image had 7 hours luminance and 2 hours per colour, so 13 hours. That's a few hours shy of what I used to aim for doing galaxies in the TEC. Olly GGC891original.tif
  16. An option, but that's a very significant loss in light grasp even though 0.4 doesn't sound like a lot. 4 hours becomes 5.76 hours just in terms of photon count. Do you stop down with the diaphragm or with an aperture mask? Olly
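     The arithmetic above can be checked directly: for a fixed focal length, light grasp goes as 1/f_ratio², so exposure time scales with the square of the focal ratio. This sketch assumes an f/2 lens stopped down by 0.4 to f/2.4; the function name is mine, not from any package:

     ```python
     def equivalent_exposure(hours: float, f_old: float, f_new: float) -> float:
         """Hours needed at f_new to collect the same photon count
         as `hours` at f_old (same focal length assumed)."""
         return hours * (f_new / f_old) ** 2

     # Stopping an f/2 lens down by 0.4 of a stop number, to f/2.4:
     print(equivalent_exposure(4.0, 2.0, 2.4))  # about 5.76 hours
     ```

     A change that sounds small in f-number terms is a 44% increase in required exposure, which is the point being made.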
  17. I think this is true with most new processing tools. In the early days of Pixinsight, people went absolutely crazy over its HDR wavelets, prompting Dennis Isaacs to ask, 'Why do all Pixinsight images look like pictures of human brains?' It's true, they did, but folks have eased off with the tool and now use it invisibly most of the time. I still can't get BlurX to improve my Samyang image. Olly
  18. The PI thing should be another thread. Let's stick to Blur X. Olly
  19. I don't think they realize what their language sounds like to most people. It's a program written by geniuses for geniuses - so that's me out of the frame! Olly
  20. We managed a first light with our Samyang 135 last night. Most of the stars are very good and all corners hold up for viewing at 50% of full size. At 66% they look acceptable to me but I am absolutely not a pixel peeper and don't want to become one. At full size two corners show clearly distorted stars. However, I ran the image through BlurX because some users report improvements in distorted stars. In my case it made them worse on a first attempt but I need to try different parameters. I'll update this finding as I experiment. Note, I was working with a single sub and Blur X does require a good S/N ratio to give its best. Olly
  21. This is a solution I've used as well. It works, though my heaters were for puppies! (I'm really not a cat person.) Olly
  22. Since you won't be able to try BlurXT without PI, feel free to Dropbox me a linear stack and I'll run it through my copy and send you the results back. Olly
  23. I find that there is often room for further sharpening after BlurX, notably on galaxy cores. However, these details need far less sharpening, and this is good because other methods increase noise and produce other side effects. They are also very fiddly if done carefully, since different details need different scales and parameters in traditional packages. (I don't actually do my extra sharpening in PI but that doesn't alter my answer in any way.) There is often room for further local contrast enhancement, too. This is, in effect, a very large-scale sharpening process, since normal sharpening also increases local contrasts but on a tiny scale. I find PI's Local Histogram Equalization excellent for this. I, ahem, sneak an LHE modded image into Photoshop to use it as a layer. Tell nobody! NoiseXt is now the only NR I use. It is also the only NR routine I have ever applied globally to an image because, frequently, it does no damage whatever to the brighter and more detailed parts. If I'm going to run a second iteration of it, as I sometimes do later in the processing, I'll do it as a Ps top layer and erase the bright parts to leave them unaffected. There will be an equivalent way in PI. Olly