Everything posted by old_eyes

  1. I think that is right for broadband imaging, where you are picking up the 'real' colours of stars, nebulae etc. And even then you have to compensate for filter bandpass and detector quantum efficiency. That's why Pixinsight has tools like Photometric Color Calibration, that matches the stars in your image to their known colour (from a database) and applies corrections to get a more 'realistic' white balance.

     For narrowband the general desire is to highlight the differences between the emission lines. So, for example, Ha and S2 are both red and the distinction is subtle to the human eye. In most narrowband imaging you want to make the difference more obvious, so you map the narrowband filter images, not to their 'correct' colours, but to something that stretches the differences. Hence all the different narrowband palettes that people use, and that @vlaiv is teaching me more about.

     Some people like to replace the stars in narrowband images with RGB stars because they find the 'unnatural' star colours that result unpleasant. But I am also not sure about an image that has a clearly remapped nebula floating in a field of broadband stars. That looks equally odd to me.

     Once you are no longer mapping to 'real' emission colours, it does not matter if you combine non-linear images. There is no reality to maintain, and it gives you the chance to adjust the relative strengths of the different signals in search of a more effective communication of the target structure and characteristics. As I said, it is not clear to me whether you should stretch first and then combine or vice versa, but the consensus is that the job is easier if you stretch narrowband first. What you don't have to worry about is preserving colour balance, because there isn't any - except the balance you choose.
  2. The consensus seems to be to channel combine linear images for RGB and non-linear for narrowband, although I have not heard a good explanation of why. I like using the masked stretch because it gives a similar background for each image, which should mean the combination is more neutral. I think what I have done with the 'normalised' image is to reduce the red and green channels (by the division process) and boost the blue, because its divisor was 1. I get a similar image if I simply multiply the O3 blue channel by 2, except there are more magenta stars.
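     The normalised combination described above can be sketched in NumPy. This is an illustration, not the actual weights or data used: the tiny arrays and the palette weights are placeholders, and the point is simply that dividing each output channel by the sum of its weights rescales red and green while a single weight of 1.0 leaves blue unchanged.

     ```python
     import numpy as np

     # Tiny stand-in arrays for the three narrowband masters; real data
     # would be full stacked frames. Values are illustrative only.
     Ha = np.array([[0.20, 0.40], [0.10, 0.30]])
     O3 = np.array([[0.10, 0.20], [0.05, 0.15]])
     S2 = np.array([[0.05, 0.10], [0.02, 0.08]])

     # Hypothetical palette weights per output channel. They do not sum
     # to 1, so normalisation matters; blue's single weight of 1.0 is
     # why its divisor is 1 and the channel passes through untouched.
     weights = {
         "R": [(S2, 1.0), (Ha, 0.5)],
         "G": [(Ha, 2.0), (O3, 1.0)],
         "B": [(O3, 1.0)],
     }

     def combine(channel_weights, normalise=True):
         """Weighted sum of narrowband images, optionally divided by the
         sum of the weights so all channels end up on a comparable scale."""
         total = sum(w for _, w in channel_weights)
         out = sum(img * w for img, w in channel_weights)
         return out / total if normalise else out

     rgb = np.stack([combine(weights[c]) for c in "RGB"], axis=-1)
     ```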
  3. Thanks for that. Your explanations are excellent, but I do sometimes feel I am clinging on to understanding by my fingertips! If I take away the normalisation I get this: brighter and warmer. An interesting effect; I need to go back through the matrix maths to understand what I have done.
  4. Quite happy to say I overdid the saturation. Something I am prone to. I have not yet mastered visualising what the XISF image will look like when crushed into a JPG or PNG. Comparing the XISF with the JPG of the original channel combination posted above, they are significantly different. The JPG shows higher saturation and a different colour balance. In fact, having now exported that intermediate image as a JPG, I like the general balance of this one best. I might go back and do the MMT processing without further pushing saturation and contrast.
  5. I am not too worried about some green, it is the dominance I try to avoid. This is the result after initial channel combination. Process to this point was:

     - WBPP calibration, registration and stacking
     - Dynamic crop to remove dodgy edges
     - DBE
     - EZ Denoise
     - Masked Stretch
     - Channel Combination (using your example weights, but dividing by the sum of the weights for each channel to normalise)

     Processing after this image to get the final version:

     - Curves Transformation to increase brightness and contrast plus initial saturation boost
     - Starnet to create star mask and starless version
     - Dark Structure Enhance
     - Extract luminance from starless
     - Further contrast and saturation boost with Curves Transformation
     - MMT (6 layers) to reduce noise and slightly boost structure
     - Dark Structure Enhance, MMT and Curves Transformation on the luminance image to get as much detail as I can
     - LRGB Combination
     - Pixel Math to reunite starless and star mask
     - Export to JPG
     - Tweak contrast and brightness of the JPG in Affinity
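     The post does not say which PixelMath expression was used to reunite the starless image with the star mask; a screen blend is one common choice, so here is a hedged NumPy sketch of that recipe, assuming data normalised to [0, 1].

     ```python
     import numpy as np

     def screen_blend(starless, stars):
         """Screen blend, a common PixelMath recipe for recombining a
         starless image with its extracted stars: ~((~a)*(~b)), where
         '~' means 1 - x on [0,1]-normalised data."""
         return 1.0 - (1.0 - starless) * (1.0 - stars)

     # Toy inputs: a flat background with one bright star pixel.
     starless = np.array([[0.2, 0.5]])
     stars = np.array([[0.0, 0.6]])
     reunited = screen_blend(starless, stars)
     ```

     The screen blend never clips: where the star image is zero the starless pixel passes through unchanged, and bright star cores approach but never exceed 1.0, which is why it is often preferred over simple addition.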
  6. I tried @vlaiv's methods on some IC1396 data. This is the image. Processed as normal, but with no changes to colour balance except increasing saturation and adjusting contrast and brightness overall. Many other faults no doubt, but a nice range of colours.
  7. Now that is clever! Thanks for explaining. I was stumbling around trying different blends without any framework. This is excellent and will be tried as soon as possible!
  8. Gorann - I'll give that a try. I have used HOS, but find it underwhelming on many targets. A 50:50 blend of HOO and SHO often gives an interesting starting point, but it depends on the target.
  9. Hi Olly, Of course. When you map the strongest signal to the green channel you get a predominantly green image. However, it often does overwhelm the signals from the other filters, and as @vlaiv says, it is where the human visual system is most sensitive. The result can easily be an acid green image with little colour variation. I am looking for a process that preserves data, but allows stronger visualisation of the different narrowband components. I have found for some targets a 50:50 blend of HOO and SHO produces an image which better balances the filters. Perhaps I just need to mess around more with the palettes to find something that works for each specific image.
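     The 50:50 palette blend mentioned above is straightforward to write down. A minimal NumPy sketch, with toy arrays standing in for real stretched narrowband stacks: build the SHO and HOO colour images from the same three masters, then average them.

     ```python
     import numpy as np

     # Placeholder narrowband masters (real ones would be full frames).
     Ha = np.array([[0.8, 0.3]])
     O3 = np.array([[0.2, 0.6]])
     S2 = np.array([[0.1, 0.2]])

     sho = np.stack([S2, Ha, O3], axis=-1)  # Hubble palette: S2->R, Ha->G, O3->B
     hoo = np.stack([Ha, O3, O3], axis=-1)  # HOO palette: Ha->R, O3->G and B

     blend = 0.5 * sho + 0.5 * hoo          # the 50:50 mix of the two palettes
     ```

     Note how the blend dilutes the acid green: the green channel becomes the average of Ha and O3 instead of pure Ha, while Ha still contributes strongly to red, so no data is discarded.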
  10. Thanks for the explanation of human visual systems and the role of the green channel. I agree with your points, but aesthetically (rather than scientifically), dominant acid green does not look that good. I am looking for a way to preserve the Ha data, whilst producing a more nuanced appearance. It is not the straight Hubble palette, but I have not found a general purpose alternative yet.
  11. Thanks for the link. SCNR is not quite as brutal a process as I had imagined. Clearly, it can be applied with some subtlety. At least I now have a better understanding of what SCNR does.
  12. Have to agree with @Adam1234. The Veil is a perfect subject for pure HOO palette. Green and blue channels 100% O3. Here is mine from earlier this year.
  13. I have a bit of a problem of image processing philosophy, and now I am tangled up in knots. When I look at narrowband workflows, I see many people using SCNR in Pixinsight to reduce the green from a strong Ha signal in the Hubble palette. Often quite aggressively. But it seems to me that this is useful data that you have painfully collected, and now you are throwing it away. You chose to map Ha to green and now you don't like the result. But is it right to 'zap' the green? Part of the problem is I don't really understand what SCNR does. When it eliminates a green pixel what does it replace it with? Is a better approach to make a green colour mask of the image and then adjust its hue to be more pleasing? At least then you are maintaining the intensity of the signal, just shifting where it maps on the colour wheel. Maybe I don't understand, but SCNR just seems a bit heavy handed. Any thoughts from narrowband and Pixinsight gurus?
  14. Narrowband exposures vary with object. Some have Ha and O3 but almost no S2 (Veil and Crescent nebulae). Some are rich in Ha and S2 but weak in O3 (California - at least when I image it!). Some are rich in all three. If I am dealing with a target with emission across all three filters, my starting point is 2:3:4 Ha:O3:S2. The easiest way to find out is to look at other people's renditions of your chosen target - here or on astrobin. That often gives you an idea what works. For example, layered over the strong Ha of the Flying Bat Nebula is the faint O3 signal called the Squid. You need insane amounts of subs to pull that out of the background. So the answer to your original question is - it depends. I always rely on those who have gone before me and publish the results of their efforts. And then when you have the data there is the whole question of colour palettes and blends. Narrowband is fun!
  15. First image using Pier 1 at Roboscopes. 10" Dall-Kirkham with 1678mm FL. ASI 1600MM Pro camera binned 2x2. 240 sec subs: 15 L, 59 red, 57 green and 37 blue. Processed in Pixinsight. A lot of colour noise, but happy as a first shot. Got the balance of filters wrong.
  16. On reflection, I think I went over the top on contrast, saturation and sharpening. It looked fine in Pixinsight after several hours of processing, but now, after conversion to JPG and seeing it on another screen, it looks rather lurid. I think I like this version better: I guess the lesson is that image processing is like writing. What looks great after a long session may look crap the following morning. Don't be in so much of a hurry to post!
  17. I recently posted a four-panel mosaic of the Sadr region of Cygnus using data from Pier 14 at Roboscopes in Spain. I was intrigued by the way that the Crescent Nebula popped out and seemed to have a wave breaking over it. So I reprocessed just that section to get this. Trying, not entirely successfully, to pull the wave out a bit more:
  18. This data was taken on Pier 14 @Roboscopes in Spain. A reduced FSQ106 at 387mm FL coupled with an ASI 6200MM Pro. It was originally part of a three-panel mosaic from Clamshell to Pelican, but the Clamshell part never looked right and unbalanced the composition, so I have processed it separately. As a straightforward SHO palette image it was a bit dull (dominated by the Ha green). HOO was also dull, as the O3 was rather featureless. However, a 50:50 blend of the SHO and HOO images gave me a good starting point for this image.
  19. The problem with the Tak is the reducer, we think. At its native 530mm FL it should be OK, but reduced to 387mm we seem to be asking too much of the reducer. The problem with full frame cameras! I was a bit worried about the bottom right panel, but I have looked very carefully at the individual mosaics for the three filters, and there seems to be real data there, particularly in O3. I used DNALinearFit to bring the individual panels together, and on close inspection I can't see a shift in the background across the overlap. It also did not seem to matter which way round the sub-panels I went with DNALinearFit. I think I am going to leave it for a while and get on with some other images. Mosaics from full-frame high-res CMOS cameras take some processing, and I don't want to go back to the start if I can avoid it. Thanks for the comments. And if you have another process for balancing panels at the mono and linear stage, I would love to hear it.
  20. This was from data taken on Pier 14 at Roboscopes. Reduced FSQ106 at 387 mm FL with an ASI 6200MM full frame mono camera. 2x 2 mosaic. The sensor is so big that we get coma at the corners of the image using a reducer. This makes mosaics quite tricky, particularly 2 x 2 where the corners of four frames meet in the centre. You have distortions going in all directions! However, if you stand back and look at the overall effect and don't pixel-peep, you don't notice so much. I like the way the Crescent Nebula pops out at the bottom with a Hokusai "Great Wave" breaking over it. Image is 7d 22' by 5d 43'
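     A back-of-envelope check on the field of view, assuming the usual plate-scale formula and the published specs for the ASI 6200MM sensor (3.76 µm pixels, 9576 x 6388 px - my assumption, not stated in the post):

     ```python
     # Plate scale (arcsec/px) = 206.265 * pixel_size_um / focal_length_mm
     pixel_um = 3.76       # assumed ASI 6200MM (IMX455) pixel size
     fl_mm = 387.0         # reduced FSQ106 focal length from the post
     px_w, px_h = 9576, 6388  # assumed full-frame sensor resolution

     scale = 206.265 * pixel_um / fl_mm   # ~2.0 arcsec per pixel
     fov_w = px_w * scale / 3600.0        # single-panel width in degrees, ~5.3
     fov_h = px_h * scale / 3600.0        # single-panel height in degrees, ~3.6
     ```

     A single panel covers roughly 5.3° x 3.6°, so a 2x2 mosaic with generous overlap landing at the quoted 7° 22' x 5° 43' is consistent.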
  21. I know it’s hard, but you have to believe. It looks like magic, but it just works. Like they say, babysit it the first few times to give yourself confidence, then let it roll. I have used three different astro-imaging packages, and they have all handled it impeccably once set up.
  22. This is the new raw mosaic after ChannelCombination: Nothing has been done to the image except Invert-SCNR-Invert to blitz purple star haloes, so plenty more to gain I hope! Edges look pretty good.
  23. So, I aligned all the subs for the various panels to the Ha filter and chose "high quality" in Mosaic by Coordinates. That did not improve things significantly, as can be seen from the Ha mosaic at the edge: you can see the same problems in the boundary area. I next tried the Photometric Mosaic script and it seems to work well. No trace of the misalignment, and fewer problems with stars on the panel boundaries distorting and leaving artifacts. I don't understand how it works compared to Gradient Mosaic Merge, but it seems to be a success. Thank you @tomato and @Laurin Dave