
Very interesting thread, not least the Hubble colour comparison.  Decided to see if I could get my own, recent M101 image to look more like Hubble's.  Here are the two - my original, 'traditionally' coloured version, and the toned down one, done by desaturation and very gentle colour curve adjustment, just eyeballing it.  Most of the pink hints in the Ha I have in the original regrettably disappear (modded DSLR camera), but which of the two do people prefer?

M101c crop-denoise.jpg

M101d crop-denoise.jpg


Same image cropped a bit for a closer look--still seems a bit "clayish" as Olly says.  But at least it's no longer garish and it has respectable detail

Image05j3crop.thumb.jpg.f3259168850d434821fbd3060fbed6c7.jpg

Edited by Rodd
14 minutes ago, Rodd said:

Same image cropped a bit for a closer look--still seems a bit "clayish" as Olly says.  But at least it's no longer garish and it has respectable detail

Image05j3crop.thumb.jpg.f3259168850d434821fbd3060fbed6c7.jpg

I think it looks pretty good.

14 minutes ago, Rodd said:

"clayish"

Interesting description, Olly 😁

20 hours ago, ollypenrice said:

I do things a little differently from most people. For one thing I don't use deconvolution. (I know this is odd!) This is how I did my version:

1) DBE in PixInsight, then Photoshop.

2) Basic Log stretch and black point adjustment till the brighter background pixels reached 23.

3) Background noise reduction in Curves. Pin the curve at 23 and place fixing points above that. Raise the curve below 23 to taste, so lightening the darker pixels to reduce speckle.

4) Copy layer, working on the top one. In Curves, pin the background again at 23 and add one fixing point below that. Continue to stretch above that. (There is no point in further stretching the background sky.) This stretched faint stuff considerably above its noise floor but the medium bright stuff was still safe from noise.  Erase the top layer bright stars outside the galaxy and flatten.

5) Copy layer, top layer invisible, bottom active. Heavy noise reduction. I used Noel's Actions, Reduce Space Noise. I want the faint outer arms and the regions between the spirals to be noise free. Too much NR is OK at this stage. We won't be applying all of it.

6) Top layer visible and active. Use Colour Select (it works on greyscale) to select the faint and semi-faint stuff where the noise is a problem. Expand by 1 and feather by 1. Set the eraser to 50% and erase the top layer where selected. Too much, too little? Go back and reduce the eraser percentage, not enough, repeat it/increase it, etc. If some small areas are still noisy remove the selection and use a small eraser brush to remove them to the percentage you want. (SO much better than masks!!!)

7) Sharpening. Copy layer, top layer invisible, bottom layer active. Select stars (Noel's Actions or MartinB's tutorial on SGL.) Expand and feather selection then select inverse so the stars are excluded. I'll now use a mixture of Smart Sharpen and Unsharp Mask, mostly the latter. I  don't look at damage it does to the lower brightnesses, I look only at the parts it enhances. I'll only be keeping those. Most of the image will look horrible but a few bits around the core will look great.

8) Top layer visible and active, erase at high percentage those areas which will stand it.  Reduce the percentage of the brush and erase those areas which will stand some sharpening, etc. Basically work down the brightnesses with an ever reducing percentage. It is often helpful to do the sharpening with different values in several iterations. Flatten and that's it.

Layers and eraser versus masks?  If, by some dark magic, you can create the perfect mask which goes just where you want it at the opacity at which you want it, then use masks. If you prefer to see what you are doing, where you are doing it, while you are doing it then use layers and eraser. I know which I prefer!

Olly

Edit: Some of these methods cannot be used on targets with darker-than-background dusty regions.
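The idea behind step 3 - pin the curve at the background level and lift only the darker pixels - can be sketched numerically. This is only an illustration of the principle (the workflow above is done in Photoshop's Curves); the function name, pin value, and lift fraction are stand-ins:

```python
import numpy as np

def lift_below_pin(img, pin=23, amount=0.5):
    """Raise only the pixels darker than `pin` part-way toward it,
    leaving everything at or above `pin` untouched -- a rough NumPy
    stand-in for pinning the curve at 23 and raising it below that."""
    out = img.astype(np.float64)
    dark = out < pin
    out[dark] += amount * (pin - out[dark])
    return np.clip(out, 0, 255).astype(np.uint8)

# A noisy patch of 'background sky' centred near 20 ADU:
rng = np.random.default_rng(0)
sky = rng.normal(20, 4, (100, 100)).clip(0, 255).astype(np.uint8)
smoothed = lift_below_pin(sky)

# No pixel looks at its neighbours, so the grain pattern is kept;
# only the spread of the dark pixels shrinks.
print(sky.std(), smoothed.std())
```

Because each pixel is remapped independently, this is not a blur: it compresses the dark end of the histogram without any pixel-to-pixel communication, which is exactly why the 'vaseline on the lens' look does not appear.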

 

 

Olly. Got around to reading this. You do know that you can apply a mask then hit “show mask” and the mask is still active but 100% transparent?  Also, many tools have a dynamic mode so you get to see in real time what your actions do, then hit “execute” and they become permanent.  But masks are hard to fit.  I really should try PS. Thanks to Davey-T I have a copy on disc. I just don’t know where to start.  Anyway, thanks for the workflow.  I hope to one day know what it means.  BTW do you agree that more lum (or RGB for that matter) is unlikely to be of significance here?

2 hours ago, Rodd said:

Olly. Got around to reading this. You do know that you can apply a mask then hit “show mask” and the mask is still active but 100% transparent?  Also, many tools have a dynamic mode so you get to see in real time what your actions do, then hit “execute” and they become permanent.  But masks are hard to fit.  I really should try PS. Thanks to Davey-T I have a copy on disc. I just don’t know where to start.  Anyway, thanks for the workflow.  I hope to one day know what it means.  BTW do you agree that more lum (or RGB for that matter) is unlikely to be of significance here?

No, I think more of both would help the image but it would need to be quite a lot more to make a big difference. I say this because I've just processed an M101 dataset captured by another member and that went a little deeper and would take more sharpening. The big difference was that his was shot from some altitude in the Alps under a dark sky. You're fighting skyglow and your only weapon is exposure time, as Craig Stark has discussed persuasively.

I do think you're getting there with the image and I like your latest one.

If you look at the third step in my processing method you'll find a very effective technique for reducing background sky noise. Because it involves no pixel-to-pixel communication it is not a blurring technique and does not produce the oily, 'vaseline on the lens' look. There are other ways of reducing noise with less pixel-to-pixel communication than the NR algorithms, too, but this one is easy and retains exactly the same 'grain pattern' as the original but just reduces by as much or as little as you like. I found it a big help in your luminance image.

6 hours ago, Erling G-P said:

Very interesting thread, not least the Hubble colour comparison.  Decided to see if I could get my own, recent M101 image to look more like Hubble's.  Here are the two - my original, 'traditionally' coloured version, and the toned down one, done by desaturation and very gentle colour curve adjustment, just eyeballing it.  Most of the pink hints in the Ha I have in the original regrettably disappear (modded DSLR camera), but which of the two do people prefer?

 

 

The entire image is blue-dominated in the first example and is still slightly so in the second, to my eye.  I always measure my background sky colour using Photoshop's Colour Sampler eyedropper set to read 3x3 or 5x5 pixel samples. I'm aiming for parity in each colour. This really is one of the foundation stones of any image I process. I'm not sure about the second but I'll bet the first one is high in blue.

Olly
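The eyedropper measurement Olly describes is straightforward to approximate in code. A minimal sketch (the function name and the synthetic sky values are invented for illustration) that averages a 5x5 patch per channel, like Photoshop's Colour Sampler:

```python
import numpy as np

def sample_background(img, y, x, size=5):
    """Mean R, G, B over a size-by-size patch centred on (y, x) --
    a stand-in for Photoshop's Colour Sampler set to 5x5 average."""
    h = size // 2
    patch = img[y - h:y + h + 1, x - h:x + h + 1, :]
    return patch.reshape(-1, 3).mean(axis=0)

# Synthetic 'sky' that is too blue: R and G around 20, B around 38.
rng = np.random.default_rng(1)
sky = np.stack(
    [rng.normal(m, 2.0, (50, 50)) for m in (20.0, 20.0, 38.0)], axis=-1)

r, g, b = sample_background(sky, 25, 25)
print(r, g, b)  # blue reads well above the other two channels
# Parity (r ~= g ~= b, e.g. the 23/23/23 target) would mean a neutral sky.
```

Averaging over a patch rather than reading single pixels is what makes the measurement robust against background noise.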

13 hours ago, ollypenrice said:

I agree, though I think that exaggerating galaxy halos and outer arms, if they are genuinely buried in the data, is the whole point of a certain kind of imaging.

By all means, but a good title or description of what the imager was trying to achieve is appreciated. (An honest label justifies pretty much any presentation.) Here's a good example of exaggeration to show Ha clouds around M31 (although the galaxy itself is a bit too bling for my taste).

57 minutes ago, Knight of Clear Skies said:

By all means, but a good title or description of what the imager was trying to achieve is appreciated. (An honest label justifies pretty much any presentation.) Here's a good example of exaggeration to show Ha clouds around M31 (although the galaxy itself is a bit too bling for my taste).

Yes, like RBA I do tend to include pointers towards any extreme processing and its purpose. He has a particular style and is very skilled but he certainly doesn't play the shrinking violet!

Olly


Well--I feel this post has really helped me.  Thanks Vlaiv for shining the light of truth (a light of truth...as truth has many filters!) on the concept of palette for galaxies.  Here is a final image.  I made a few adjustments to bring out blue regions without increasing saturation, decreased background star saturation a bit, and tried to smooth out the background without using noise reduction (very slight impact, if any).

Image05final2.thumb.jpg.88e24f0ad11c37afaea1a9ed54471f7c.jpg

6 hours ago, ollypenrice said:

The entire image is blue-dominated in the first example and is still slightly so in the second, to my eye.  I always measure my background sky colour using Photoshop's Colour Sampler eyedropper set to read 3x3 or 5x5 pixel samples. I'm aiming for parity in each colour. This really is one of the foundation stones of any image I process. I'm not sure about the second but I'll bet the first one is high in blue.

Olly

Thanks for the comment and advice.  Not using Photoshop myself, so would have to find a different way to do this.  Don't know if AstroArt has a similar function I could use.

1 hour ago, Erling G-P said:

Thanks for the comment and advice.  Not using Photoshop myself, so would have to find a different way to do this.  Don't know if AstroArt has a similar function I could use.

What you can do in AA is go to Colour - Split Channels and then mouse over a region of background sky in each channel. The ADU is shown at the bottom of the screen. This allows you to measure the background in each channel. Unfortunately this is measuring individual pixels rather than a 3x3 or 5x5 average as in Ps so you get some variation. Still, it does give you an idea. It may be possible to draw a box and get the average within it, I don't know. It may also be possible that the free GIMP has an equivalent of the Ps sampler. I do advise finding some way of doing it, though. When the background is right the rest becomes a lot easier.

Olly

Edit, I ran a screen grab of your first image through the Ps Colour sampler system I use and here's the result:

509633338_bluesky.thumb.JPG.fc6bb84ebbafaecdf381b18b38d01560.JPG

You'll see the 4 sample points on the image and the RGB values in the lower four boxes of the Info window. Red and green are well balanced but very low and blue is about twice as bright, or almost, depending on where you look. I aim for at least 20/20/20 though I'd prefer 23/23/23 when my data allow it. Sometimes they just don't.

Only the colour balance is holding your image back. It's a real cracker in other respects.

Edited by ollypenrice

An excellent thread and a very thought provoking discussion on what colour galaxies should really be. I must admit I am always disappointed in the output from APP’s ‘calibrate star colours’ tool, my images always look washed out after applying it and there is not enough blue in the spiral arms for my liking, but as @vlaiv has pointed out, I’m basing this on the many vibrant renditions I have seen posted here and elsewhere by imagers way more accomplished in the art than I am. I would always tend towards the rigorous scientific approach, but could I get used to the bland colour palette? I’m not so sure. Maybe I’ll just capture twice as much L data and stick to monochrome.

I thought capturing the data was pretty fraught, but that’s looking like the easy part.

5 minutes ago, tomato said:

but could I get used to the bland colour palette?

Why would you? We don't have any scientific responsibility (whatever that is), so can very much do whatever we like.

5 minutes ago, tomato said:

I thought capturing the data was pretty fraught, but that’s looking like the easy part.

It's the more mechanical part of AP. Processing is the artistic part.

Edited by wimvb
4 minutes ago, tomato said:

An excellent thread and a very thought provoking discussion on what colour galaxies should really be. I must admit I am always disappointed in the output from APP’s ‘calibrate star colours’ tool, my images always look washed out after applying it and there is not enough blue in the spiral arms for my liking, but as @vlaiv has pointed out, I’m basing this on the many vibrant renditions I have seen posted here and elsewhere by imagers way more accomplished in the art than I am. I would always tend towards the rigorous scientific approach, but could I get used to the bland colour palette? I’m not so sure. Maybe I’ll just capture twice as much L data and stick to monochrome.

I thought capturing the data was pretty fraught, but that’s looking like the easy part.

I'm planning to do a little demonstration as soon as I purchase the kit needed to do it. I guess this will help people understand what is going on.

In my quest for color calibration, one of the options out there is relative color calibration. I'm currently developing an algorithmic approach to the above problem of a narrow range of stars - star calibration failed because it only uses a very small subset of possible stars. I'm now working on synthetic stars in the range 2000 - 50000K. That should be plenty of range.

In any case, relative color calibration depends on an already calibrated source - like a good DSLR camera, a color passport (it does not need to be a calibrated one for this purpose - just a printed good range of colors will do) and a regular lens. That is the part of the kit I'm missing at the moment - a way to mount a regular lens on my ASI1600.

I want to show what the "regular" processing we use in astrophotography does to normal colors and daytime photography - by providing side by side shots: one with a DSLR in common lighting conditions (summer is coming and I was thinking of regular sunlight as the source of light) and one with an astro camera and RGB filters.

That way we can compare how accurate colors are when you shoot them the way we are shooting astrophotography.

I wonder if anyone did this and posted results online? I have not searched to see if I can find this. Maybe someone already having this kit could give it a go?

8 minutes ago, wimvb said:

Why would you? We don't have any scientific responsibility (whatever that is), so can very much do whatever we like.

It's the more mechanical part of AP. Processing is the artistic part.

Personally I would like to get my colours to be close to what we would see if we had eyes which had evolved to allow us to see colour in deep sky objects. (Perhaps if the females of our species greatly preferred males who could do this! That's not so implausible. Such males would waste less money on otherwise unproductive astrophotography. 🤣)  We'll always have to accept that it's difficult to know what colour we would see if we had such eyes but the science does give us some clues and I think there has been a collective movement towards the intensification of blue in spiral arms. I'm going to get out of my blue period and accept less colourful galaxies for a while!

Olly

30 minutes ago, ollypenrice said:

What you can do in AA is go to Colour - Split Channels and then mouse over a region of background sky in each channel. The ADU is shown at the bottom of the screen. This allows you to measure the background in each channel. Unfortunately this is measuring individual pixels rather than a 3x3 or 5x5 average as in Ps so you get some variation. Still, it does give you an idea. It may be possible to draw a box and get the average within it, I don't know. It may also be possible that the free GIMP has an equivalent of the Ps sampler. I do advise finding some way of doing it, though. When the background is right the rest becomes a lot easier.

Olly

Edit, I ran a screen grab of your first image through the Ps Colour sampler system I use and here's the result:

509633338_bluesky.thumb.JPG.fc6bb84ebbafaecdf381b18b38d01560.JPG

You'll see the 4 sample points on the image and the RGB values in the lower four boxes of the Info window. Red and green are well balanced but very low and blue is about twice as bright, or almost, depending on where you look. I aim for at least 20/20/20 though I'd prefer 23/23/23 when my data allow it. Sometimes they just don't.

Only the colour balance is holding your image back. It's a real cracker in other respects.

Thanks a bunch Olly; that is very interesting indeed.  Must see what I can do with AstroArt in regard to a similar analysis.

29 minutes ago, ollypenrice said:

Personally I would like to get my colours to be close to what we would see if we had eyes which had evolved to allow us to see colour in deep sky objects. (Perhaps if the females of our species greatly preferred males who could do this! That's not so implausible. Such males would waste less money on otherwise unproductive astrophotography. 🤣)  We'll always have to accept that it's difficult to know what colour we would see if we had such eyes but the science does give us some clues and I think there has been a collective movement towards the intensification of blue in spiral arms. I'm going to get out of my blue period and accept less colourful galaxies for a while!

Olly

Intensified blue in spiral arms actually has roots in science. There are two reasons for too much blue in images.

The first has to do with science - at some point, it was very interesting to scientists to show certain features of galaxies emphasized, like we do with an Ha layer on top of our images. Star forming regions contain hot young stars, and for galaxy formation and evolution science it was very important to distinguish galaxies that have star forming regions from those that don't. Boosting blue emphasizes those regions because hot young stars have bluish light.

The second reason has to do with quantum mechanics :D. If you look at the response curves of modern sensors, they all more or less share the following feature: the green part of the spectrum is most sensitive and the blue and red ends of the spectrum are less so. Sometimes blue is a bit more sensitive than red, and more often it is the red part of the spectrum that is more sensitive than blue.

The fact that green is the most sensitive of the three plays into our hands. Human vision is such that we perceive brightness mostly from green.

For the sake of argument we could say that blue is about as sensitive as red (the fact that red is often more sensitive just adds up later). Because of the way cameras work, we actually need to boost blue to get the camera signal close to what we see. We define color in terms of energy, but cameras are photon counting devices. Photon energy depends on wavelength - blue photons have higher energy than red photons, so for the same total energy there will be more red photons than blue. If we have an equal-energy light source (one that shines equally across the visible spectrum - illuminant E), our cameras will gather more red than blue photons. To compensate, we need to boost the blue component.
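The photon-counting argument can be put into numbers. A short sketch, with illustrative band-centre wavelengths of 450 nm and 650 nm (not actual filter specifications):

```python
# Photon energy is E = h*c / wavelength, so for the same total energy a
# photon-counting sensor collects photon counts proportional to wavelength.
H, C = 6.626e-34, 2.998e8   # Planck constant (J*s), speed of light (m/s)
blue, red = 450e-9, 650e-9  # illustrative band centres (m)

e_blue = H * C / blue       # energy of one blue photon (J)
e_red = H * C / red         # energy of one red photon (J)

# Equal energy per band -> red/blue photon-count ratio = e_blue / e_red:
ratio = e_blue / e_red
print(round(ratio, 2))  # 1.44 -- about 44% more red photons than blue
```

So even before sensor QE differences, a camera fed an equal-energy source counts roughly 44% more red photons than blue, which is one reason the blue channel needs a boost to match perception.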


Finally managed to do a somewhat OK color calibration; the only problem is - I don't know if it is correct :D

It does look correct(ish). I'm going to explain the details in a separate thread that I will link to. For now, just two images here - one gamma corrected color and one linear RGB (essentially wrong but closer to what we are used to seeing).

Linear RGB (wrong):

linear.thumb.png.240ebe3a7694f1b998f2b07dce7360ee.png

sRGB gamma applied - correct color rendition:

srgb.thumb.png.82474c430fd315ec678932deecc91058.png

This calibration was performed on stars (or rather ideal black body emitters) with temperatures in the range 2000K-50000K (200K interval - 240 calibration samples in total).

Some assumptions that might not be true:

- Rodd's telescope does not add a "tone" of its own - the glass is completely neutral

- ASI1600 QE graph published by ZWO is correct

- Filters used were Baader RGB filters and their published transmission graphs are correct.

- The atmosphere is completely transparent - or, another way to look at it: this is what the image looks like when viewed through our atmosphere without correction
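The synthetic-star grid described above can be sketched with Planck's law. This is only an outline of the approach: the top-hat 'filters' below are crude placeholders, not the Baader transmission curves or ASI1600 QE data the calibration actually uses:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck(wl_m, temp_k):
    """Spectral radiance of an ideal black body (Planck's law)."""
    return (2 * H * C**2 / wl_m**5) / (np.exp(H * C / (wl_m * KB * temp_k)) - 1)

temps = np.arange(2000, 50000, 200)    # 2000 K upward in 200 K steps
wl = np.linspace(400e-9, 700e-9, 301)  # visible band

# Crude top-hat 'filters' standing in for real RGB transmission curves.
bands = {"R": (600e-9, 700e-9), "G": (500e-9, 600e-9), "B": (400e-9, 500e-9)}

samples = []
for t in temps:
    sed = planck(wl, t)
    rgb = tuple(sed[(wl >= lo) & (wl < hi)].sum() for lo, hi in bands.values())
    samples.append(rgb)

print(len(samples))  # 240 synthetic calibration samples
```

Each sample is an (R, G, B) triple for one blackbody temperature; a real pipeline would weight the SED by filter transmission and sensor QE, and convert photon counts as discussed above, before fitting the calibration.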

6 minutes ago, vlaiv said:

This calibration was performed on stars (or rather ideal black body emitters) with temperatures in the range 2000K-50000K (200K interval - 240 calibration samples in total).

Some assumptions that might not be true:

It just struck me...there is color balance, and then there is color intensity, or amount of chroma.  The color balance can be the same in your last image above and in the same image with the colors multiplied by 100 - as long as the balance is the same.  So I think I was referring more to color intensity than color balance.  After all, structures visible with the naked eye can be very faint and indistinct, whereas a CCD/CMOS camera with hours of integration will depict the same structures more firmly, with more contrast and higher dynamic range.  So too with colors - the human eye won't pick up much, but after 20 hours of exposure the camera will pick up much more.  So, you have targeted color balance - which seems right.  Now, please target color intensity, for that is what I struggle with.  I accept the color calibration of the software I use - it is generally considered pretty good - but the color intensity is another matter.


Here is my attempt at the  "Hubble Palette", using my Esprit 150/ASI 178 data, about 19 hrs worth. My starting point was using the CSC tool in APP, then no science I'm afraid, just trying to reproduce the HST colour range. Apologies for the horrible background.

I find this easier to replicate than the traditional blue tones.

combine-RGB-image-mod-lpc-cbg-St.thumb.png.ffeed1997f5f0b79914693c5f719ae0c.png

8 minutes ago, Rodd said:

It just struck me...there is color balance, and then there is color intensity, or amount of chroma.  The color balance can be the same in your last image above and in the same image with the colors multiplied by 100 - as long as the balance is the same.  So I think I was referring more to color intensity than color balance.  After all, structures visible with the naked eye can be very faint and indistinct, whereas a CCD/CMOS camera with hours of integration will depict the same structures more firmly, with more contrast and higher dynamic range.  So too with colors - the human eye won't pick up much, but after 20 hours of exposure the camera will pick up much more.  So, you have targeted color balance - which seems right.  Now, please target color intensity, for that is what I struggle with.  I accept the color calibration of the software I use - it is generally considered pretty good - but the color intensity is another matter.

With color matching, "color intensity" as you put it means a different color. It is not light intensity - that is, the intensity of the light making up the color. It is about the proper RGB ratio regardless of intensity. Intensity is dictated by luminance; it is important that the ratio stays the same.

Look what happens when you keep RGB ratio the same, just change intensity:

image.png.a738365211e0fa259912c1950f42d55b.png

It is the same color - darker or lighter - same color

Btw - I just realized I messed up above :D

I did not account for photon energy in my code. Back to the drawing board.
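The 'same ratio, same color' point above can be checked with Python's standard colorsys module: scaling all three channels by the same factor changes only the value (brightness), never the hue or saturation. The specific colour triple here is arbitrary:

```python
import colorsys

# One colour at three overall intensities: same R:G:B ratio throughout.
base = (0.30, 0.45, 0.60)
for scale in (0.5, 1.0, 1.5):
    r, g, b = (min(c * scale, 1.0) for c in base)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"scale {scale}: hue={h:.3f} sat={s:.3f} value={v:.3f}")
# Hue and saturation come out identical each time; only value changes.
```

This is why balance (the ratio) and intensity (the overall scale, driven by luminance) can be adjusted independently.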

 

Just now, vlaiv said:

With color matching, "color intensity" as you put it means a different color. It is not light intensity - that is, the intensity of the light making up the color. It is about the proper RGB ratio regardless of intensity. Intensity is dictated by luminance; it is important that the ratio stays the same.

Look what happens when you keep RGB ratio the same, just change intensity:

image.png.a738365211e0fa259912c1950f42d55b.png

It is the same color - darker or lighter - same color

Btw - I just realized I messed up above :D

I did not account for photon energy in my code. Back to the drawing board.

 

I know--that is what I am saying.  If I process an image and get the perfect color balance I may still not have the proper color intensity.  The balance may be perfect, but if the amounts of RGB are too low you won't see any color.

