
The “two” Andromeda galaxies?



Okay, so I have noticed that whenever I see pictures of Andromeda that people have taken, they fall into two distinctive looks. The first has a more muted, yellow-orange look to it. The second adds beautiful blues, with stunning jewel-like reddish/pink stars dotted throughout.

1. Can anyone explain the difference in the images?

2. Is it purely down to image capture?

3. Or down to editing?

4. What setup would you need for each (i.e. the key differences in each rig)?

5. And which do you like best (obviously highly subjective!)?

 

 

[these stunning images are NOT my own and I take no credit or ownership over them. They are merely here for me to illustrate the artistic differences between each capture. The first is from SGL user “frugal.” I found the second image downloaded on my phone, so the photographer is unknown. But it is a fantastic image. If I find out, I will add the name in]

[attached image 1 - the muted, yellow-orange rendition]

[attached image 2 - the blue rendition with reddish/pink regions]


1. Color balance.

2. No, image capture has almost nothing to do with it.

3. Yes, "editing" - or rather data processing - has almost everything to do with why they are different.

4. It depends on "raw" camera sensitivity - each camera has a different relative sensitivity between color components. A bit is also down to sky conditions - the level and type of LP.

5. I like the scientifically accurate color version best (almost nobody processes their images like that, but if you find an image done in PixInsight where the author says they used stellar color calibration, it should be close to proper color).

Here is a quick manipulation of the second image to make it look more like the first:

[attached image - the second image re-balanced to resemble the first]


I think I could also turn image 2 into something resembling image 1 very quickly, but I wouldn't be confident trying the reverse (making 1 look like 2).

I also find it interesting that a lot of people go for the second style, and yet I haven’t seen a great tutorial yet on how to achieve that look. 


As @vlaiv wrote, camera sensitivity determines a basic balance in the raw data, as do sky conditions. But all this can be, and is, adjusted during post-processing. To get to image 2, I would use an LRGB workflow, even if the data is from an OSC camera. In my experience you have to push the colour quite hard to get that level of saturation.


The astrophysics tells us that the stars of the central bulge are what Walter Baade called 'Population II', so they tend to be older, cooler and redder. The spiral arms are home to the hot young blue giants and the HII regions of star formation, which are rich in ionized hydrogen. This would square with a reddish core, bluish spirals and patches of red for the star-forming regions. This is pretty much what I get when I perform a colour calibration of my RGB data. The red star-forming regions are not particularly striking in a straightforward red-green-blue capture. They reveal themselves when we image M31 through an Ha filter, which blocks everything but the light from ionized hydrogen. The addition of Ha does not invent anything which isn't there, it simply gives it preferential emphasis. This is one way in which M31 images will certainly vary. In mine I used the Ha sparingly but others may be more lavish with it.

When it comes to what precise shades of blue and red are the truest, it gets very difficult. We're used to the idea that faint parts of an object need lots of signal in order to appear at all. The same is true for objects naturally weak in colour intensity. The colours will only distinguish themselves when you have a lot of data. I'd feel happier if my blues had come out less cyan but they didn't and so I just accept them as they are. A friend using the same makes of scope, camera and filters got the same result independently.

[Olly's M31 image]

We do have other sources of information on any picture we look at.  Is the sky a neutral dark grey? It should be. Are the stars showing convincing blue or red colours as is appropriate? (It's easy to look up the colour index of a few bright test stars.)

Olly


50 minutes ago, ollypenrice said:

We do have other sources of information on any picture we look at.  Is the sky a neutral dark grey? It should be. Are the stars showing convincing blue or red colours as is appropriate? (It's easy to look up the colour index of a few bright test stars.)

Olly

You were faster on the keyboard.

An image where a reflector was used may show more or stronger colour variation in the stars than an image taken with a good refractor. This is because reflectors push more light into the star halo than a colour-corrected refractor. And for some reason it seems that images where Astrodon RGB filters were used often have more vibrant colours than images taken with other filters. Astrodon filters have a much wider rejected wavelength region between green and red than other brands. These filters create much deeper reds.

It's either a characteristic of the filters or of their proud owners. Or maybe it's just me.


6 minutes ago, wimvb said:

You were faster on the keyboard.

An image where a reflector was used may show more or stronger colour variation in the stars than an image taken with a good refractor. This is because reflectors push more light into the star halo than a colour-corrected refractor. And for some reason it seems that images where Astrodon RGB filters were used often have more vibrant colours than images taken with other filters. Astrodon filters have a much wider rejected wavelength region between green and red than other brands. These filters create much deeper reds.

It's either a characteristic of the filters or of their proud owners. Or maybe it's just me.

Why would a reflector push more light into the star halo, and how would that impact the color of the star? Why would the filter be responsible for "deepness" of the red - it either passes or blocks wavelengths and some signal is recorded. It is up to the processing workflow to assign "color meaning" to that recorded signal.


22 minutes ago, vlaiv said:

Why would a reflector push more light into the star halo, and how would that impact the color of the star?

The central obstruction causes diffraction, and so may any mirror clips or focus tube intrusion. Colour is stronger in this halo because the brightness is less. At high brightness, colours are less saturated.

29 minutes ago, vlaiv said:

Why would the filter be responsible for "deepness" of the red - it either passes or blocks wavelengths and some signal is recorded. It is up to the processing workflow to assign "color meaning" to that recorded signal.

I understand your argument. Maybe it's the blocking action of the filter that separates colours better, giving a "cleaner" image. If there is a gap between the passbands of colour filters, as there is between the green and red Astrodon filters, then obviously those colours that are blocked by both will never be registered. Basically you exclude that colour from the final image. And also, if a blue filter extends further towards the ultraviolet than another, it will register more signal and show more "blue", because during processing the near-UV values are mapped to blue. This doesn't take into account the camera's spectral sensitivity of course. That's my take on it, but I may be wrong.

I just noticed it on images (galaxies mainly) on Astrobin. Hence the:

39 minutes ago, wimvb said:

It's either a characteristic of the filters or of their proud owners. Or maybe it's just me.

 


53 minutes ago, wimvb said:

The central obstruction causes diffraction, and so may any mirror clips or focus tube intrusion. Colour is stronger in this halo because the brightness is less. At high brightness, colours are less saturated.

Again - that is up to processing. The actual brightness of something in the image hugely depends on the level of stretch - much more so than the few percent of possible difference due to scattering - and in reality it's not even that much.

I see the problem as follows:

- most people don't bother with color calibration of their gear - they just compose R, G and B into color without understanding the properties of raw and gamma-corrected color spaces. Most even do the color composing wrong - they compose RGB while in the linear stage and then apply the same level of stretch to those colors. This all leads to lower saturation. Then they need to increase saturation to bring it back.

- people that are good at this tend to have done it longer and have higher-end gear - either because they are ready to invest more in this, or because over their time in the hobby they have accumulated enough valuable gear. But with time you gain experience as well, and that means you get good at making images. The point being - you don't color calibrate, you use high-end gear (like the mentioned Astrodon filters) that imparts a certain "cast" to the image if you don't do proper color calibration - and you make good images that people tend to take as reference work. This leads to setting a "skewed" standard of what a certain galaxy should look like in images. Over time, less experienced imagers "grow up" trying to match the popular belief of what a certain galaxy should look like.

But in reality there is just one way a galaxy really looks - we are not dealing with colorful objects that you can photograph in daylight or under LED lighting and thereby change their colors - we have light sources that always emit the same spectrum, so they have very definite colors (take the same light and boost it enough to produce a color response in the human eye, and it will have definite colors that most observers will agree upon). There is no "artistic freedom" in that - there is a choice in how to render the object, but we can classify it broadly in two categories - right and wrong :D

 


3 hours ago, vlaiv said:

Most even do the color composing wrong - they compose RGB while in the linear stage and then apply the same level of stretch to those colors. This all leads to lower saturation. Then they need to increase saturation to bring it back.

 

That would be me! In fact everyone I know does it that way, so far as I'm aware. I've generally taken the view that it ensures an equivalent stretch on all channels, though the black points may need a small adjustment. I never attempt complex custom stretches using Curves in RGB, it's always a bog standard log stretch in Levels, so I guess a log stretch could be done on the individual channels without creating imbalances up the brightness range.

I'm very interested in your point about losing saturation. I do find, sometimes, that good solid integration times are producing very thin colour. Why does stretching the channels individually increase saturation? And how would you go about it? I'm guessing but one could give each channel a log stretch until the background sky reached 'value x' and then combine them.

Tell us more!

Olly


7 minutes ago, ollypenrice said:

That would be me! In fact everyone I know does it that way, so far as I'm aware. I've generally taken the view that it ensures an equivalent stretch on all channels, though the black points may need a small adjustment. I never attempt complex custom stretches using Curves in RGB, it's always a bog standard log stretch in Levels, so I guess a log stretch could be done on the individual channels without creating imbalances up the brightness range.

I'm very interested in your point about losing saturation. I do find, sometimes, that good solid integration times are producing very thin colour. Why does stretching the channels individually increase saturation? And how would you go about it? I'm guessing but one could give each channel a log stretch until the background sky reached 'value x' and then combine them.

Tell us more!

Olly

It is due to the non-linearity of the histogram stretch.

Imagine we have a very simplified setup - a source of light that produces 10 parts red for one part green light (matched to our camera - like I said, a very simplified setup).

In one second our camera will record 10 units of red and 1 unit of green. In 100 seconds, we will record 1000 units of red and 100 units of green. Even if the light source is very far away - so that we register only 1/1000th of that light - we will still record the same R:G ratio of 10:1, in a one-second exposure or in a 50-second exposure.

It is important to see that the RGB ratio does not change for a particular target if you change the intensity of the light (whether because the star is further away, you use a smaller scope, or a shorter exposure - whatever the reason).

Now let's examine a stretch curve that is non-linear:

[graph: a non-linear stretch curve and a linear transform, mapping input pixel value (X axis) to output value (Y axis)]

The graph above represents a non-linear histogram stretch - it takes a pixel intensity value from the X axis and maps it to the Y axis. I also added a pure linear transform (at a bit less than 45 degrees, so it actually changes pixel values). Let's examine what happens to the original pixel values and the transformed pixel values.

I tried to maintain the 10:1 ratio mentioned previously - so on the X axis we have two vertical lines representing two pixel values, one red and one green. Red is about 10 times more than green (x10 further from the origin). For each of these vertical lines there are two horizontal lines: a dotted one where the vertical line meets the diagonal (the linear mapping), and a solid one where it meets the curve (the non-linear stretch).

If you look where the dotted red and green horizontal lines intersect the Y axis, you can see that those Y values maintain the 10:1 ratio (triangle similarity, if my drawing is poor) - but look at the ratio of the Y values that result from the non-linear stretch, where the solid red and green horizontal lines intersect the Y axis. It is more like 3:1 now, no longer 10:1.

Doing the same nonlinear stretch on all channels at the same time will change the color ratios. In fact, it will change them in a way that "brings them closer together" - closer to a 1:1 ratio. A 1:1:1 ratio of RGB is gray - so whenever you bring colors closer to that ratio and reduce the difference between them, you are losing saturation.
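Here is a tiny numeric illustration of that ratio compression (just a sketch - the 0.25 power is an arbitrary stand-in for whatever non-linear stretch you use):

# linear pixel values with a 10:1 red:green ratio
r_lin, g_lin = 0.10, 0.01

def stretch(x):
    return x ** 0.25   # some non-linear "histogram stretch"

print(r_lin / g_lin)                     # 10.0  - ratio before the stretch
print(stretch(r_lin) / stretch(g_lin))   # ~1.78 - ratio after the stretch, much closer to 1:1

Apply that same curve to all three channels and every color drifts towards gray.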

How do you keep the color ratio in your processing workflow then?

That is quite easy - you need luminance, and you stretch only the luminance.

For LRGB you have luminance. For RGB you can either make a synthetic luminance or use G as luminance, depending on the target and the type of camera used. Most DSLR cameras have a G channel that is made to mimic the human brightness response. If you want luminance as humans would see it, just use the G channel of these cameras (it is also better to use G as luminance with OSC because twice as many pixels collect it as R or B, so it will have better SNR - but like I said, that depends on the target; if you shoot an OIII+Ha target you will be better off just adding R+B).

Once you have the stretched luminance, you multiply it with the scaled R, G and B to get the actual RGB values.

For example, let's go with the 10:1 ratio above. First we scale it, so that Rscaled = R/max(R,G,B), Gscaled = G/max(R,G,B), Bscaled = B/max(R,G,B).

max(10, 1) = 10 (never mind B for now, we are only using R and G).

Rscaled = 10 / 10 = 1

Gscaled = 1 / 10 = 0.1

Now if luminance is boosted by the stretch to 0.5, the actual pixel values will be R = 0.5, G = 0.05. If luminance is boosted to 0.8, then R = 0.8, G = 0.08. If luminance is stretched to the saturation point of 1, then R = 1 and G = 0.1 - we still have the proper ratio. This means that even star cores will not be white but will have proper color - a common problem with a regular stretch is that star cores saturate really quickly and become white.

Above I outlined a workflow that preserves color ratios - this is one part of getting proper color in the image. Other parts include doing a color transform on the raw color data (often called color balance) and, later, applying gamma 2.2 to the linear color data - because images, unless color managed, are expected to be in the sRGB standard. Part of the transform from raw/linear RGB values to the sRGB standard is to apply a gamma of 2.2 at the end (in fact gamma 2.2 is an approximation; there is a precise expression that is a bit more mathematically involved, but for a nice image gamma 2.2 is sufficient).
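Put together, the whole thing is only a few lines. A minimal sketch in Python/NumPy of what I mean (assumptions: a synthetic luminance from the simple mean of the channels, an asinh curve standing in for whatever stretch you prefer, and the plain gamma 2.2 approximation instead of the exact sRGB curve):

import numpy as np

def color_preserving_stretch(r, g, b, stretch):
    # Stretch the luminance only, then re-apply the original R:G:B ratios.
    eps = 1e-12
    lum = (r + g + b) / 3.0                                # synthetic luminance (assumption)
    max_rgb = np.maximum(np.maximum(r, g), b) + eps
    r_s, g_s, b_s = r / max_rgb, g / max_rgb, b / max_rgb  # scaled so the largest channel = 1
    lum_st = stretch(lum)                                  # the only non-linear step
    return r_s * lum_st, g_s * lum_st, b_s * lum_st

stretch = lambda x: np.arcsinh(100 * x) / np.arcsinh(100)  # any monotonic stretch works here
to_srgb = lambda x: np.clip(x, 0, 1) ** (1 / 2.2)          # approximate sRGB encoding, applied last
# usage: r2, g2, b2 = color_preserving_stretch(r, g, b, stretch), then to_srgb() on each channel

Because every channel is multiplied by the same stretched luminance, the R:G:B ratios survive the stretch.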

 


7 hours ago, ollypenrice said:

The astrophysics tells us that the stars of the central bulge are what Walter Baade called 'Population II', so they tend to be older, cooler and redder. […]

Can we just take a moment to appreciate how gorgeous this image is 😂 It is pretty much my idea of imaging perfection. Regarding the Ha you mention - do you mean the bits like I have circled in my quick screenshot? Those are my favourite bits of the image. They bring it to life. Am I understanding you correctly, that those red regions are star-forming regions?

[attached screenshot of the image above with the red Ha regions circled]


1 hour ago, willcastle said:

Can we just take a moment to appreciate how gorgeous this image is 😂 […] Am I understanding you correctly, that those red regions are star-forming regions?

Yes, and emphasized by a very gentle application of an Ha layer used to lighten the red channel. Thanks for your kind comments. On this image I worked mostly on trying to find structure close to the core and on trying to reveal the faintest outer parts of the galaxy.

Olly


3 hours ago, ollypenrice said:

Yes, and emphasized by a very gentle application of an Ha layer used to lighten the red channel. Thanks for your kind comments. On this image I worked mostly on trying to find structure close to the core and on trying to reveal the faintest outer parts of the galaxy.

Olly

It looks astonishing. It's mesmerising. So much detail throughout. I would love to know what scope and camera you used. I'm using a DSLR for the foreseeable future. I noticed you can buy a clip-in Ha filter. I wonder if that would allow me to improve my own images. I am still very new to deep sky photography, however, so maybe I am getting ahead of myself a bit 😅


9 hours ago, willcastle said:

It looks astonishing. It's mesmerising. So much detail throughout. I would love to know what scope and camera you used. I'm using a DSLR for the foreseeable future. I noticed you can buy a clip-in Ha filter. I wonder if that would allow me to improve my own images. I am still very new to deep sky photography, however, so maybe I am getting ahead of myself a bit 😅

Takahashi FSQ106N/Atik 11000 mono CCD camera/Mesu 200 mount for most of the image. The innermost part of the core was enhanced using shorter sub exposures from a TEC140 with the same camera. It's not a high resolution image. For that you should check out Jonas Grinde's huge mosaic or Pieter Vandevelde's recent monochrome rendition.

An Ha filter will only work if your DSLR has been modified. The standard in-camera filters block the red channel at wavelengths just short of the Ha line. The modification involves removing and sometimes replacing this cut-off filter. Then a clip-in filter will pass only the Ha line and it can be recorded by your red-filtered pixels (i.e. a quarter of the total). The blue and green filtered pixels will remain blind to Ha, but still the result can be well worth having.

Olly


15 hours ago, vlaiv said:

It is due to the non-linearity of the histogram stretch. […]

Thanks Vlaiv. I'll go through all this over the weekend and give the separate stretching a try.

Olly


59 minutes ago, ollypenrice said:

Thanks Vlaiv. I'll go through all this over the weekend and give the separate stretching a try.

Olly

 

16 hours ago, vlaiv said:

It is due to the non-linearity of the histogram stretch. […]

Vlaiv, Olly

I don't know for sure, but isn't part of what Vlaiv is describing similar to (or does it have the same effect as) ArcsinhStretch in PixInsight, i.e. it preserves the colour ratio as you stretch?

Dave


1 hour ago, Laurin Dave said:

 

Vlaiv, Olly

I don't know for sure, but isn't part of what Vlaiv is describing similar to (or does it have the same effect as) ArcsinhStretch in PixInsight, i.e. it preserves the colour ratio as you stretch?

Dave

I haven't tried this in PI. More homework!

Olly


1 hour ago, ollypenrice said:

I haven't tried this in PI. More homework!

Olly

Mark Shelley also developed arcsinh stretch for PS, AFAIK. Unless you want to complete your journey to the darkest side. :evil62:


2 hours ago, Laurin Dave said:

 

Vlaiv, Olly

I don't know for sure, but isn't part of what Vlaiv is describing similar to (or does it have the same effect as) ArcsinhStretch in PixInsight, i.e. it preserves the colour ratio as you stretch?

Dave

Arcsinh stretch preserves colour, but I'm not sure it preserves colour ratios.
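If the method computes the stretch factor from a luminance value and applies it equally to all three channels, then it would preserve the ratios by construction. A rough sketch of that idea (my reading only, not the actual PixInsight implementation):

import numpy as np

def arcsinh_stretch(r, g, b, beta=100.0, eps=1e-12):
    # One scale factor per pixel, derived from luminance and applied to all
    # channels, so the R:G:B ratios are left unchanged.
    lum = (r + g + b) / 3.0
    scale = np.arcsinh(beta * lum) / (np.arcsinh(beta) * (lum + eps))
    return r * scale, g * scale, b * scale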


19 hours ago, ollypenrice said:

I'm very interested in your point about losing saturation. I do find, sometimes, that good solid integration times are producing very thin colour. Why does stretching the channels individually increase saturation? And how would you go about it? I'm guessing but one could give each channel a log stretch until the background sky reached 'value x' and then combine them.

Tell us more!

Olly

Adam Block has a tutorial on LRGB combination which is not restricted to PixInsight, even though he uses it to make his point. It shows his workflow to preserve colour.

https://adamblockstudios.com/articles/Fundamentals_LRGB

19 hours ago, vlaiv said:

Doing the same nonlinear stretch on all channels at the same time will change the color ratios. In fact, it will change them in a way that "brings them closer together" - closer to a 1:1 ratio. A 1:1:1 ratio of RGB is gray - so whenever you bring colors closer to that ratio and reduce the difference between them, you are losing saturation.

Yes, stretching will reduce colour. If you stretch hard enough, eventually any colour will turn into white, because the dominant colour will max out while the other colours can be stretched further.

 

In the process of producing pleasing images, we apply methods to balance colours, such as background neutralisation and (photometric) colour calibration, but then we use tools such as SCNR or Hasta La Vista Green, as well as selective colour saturation tools. The image will not be scientifically correct anymore after post-processing. The processing is not limited to the saturation of colours but also involves contrast. These two aspects work together to form the final image.

What many of us do - the rendering of a natural scene to create an aesthetically pleasing image that will move the viewer - is often closer to art than it is to science. I admire the physics that creates the subject, and the mathematics in image processing appeals to me. That's why I like PixInsight. But the final image is no more a true rendition of reality than an impressionist's painting of a pond and trees. And while I think it is important to understand how colours are affected during the various stages of processing, I do not let that limit me when working towards a final image.

