
Am I a cheat? A question of morals.


George Gearless


"You spend hours fiddlin' about with photos and 'cheating"

"I enjoy astro-banter with my partner who often spends a freezing hour or two in the dark with me only to accuse me of 'cheating' when she sees the resulting images.  I freely admit to: upping colour saturation, removing stars or at least making them smaller, stretching nebulosity not in a good way and removing dust motes and crud etc. etc.   I sometimes like to try and capture what I perceived it looked like on the night in the dark but often I enjoy trying to get it to look like images in coffee table astro-books and magazines.  I am very interested in science but there was a reason why I opted for an Arts Degree - I do like a bright  picture in 'technicolor', I'm not good with rules and routines and although I admire the amazing images posted on this forum by dedicated and skilled imagers,  I rejoice in my lack of scientific rigour and application"!  -  Mr Toad of Toad Hall.


3 hours ago, carastro said:

I think it is Sara.  Agreed, when you have had lots of practice and have everything sorted - all the spacing correct, no flex anywhere, and a permanent setup - there is less skill in it. But it's a huge learning curve in the first place, and for those who have to set up every time it is more of a challenge. Even for me, who has been doing it for nearly 10 years, it's sometimes a challenge finding a target in the first place; I frequently don't GOTO the right place, and this is the fruit of not having everything permanently set up and not having the skills necessary to overcome cone error or to use plate solving (yet), among other things.

Then there is getting your spacing right and getting the guiding working well. Using your judgement based on the sky conditions and the location you are imaging from takes experience and a certain amount of know-how. Overcoming IT foibles.

Yes, I agree that good processing is a huge skill as well and needs time to learn, but you can't make a silk purse out of a sow's ear; if your original images are not good for some reason, no amount of good processing is going to fix that.

Carole 

Like Sara I don't think there is much skill in the capture stage. In the last seven years I haven't changed my capture procedure at all and don't see myself changing it in the next seven years. So I can't find that bit interesting. (I can find it expensive, though! :icon_mrgreen:) However, I have made countless changes to my processing methods and hope to continue to do so. Change and development are interesting, especially when you can't put your finger on what you've done differently. I'll look at a five year old image and think, Uh-Oh, what were you thinking of? But I won't know what it is I'd do differently now.

To be honest I'm not all that motivated by the idea of possession, either. I have this passion for the objects out there, discovering them, rendering them in good pictures.  But I think they belong to nature, not to me.

Olly


Quote

Like Sara I don't think there is much skill in the capture stage. 

But you and Sara have permanent rigs; it's much easier to fine-tune everything when you don't have to keep moving it all. I guess then the imaging part would become boring.

Carole 


1 hour ago, ollypenrice said:

Like Sara I don't think there is much skill in the capture stage. In the last seven years I haven't changed my capture procedure at all and don't see myself changing it in the next seven years. So I can't find that bit interesting. (I can find it expensive, though! :icon_mrgreen:) However, I have made countless changes to my processing methods and hope to continue to do so. Change and development are interesting, especially when you can't put your finger on what you've done differently. I'll look at a five year old image and think, Uh-Oh, what were you thinking of? But I won't know what it is I'd do differently now.

To be honest I'm not all that motivated by the idea of possession, either. I have this passion for the objects out there, discovering them, rendering them in good pictures.  But I think they belong to nature, not to me.

Olly

Olly, you want to try cloud dodging with a portable setup under unpredictable UK skies.

I am convinced there's a lot of skill in achieving better results, as sometimes my results are much better than at other times, and it's almost always down to my mistakes.


1 hour ago, ollypenrice said:

Like Sara I don't think there is much skill in the capture stage. In the last seven years I haven't changed my capture procedure at all and don't see myself changing it in the next seven years. So I can't find that bit interesting. (I can find it expensive, though! :icon_mrgreen:) However, I have made countless changes to my processing methods and hope to continue to do so. Change and development are interesting, especially when you can't put your finger on what you've done differently. I'll look at a five year old image and think, Uh-Oh, what were you thinking of? But I won't know what it is I'd do differently now.

To be honest I'm not all that motivated by the idea of possession, either. I have this passion for the objects out there, discovering them, rendering them in good pictures.  But I think they belong to nature, not to me.

Olly

I think it's down to what people prefer and find more interesting / challenging, rather than a comparison of skill levels.

You are in your "comfort" zone with imaging gear and acquisition. It is not that you can't move forward with it - you don't feel the need for it (or can't justify the expense). You also find more satisfaction and enjoyment in processing images, so you regard the acquisition stage as a "necessary evil", or rather something you need to do to get the data that you enjoy playing with.

If you needed more data to play with, it would surely prompt you to move forward with your setup - why not go deeper, or sharper, or larger, or add more scopes? Someone else will find their challenge and satisfaction on that side, and be happy with less than optimal processing, because they are driven by the quality of the data they can acquire.


6 minutes ago, Stub Mandrel said:

Can someone explain why it's OK to use long exposures to reveal detail we can't see with our eyes, but some people find it dubious to bring out colour that we can't see?

I also have an objection on the color side of things, but I'm not sure what you are aiming at with this.

If it's saturation, I can give my view on the topic, and also address my own concern / objection.

There is both a difference and a relationship in how brightness and color work together. I object to the often-heard statement that color in astro imaging is arbitrary (either because it is hard to see, or because it is "not well defined").

The brightness of an object changes with distance, and because of the vastness of space, astrophotography encompasses huge dynamic ranges. Our display devices don't have the means to reproduce such dynamic ranges, and our perception and ability to clearly see and examine detail is better if the dynamic range is reduced. This is why we stretch brightness - to aid our perception.
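To make that concrete, here is a minimal sketch in Python of what such a stretch can look like - the asinh curve and the stretch factor are just one illustrative choice, not anyone's actual workflow:

import numpy as np

def asinh_stretch(linear, stretch=1000.0):
    # Map linear intensities spanning a huge range into 0..1 for display.
    scaled = linear / linear.max()
    return np.arcsinh(stretch * scaled) / np.arcsinh(stretch)

# Toy data spanning five orders of magnitude, like faint nebulosity next to a bright core.
image = np.array([1.0, 10.0, 1000.0, 100000.0])
print(asinh_stretch(image))  # faint values are lifted, bright values compressed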

Nothing in physics changes the color of light from objects - it is very well defined and not impacted by other parameters (except perhaps redshift for very distant objects, or extinction from interstellar matter, but these are fringe cases) - so there is a very real color out there. It is our perception of the color that changes, but it is understood how that happens, and we can use our understanding to accurately present colors as they would be seen under certain circumstances.

Doing color balancing or saturation by feel could be considered cheating if we were being pedantic about it - one is changing a physical property of the captured object (the light it gives off), akin to changing its size. Still, I don't consider that to be cheating, and I think most people agree that it should not be considered cheating. On the other hand, given the choice, I would prefer to process my images in such a way as to preserve this information.

Granted, one could apply the same argument to brightness (again a physical property), but we have some "freedom" there - one can choose how to interpret the change: as making things brighter or closer :D

 


56 minutes ago, vlaiv said:

I also have an objection on the color side of things, but I'm not sure what you are aiming at with this.

If it's saturation, I can give my view on the topic, and also address my own concern / objection.

 

Colour is the most subjective, although I hear your comment that it can be objective. But one of my eyes perceives colours as a cooler colour temperature than the other. Which one is right? I also have very, very slight red/green colour blindness that only affects dull brown and olive colours.

My approach is, generally, to balance the background and then just raise the saturation using various subtle approaches. To the eye not many stars are as clearly orange or blue as Rigel and Betelgeuse.

The real issue is with nebulae; I really can't say what the 'right' colour is. RGB imagers use filters with steep sided curves so any single-wavelength source comes out as one of pure red, green or blue. No difference between yellow-orange and deep red.  Don't believe me? Use astro RGB filters to take an image of a true spectrum.

Black-body sources like stars are also represented unrealistically, although they look more 'normal'. These images are, colour-wise, really as false-colour as narrowband images. So are images made with OSC filters that mimic the human eye's response more accurate? Of course not, because every camera seems to have a different response curve, even from the same manufacturer. Who can say which camera is correct? The only way to know the true colour of a star is spectroscopy, and even if you use a star of known spectral class to calibrate your image, the vagaries of filters mean that this will actually only be spot on for other stars of the same class.

IMHO, of course.


4 minutes ago, Stub Mandrel said:

Colour is the most subjective, although I hear your comment that it can be objective. But one of my eyes perceives colours as a cooler colour temperature than the other. Which one is right? I also have very, very slight red/green colour blindness that only affects dull brown and olive colours.

My approach is, generally, to balance the background and then just raise the saturation using various subtle approaches. To the eye not many stars are as clearly orange or blue as Rigel and Betelgeuse.

The real issue is with nebulae; I really can't say what the 'right' colour is. RGB imagers use filters with steep sided curves so any single-wavelength source comes out as one of pure red, green or blue. No difference between yellow-orange and deep red.  Don't believe me? Use astro RGB filters to take an image of a true spectrum.

Black-body sources like stars are also represented unrealistically, although they look more 'normal'. These images are, colour-wise, really as false-colour as narrowband images. So are images made with OSC filters that mimic the human eye's response more accurate? Of course not, because every camera seems to have a different response curve, even from the same manufacturer. Who can say which camera is correct? The only way to know the true colour of a star is spectroscopy, and even if you use a star of known spectral class to calibrate your image, the vagaries of filters mean that this will actually only be spot on for other stars of the same class.

IMHO, of course.

This is a broad subject, but I'll address some of the things you said.

You are right about recording devices, but we need to consider display devices (and print) as well. Both sensors and displays have something called a gamut - the part of the complete color space (meaning all the colors that the average human is able to distinguish as separate colors) that they can record / reproduce.

Take, for example, pure spectral colors - single wavelengths. Standard trichromatic displays (sRGB gamut) can't display any of them properly, but standard sensors can record them "properly".

It is also interesting to think about pure colors from the point of view of the "steep-sided" filters mentioned above. Take a look at the following diagram:

[Image: normalized sensitivity curves of the short, medium and long human cone cells]

This is the sensitivity of the human cone cells (three different types: short, medium and long) - the graph is normalized (the peaks actually have different intensities for the three types).

Now let's consider two different light sources: one at 680nm and one at 690nm. Will you be able to tell the difference between those two colors? The answer is yes - a very small difference, but one should be able to tell.

But what is important to understand is the following: if an observer were presented with two light sources, both at 680nm, the first of slightly higher intensity than the second, the observer would also conclude that the two colors are different - and they would not be able to tell whether the difference is due to intensity or wavelength.

This points to a very important phenomenon - color perception depends on the intensity of the light, not just the mixture of wavelengths.

This is where the sensor response curve comes in handy, even if the filters are sharp-edged. Two different red wavelengths will be recorded at different intensities. To our perception of color it does not matter whether it is the same wavelength at different intensities or two different wavelengths at the same intensity - we will see them as different colors, and the sensor will record "different colors".

[Image: swatches of the same red hue at different intensities]

The image above shows colors made only from red, but at different intensities - we can all agree that we see different colors (the same hue, but different colors).

If one can transform the sensor/filter response to these matching functions via a suitable transform:

[Image: CIE XYZ standard observer colour matching functions]

then one has a full-gamut sensor. Even so, an image recorded with such a sensor would look "flat" on a monitor that is only sRGB - because it would be the monitor that lacks the colour reproduction capability.

If we accept the limitations of both technologies - and I believe the bottleneck is in displays rather than cameras; even my DSLR has a choice between a wide-gamut recording mode and sRGB, but most displays are sRGB (there are wider-gamut ones, but they are rare and not many people use them, and even sRGB displays fall short of covering the full sRGB gamut in some cases) - then I would say one has the ability to capture proper color and display it (within gamut bounds).

One has to be careful how one processes images as well, to account for all of that - but I believe it can be done (transforming to the XYZ and LAB spaces, separating chromaticity from luminance while accounting for luminance in the chromaticity, then stretching only the luminance, or replacing it with the L channel, and then properly applying the chromaticity back to the luminance with luminance "modulation" to represent color).
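As a rough sketch of one simple variant of that idea - keeping the R:G:B ratios (the chromaticity) fixed while stretching only a luminance estimate; this is not the full XYZ/LAB route described above, and the numbers are purely illustrative:

import numpy as np

def stretch_luminance_only(rgb_linear, stretch=1000.0):
    # rgb_linear: array of shape (..., 3) holding linear-light RGB values.
    # Luminance estimate from linear RGB (Rec. 709 / sRGB weights).
    y = rgb_linear @ np.array([0.2126, 0.7152, 0.0722])
    y_stretched = np.arcsinh(stretch * y / y.max()) / np.arcsinh(stretch)
    # Scale each pixel's R, G and B by the same factor, so chromaticity is unchanged.
    gain = np.where(y > 0, y_stretched / np.maximum(y, 1e-12), 0.0)
    return rgb_linear * gain[..., np.newaxis]

# Toy pixels: a faint reddish pixel and a bright bluish one.
pixels = np.array([[0.002, 0.001, 0.0005],
                   [0.8, 0.6, 1.0]])
print(stretch_luminance_only(pixels))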

By the way, this graph nicely shows how many colors we miss out on when using sRGB monitors:

[Image: CIE chromaticity diagram showing the sRGB triangle inside the full gamut]

(pure spectral colors lie on the outer edge of the full gamut space - this clearly shows that no sRGB device will display any of the rainbow colors correctly)

 


I think, from a "philosophical" viewpoint, the hardware selection and (software) pre/post-processing of scientific data are ALL legitimate. It is considered good practice to show/describe your methodology.

Hmmm... The whole of Particle Physics in just one plot? lol.

[Image: particle physics summary plot]

But analogous to polynomial gradient subtraction in astro imaging.

Perhaps it is harder for amateur astronomy, where the *diversity* of methods is inherently large and individual. But then conventions do exist... like the "Hubble Palette" (which I personally don't like, but...)!

P.S. The above is purely conversational... and a personal reflection. No one should FAKE data, but accusing people of "cheating", if it is simply presenting a *pleasing* image, seems a bit extreme to me...


7 hours ago, vlaiv said:

This is where the sensor response curve comes in handy, even if the filters are sharp-edged. Two different red wavelengths will be recorded at different intensities. To our perception of color it does not matter whether it is the same wavelength at different intensities or two different wavelengths at the same intensity - we will see them as different colors, and the sensor will record "different colors".

Hi Vlaiv, no, sharp-edged filters can't differentiate. What will these filters tell you at 680 and 690nm? Or even 600 and 690nm?

[Image: transmission curves of a set of astro RGB filters]


On my first astronomy course, UCLAN's Introduction to Astronomy, one of the tasks was to go out and estimate the colour of ten stars, putting them in order from coolest to hottest based on their colour. I was expecting this to be harder than it turned out to be, but I got the order right (i.e. in accordance with the astrophysics) with the exception of two stars, adjacent in my answer, which I had the wrong way round. So I don't think star colour is all that controversial, unless I'm missing something.

In broadband imaging there will clearly be differing renditions of colour temperature, intensity and fine nuance but, surely, a broad agreement is possible. Take the cores of spiral galaxies. If we look into the core of our own we get this:

[Image: Milky Way core region around Barnard 111]

or this:

[Image: Sagittarius triplet mosaic]

Surely this is in broad agreement with what we see of the cores of very remote galaxies?

[Image: NGC 4565]

[Image: M95 (crop)]

The golden results of dust-reddening strike me as confirming each other in these images and I can't accept that the colours we record are arbitrary. They are not precise but they offer a good approximation, I think, of what we'd see if we had far more colour-sensitive eyes.

Olly

 


1 hour ago, ollypenrice said:

In broadband imaging there will clearly be differing renditions of colour temperature, intensity and fine nuance but, surely, a broad agreement is possible.

Absolutely agreed, but 'broad agreement' is my point.

Note that because stars are effectively black-body radiators (we can ignore spectral lines in the context of broadband imaging), astro RGB filters will give reasonable results. Where they struggle is in differentiating narrowband sources: anything longer than the sodium lines (greeny-yellow) gets rendered as red, i.e. any narrowband source from yellow to infrared is rendered the same colour.


30 minutes ago, Stub Mandrel said:

Anything longer than the sodium lines gets rendered as red...

Apologies for selective quoting. But I share your circumspection... Spirit of inquiry? There is a LOT of discussion on how to represent scientific colour on, e.g., a web page. My general conclusion is that it is "non-trivial", as those scientists are wont to say...

I do think we (happily) deal in qualitative rather than quantitative results (mostly)?


3 hours ago, Stub Mandrel said:

Hi Vlaiv, no, sharp-edged filters can't differentiate. What will these filters tell you at 680 and 690nm? Or even 600 and 690nm?

[Image: transmission curves of a set of astro RGB filters]

Let's take 680 and 690nm as an example. The filters will let them through in equal measure (let's simplify and ignore the percent or less of difference for the purpose of the argument).

Filters are not the recording device - filters are "selectors". The actual recording device will have a sensitivity graph similar to this:

[Image: typical camera sensor sensitivity curve across the visible range]

The actual detection sensitivity of the system (sensor + filter) will be those two curves multiplied. If we assume red to be a sharp 600-700nm band at 100% transmission in the filter, this is the same as looking at the sensor graph above in just that region (other values will be 0).

From the graph above it can easily be seen that for two monochromatic light sources of the same intensity, one at 680nm and the other at 690nm, the sensor will record two different values. It can differentiate between two sources of different wavelength and the same intensity. It will also differentiate between two monochromatic sources of the same wavelength but different intensity.
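A toy numerical version of that argument (the quantum efficiency and filter curves below are invented purely for illustration, not real sensor data):

import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm
# Sharp-edged "red" filter: 100% from 600 to 700nm, 0% elsewhere.
red_filter = np.where((wavelengths >= 600) & (wavelengths <= 700), 1.0, 0.0)
# Made-up sensor sensitivity that falls off toward the red end.
sensor_qe = np.clip(0.9 - 0.004 * (wavelengths - 500), 0.05, 0.9)
# Combined response of sensor + filter is the two curves multiplied.
system_response = red_filter * sensor_qe

def recorded_signal(wavelength_nm, intensity=1.0):
    idx = np.argmin(np.abs(wavelengths - wavelength_nm))
    return intensity * system_response[idx]

print(recorded_signal(680))  # about 0.18 with these made-up curves
print(recorded_signal(690))  # about 0.14 - a different value for the same intensity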

What it can't differentiate is which of the two cases it is (two wavelengths at the same intensity, or one wavelength at two intensities). But this is the same thing the human eye (or eye/brain system) does!

We also can't tell whether two sources are the same wavelength at different intensities (look at the example above with red - those four colors are made up of exactly the same wavelengths; pixels can't change their frequency response, but they can change intensity) or different wavelengths at the same intensity.

Thus the sensor/filter system is capable of making the same distinctions as the human eye/brain. If it can't do that for all combinations, we say that it does not have the complete gamut that the human eye has, so it will not be able to distinguish some things that the human eye/brain can. But the human eye/brain system also can't distinguish some colors that some sensors can - for example, there is no way we can tell the difference between 680 and 680.1nm; it will be the same color to us (this is just a monochromatic example, but any intensity/wavelength combination that triggers our cells in the same amounts we will see as the same color, although physically different), while a spectroscope with enough precision will be able to tell.

So we can indeed use a camera/filter system to build a sensor that covers a certain gamut, and I would argue that such a combination covers a wider gamut than displays are able to reproduce. So we have to go by the lowest common denominator here and work with the sRGB gamut, for example, if we want our images to represent true colors within the limitations of the most common display technology.


1 hour ago, vlaiv said:

What it can't differentiate is which of the two cases it is (two wavelengths at the same intensity, or one wavelength at two intensities). But this is the same thing the human eye (or eye/brain system) does!

We also can't tell whether two sources are the same wavelength at different intensities (look at the example above with red - those four colors are made up of exactly the same wavelengths; pixels can't change their frequency response, but they can change intensity) or different wavelengths at the same intensity.

The eye determines colour by the ratios of different signals; the overlapping frequency responses are a near-optimal solution to a problem. The really clever thing is not that different colours are a particular ratio of two signals but a unique ratio of three signals, independent of luminosity.

For example we detect violet not so much because it triggers our blue detectors, but because it hardly registers on the red and green ones.

You can trick the eye with a trichromatic light source much of the time, but because normal displays generate blue light at a frequency that also triggers the green and red cones, true violet lies outside their gamut.

The extraordinary thing is how crudely you can balance R, G and B and still achieve so many colours.

 


23 minutes ago, Stub Mandrel said:

The really clever thing is not that different colours are a particular ratio of two signals but a unique ratio of three signals, independent of luminosity.

This is not quite true - the "independent of luminosity" part is wrong; here is a quick example:

[Image: two colour swatches with identical R:G:B ratios but different intensities]

These two distinct colors have exactly the same ratio of R, G and B, yet we see them as different colors. The first one is 200:80:20, the second one is 100:40:10 (the same 10:4:1 ratio).

The sRGB color space is not linear in its changes - this is why there are the CIE LAB and CIE LUV color spaces: LAB is intended to be perceptually linear, while LUV is designed to be linear in addition (as in the addition of light).
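A quick check with the Python standard library confirms that the two swatches share exactly the same hue and saturation and differ only in value (brightness) - yet we clearly perceive them as different colors:

import colorsys

swatch_bright = (200 / 255, 80 / 255, 20 / 255)  # 200:80:20
swatch_dark = (100 / 255, 40 / 255, 10 / 255)    # 100:40:10 - same 10:4:1 ratio

print(colorsys.rgb_to_hsv(*swatch_bright))  # (hue, saturation, value)
print(colorsys.rgb_to_hsv(*swatch_dark))    # identical hue and saturation, half the value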

The CIE XYZ matching functions best describe the human eye's response and cover the whole gamut:

[Image: CIE XYZ colour matching functions]

Note the very small region of overlap for all three components.

It is important to remember that we don't want to reproduce the spectral density of the source - only the psychological response that we define as color. If for different things (distributions of wavelengths and intensity under certain circumstances) we get the same psychological response, we see the same color.

Color is not a physical property. There are even impossible colors - ones that no light can produce but that people are still able to see (this can happen because color is a product of our brain, and under certain circumstances our brain sees a color that is not due to the physical nature of light but to other aspects of how our brain works) - here are more details on that subject:

https://en.wikipedia.org/wiki/Impossible_color

 


1 hour ago, vlaiv said:

This is not quite true - the "independent of luminosity" part is wrong; here is a quick example:

We are indeed getting into the psychology of colour as much as the physics and biochemistry; the exact shade we perceive depends greatly on what colour it is seen against, which is why colour matching of swatches is always done against a standard 15% grey background.

Google 'colourscape' - it appears to still be going. It visited Aberystwyth when I was there in the early 80s, but I never went, unfortunately.


Just wanted to apologize to everyone for derailing the thread. Although the discussion took a technical turn, I think it is safe to say that adjusting color is not considered cheating by most of us?

This "small" digression into color recording / perception has been inspiring for me - I just figured out that I want to do a bit of research into sensor / filter gamut and possibly create software that will take the curves of sensor/filter responses and calculate the gamut and a transform to CIE XYZ from them (probably least-squares fitting of some form of transform, not necessarily linear).
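As a very rough sketch of the linear version of that fit (all the data below is a hypothetical placeholder; a real tool would load measured sensor/filter curves and the CIE tables, and might need a non-linear transform):

import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm sampling grid
n = len(wavelengths)

# Placeholder curves, shape (n, 3): columns are the R, G and B responses of the
# sensor/filter combination at each wavelength (a real tool would load measured data).
sensor_rgb = np.random.rand(n, 3)

# Placeholder CIE XYZ standard observer matching functions on the same grid,
# shape (n, 3): columns are x-bar, y-bar, z-bar.
cie_xyz = np.random.rand(n, 3)

# Find the 3x3 matrix T that best satisfies sensor_rgb @ T = cie_xyz in the least-squares sense.
T, residuals, rank, _ = np.linalg.lstsq(sensor_rgb, cie_xyz, rcond=None)

print("RGB -> XYZ matrix:")
print(T)
print("residual error per XYZ channel:", residuals)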


14 hours ago, Macavity said:

No one should FAKE data, but accusing people of "cheating", if it is
simply presenting a *pleasing* image, seems a bit extreme to me...

I took a pic' of my wife and her sister from behind, looking out to sea, and she suggested I could airbrush it so her rear looked smaller. I was horrified, but she said it was no different to my astro image processing :D

Dave


This topic always generates an interesting thread.

On the equipment front, I cannot agree that using top-end equipment is not 'real'. I'm fortunate to own some high-end equipment, and I chose to spend my hard-earned cash on it because I want to capture the best quality data I possibly can. If I could persuade Mrs Tomato to move to the Atacama desert I would use it there, but sadly she won't move from the cloudy UK, so I am going to move close to some of the best skies the UK has to offer.

I do love the data capture side of the process. If you take down and set up each time it must be a skill, because after a lay-off of a few weeks I sure as hell cannot do it as well as if I were going out every night.

I need the best data possible because I don't get fired up by image processing. For me, the best image is one that requires the minimum of processing; alas, I'm always battling poor SNR with minimal integration times, so I am always in "silk purse out of a sow's ear" mode when processing, which for me is never very rewarding.

As to the art vs science debate: before photography was invented, images were drawn, which obviously had a subjective element, but this data was, in my view, rightly considered scientifically valid, even though it resulted in some erroneous conclusions (e.g. the Martian canals). If you want more 'accurate' photographic renditions of what's visible, we could go back to emulsion film, which I wouldn't want to do for all the stars in the universe.


16 hours ago, Davey-T said:

I took a pic' of my wife and her sister from behind, looking out to sea, and she suggested I could airbrush it so her rear looked smaller. I was horrified, but she said it was no different to my astro image processing :D

Dave

Grounds for divorce!

:icon_mrgreen:lly


I was thinking about that concept of ownership of a photo.

A few years back, when my own equipment still left a lot to be desired, I fancied a go at processing some good data for a change, and narrowband at that, so I went to the NASA Hubble archive site and downloaded a bunch of raw data to process. The data was in varying states; I had about eight different filters' worth of stuff to use, so it was a proper full processing job, very enjoyable, and the end result undeniably beautiful. I put a caption right on the photo saying what it was, so that there was never any confusion.

However, not once did I ever feel like it was my photo. In fact, in the end I found it embarrassing, as if I were passing it off as my own work - I'd be scrolling through my images showing someone new: "Wow, look at that one!" "Oh no, that one's not mine, I processed it from Hubble data." "Oh." I ended up deleting it from my Flickr account.

 

[Image: Cat's Eye Nebula processed from Hubble archive data]

 

I imagine I'd feel the same if I were renting time on a remote rig - pay my money and get back a set of perfect subs in the post. In fact, I reckon I'd end up sending in odd coordinates just so I could check they actually took it for me and didn't just mail me the same data as the last person who asked for the Cat's Eye.

 

Conversely, over on the Beginners forum, we occasionally get people posting to ask for processing help - they attach a link to a set of raw data and then various people have a go, bringing back various renditions of a final product. Are all those renditions still the OP's photo? Yes, 100%, every time!

