
Astrophotography pop art - critique of the process


vlaiv


How did we get to this?

[image: Andy Warhol-style pop art panel]

Any resemblance?

[image: montage of the M81/M82 renderings from the competition]

I do get that different people will produce different images - but this is all from the same data, so how come there are such different color results?

You can see original works here:

 

  • Like 1
Link to comment
Share on other sites

What people individually like to see sometimes outweighs astrometric and photometric data, among other reasons. And this is completely fine once disclosed (whether an RGB or a narrow-band combinatorial representation). If color were removed and the same individuals were to reprocess, it would be interesting to see the differences - structure and possibly detail no longer 'fighting' with the color representation.

e.g. IFN, in my opinion only, confuses some galaxy imaging, as from my previous imaging life it was just Milky Way noise that interfered with spectra and with star HDR and PSF measurements. It is nice to bring it out because it is there, but it becomes odd (again, for me) to see IFN brought out when galaxies are in the frame and the galaxy rendering is 'sacrificed' a little for the sake of the material within our own galaxy - it's not easy to do so well that both are delicately normalised non-linearly with respect to each other. Sometimes it's not clear which target (a good example is the IFN and M81/M82 here) is preferred by the imager. IFN around stars makes for beautiful images I think, but again that is my personal preference, Polaris being a good one for deep images.

I was talking to a friend recently about data plots at work, and the suggestion was to change the color palette for the spectral lines, since some are not clear as she is color blind (specifically deuteranopic - green weakness). Those with protanopia (red weakness) and tritanopia (blue/yellow weakness) may see things differently too, and I wonder if that plays a role sometimes? For our spectra, we switched to gradients from brown towards blue, which worked for her.
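On the palette point, a minimal matplotlib sketch of the kind of change described - drawing the spectral lines with colors sampled from a perceptually ordered, color-vision-deficiency-friendly map (here 'cividis') instead of the default cycle; the spectra are dummy data:

```python
import numpy as np
import matplotlib.pyplot as plt

wavelength = np.linspace(400, 700, 300)
centres = (450, 520, 590, 650)
spectra = [np.exp(-0.5 * ((wavelength - c) / 25.0) ** 2) for c in centres]

# Sample evenly spaced colors from a CVD-friendly colormap instead of the default cycle.
colors = plt.cm.cividis(np.linspace(0.1, 0.9, len(spectra)))

fig, ax = plt.subplots()
for spec, col, centre in zip(spectra, colors, centres):
    ax.plot(wavelength, spec, color=col, label=f"{centre} nm line")
ax.set_xlabel("Wavelength (nm)")
ax.set_ylabel("Relative intensity")
ax.legend()
plt.show()
```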

Then there is focus. I have seen some images that look (consistently) out of focus, but possibly perfect to the imager. Those images had exactly the same level of defocus each time, even though an autofocuser was used with a best-score parabolic curve estimate etc. It was in processing that the reduction in sharpness was missed or introduced, consistently, based on their eyesight.

Lots of reasons for the stark differences I guess - the spice of life, I suppose. And then some of us just like to make the image as we want it, and ignore the way it 'really' is (if you could hear me speak that last clause, I assure you I mean no accusatory tone about color palette choice!).

  • Like 1
Link to comment
Share on other sites

My critique was specifically about "choice of color palette" as you put it.

These images are the result of processing the same data. If the captured data has a certain ratio of R, G and B (for each pixel), regardless of whether that ratio is the correct one (whether it has been color calibrated or not) - how can it end up as vastly different color?

The interesting part is that this is not a case of one odd image showing different color from the rest - they all show different color in one part of the image or another.

 

Link to comment
Share on other sites

Well, I’d venture there are no external or self-imposed constraints on varying the colour, and as the adjustments that can be made in the software are many and varied, this is the result.

I found this IKI dataset particularly challenging, trying to balance the IFN with the galaxies, which might have something to do with it. I produced several versions before arriving at the one I posted. I do like processing luminance data to tease out faint and fine detail, and to achieve optimum brightness, contrast etc, but colour processing leaves me cold. This might sound like heresy, but if someone could produce a standard workflow that gave an ‘authentic’ colour result every time I’d be happy to just press that button. Of course that is never going to happen because all of our raw data is unique. I find the ‘auto’ settings for colour adjustments usually produce a result far away from what is regarded as acceptable, particularly with LRGB data, NB being more subjective anyway.

It’s not just different individuals’ takes on the same data. Here are a couple of M31 images which, I assure you, use the same OSC data. The second one was processed about 12 months after the first, thinking the extra time would have honed my processing skills a bit, but I actually still prefer the first one.

[image: M31, first processing]

[image: M31, second processing]

 

That’s why I prefer image capture to processing, much more straightforward.😉

Link to comment
Share on other sites

5 minutes ago, tomato said:

Of course that is never going to happen because all of our raw data is unique.

I think this is just a perpetuated myth in AP.

All cameras have unique raw data - yet they manage to reproduce color without much trouble.

Look at this image for example:

[image: a color wheel displayed on a screen, photographed with a smartphone]

I took this image to show how easy it is to reproduce color. I displayed a color wheel on my computer screen, pointed one smartphone at it (what is shown is actually the real-time view of what that camera was seeing), and used another smartphone to take the image above.

Without any special calibration or color balance (you can see that the ambient light is fairly warm), these devices were able to reproduce the colors fairly accurately. There might be some saturation and brightness variation - but the colors match. It's not as if blue has been replaced with red or whatever.

If smartphones can easily do it, why do we have such a hard time representing color accurately (maybe because smartphones are, well, smart? :D ).

  • Like 1
Link to comment
Share on other sites

1 hour ago, GalaxyGael said:

What people individually like to see sometimes outweighs astrometric and photometric data, among other reasons. And this is completely fine once disclosed (whether an RGB or a narrow-band combinatorial representation). If color were removed and the same individuals were to reprocess, it would be interesting to see the differences - structure and possibly detail no longer 'fighting' with the color representation.

My view is that if all one wants is a pretty picture (and that is fine by me), the gloves are off when it comes to munging your data. De gustibus non disputandum est.

If you want something which is of scientific value, you should do as little damage to the raw data as possible, IMO.

I prefer science to art, but that is just me.

Link to comment
Share on other sites

7 minutes ago, Xilman said:

My view is that if all one wants is a pretty picture (and that is fine by me), the gloves are off when it comes to munging your data. De gustibus non disputandum est.

If you want something which is of scientific value, you should do as little damage to the raw data as possible, IMO.

I prefer science to art, but that is just me.

What about something in between?

I often read here on SGL that people starting with astrophotography want to just capture what is there and share that with their friends and family.

We can't say that such images are necessarily taken for their scientific value, but when I read the above - I can't help but think of the vacation photos people take. Say you were in Greece on your summer holiday and you went to the Acropolis of Athens. You want to share that experience with your friends and family, so you take an image of, say, the Parthenon.

What is the image you want to share with your friends and family?

This one:

[image: the Parthenon, natural colours]

Or this one:

[image: the Parthenon, heavily colour-shifted]

If we attribute it all to artistic freedom, when it might just be a lack of knowledge of the proper processing workflow - are we doing a disservice to others who come after us and look up to current work as a reference?

It is no wonder that myths like "no correct color" are being perpetuated.

  • Like 4
  • Haha 1
Link to comment
Share on other sites

4 minutes ago, vlaiv said:

What about something in between?

I often read here on SGL that people starting with astrophotography want to just capture what is there and share that with their friends and family.

We can't say that such images are necessarily taken for their scientific value, but when I read the above - I can't help but think of the vacation photos people take. Say you were in Greece on your summer holiday and you went to the Acropolis of Athens. You want to share that experience with your friends and family, so you take an image of, say, the Parthenon.

What is the image you want to share with your friends and family?

This one:

[image: the Parthenon, natural colours]

Or this one:

[image: the Parthenon, heavily colour-shifted]

If we attribute it all to artistic freedom, when it might just be a lack of knowledge of the proper processing workflow - are we doing a disservice to others who come after us and look up to current work as a reference?

It is no wonder that myths like "no correct color" are being perpetuated.

I would argue that both are pretty pictures, each with different artistic merit.  Andy Warhol made a living out of such things. Art has only a glancing connection with science.

Personally I would save, and make available, the original raw data for subsequent scientific and/or artistic processing.

Link to comment
Share on other sites

42 minutes ago, vlaiv said:

What about something in between?

I often read here on SGL that people starting with astrophotography want to just capture what is there and share that with their friends and family.

We can't say that such images are necessarily taken for their scientific value, but when I read the above - I can't help but think of the vacation photos people take. Say you were in Greece on your summer holiday and you went to the Acropolis of Athens. You want to share that experience with your friends and family, so you take an image of, say, the Parthenon.

What is the image you want to share with your friends and family?

This one:

[image: the Parthenon, natural colours]

Or this one:

[image: the Parthenon, heavily colour-shifted]

If we attribute it all to artistic freedom, when it might just be a lack of knowledge of the proper processing workflow - are we doing a disservice to others who come after us and look up to current work as a reference?

It is no wonder that myths like "no correct color" are being perpetuated.

In part we are, and without a statement of what was done, certain colors (not color palettes) become common, become liked, become normal, become apparent 'fact'.

Where someone states what was changed in an image, then it is OK in my view, but it is interesting to ponder why this happens and why a new way of showing detail in a target is liked when the colors are essentially ad hoc and don't come from filters for spectral identification (the original basis for SHO), but from new aesthetics - I find it interesting to think about why people like that. We would not normally change the sky color in that picture of Greece for a record of our travels; we may in an artistic project or for another reason. For some reason (Hubble processing is part of the root cause, though for genuine scientific reasons), the public's, and some of our own, choices seem to be far more variable for astroimages. In part this is due to the limited dynamic range and colors in the night sky compared to terrestrial scenes; in part because for deep exposures the sky is anything but black; in part because social appreciation of rendered/processed images leads to comparison, a search for novelty, and a need to expose features through color instead of luminance - and for many, many other reasons. It's interesting to think about all the post-capture aspects for objects in the sky that change more in structure than they do in color compared to terrestrial objects.

It's so different to astrometry where the aesthetics mean little but intensity/magnitude (relative) etc. cannot be tampered with. 

Speaking only for myself, I can meander through what I see because I know the basic physics of the process in space and the signal processing, can work things out, and have my own opinions of what I like. I think that is why I gravitate toward RGB imaging with careful balancing, I hope, mostly. But it is a form of photography and there is naturally going to be artistic license. The software we choose has so many possibilities, and selective color changes have played a big role.

Wait until AI gets involved. That worries me. Current iterations of software learn from terrestrial images, but also from the data store of our own astro images. Are we teaching it to form the wrong opinion, which it will then apply when automatically processing data in the future? I'll bet a program like that will appear - they exist in photography already - but a kingfisher will never be represented as green instead of blue in the data the AI trains on.

  • Like 2
Link to comment
Share on other sites

11 minutes ago, GalaxyGael said:

Wait until AI gets involved. That worries me. Current iterations of software learn from terrestrial images, but also from the data store of our own astro images. Are we teaching it to form the wrong opinion, which it will then apply when automatically processing data in the future? I'll bet a program like that will appear - they exist in photography already - but a kingfisher will never be represented as green instead of blue in the data the AI trains on.

This is the part that worries me. Not some future AI - but rather novice astrophotographers without a clear understanding of the color processing workflow, looking up images of a given object online and then trying to replicate the color / results.

Just by looking at other people's work, we are starting to form an opinion on what a "good" astro photo should look like, and I'm not sure that I like that. I really do like what happens in regular daytime photography: you can press the button and out comes an image that really resembles what you are seeing with your eyes. No major color change, except for, say, color adaptation if the lighting was significantly different.

We know what sort of workflow produces such an image in daytime photography - why not replicate it in astrophotography? That way we would get consistent color that represents the color of the object imaged.
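As an illustration (not a prescription), here is roughly what that daytime workflow boils down to once the data are linear, dark-subtracted and flat-fielded; the white-balance gains and the 3x3 color correction matrix below are placeholder values - in practice they come from characterising the camera against known targets:

```python
import numpy as np

def srgb_gamma(x):
    """Standard sRGB transfer function applied to linear [0, 1] data."""
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

def daytime_style_pipeline(raw_rgb, wb_gains, ccm):
    """raw_rgb: HxWx3 linear camera data; wb_gains: per-channel gains;
    ccm: 3x3 matrix mapping white-balanced camera RGB to linear sRGB."""
    img = raw_rgb * wb_gains                 # per-channel white balance
    img = img @ ccm.T                        # camera space -> linear sRGB (placeholder matrix)
    img = np.clip(img / img.max(), 0, 1)     # simple normalization for display
    return srgb_gamma(img)                   # perceptual encoding for the screen

# Placeholder numbers purely for illustration:
wb_gains = np.array([1.9, 1.0, 1.6])
ccm = np.array([[ 1.6, -0.5, -0.1],
                [-0.2,  1.4, -0.2],
                [ 0.0, -0.4,  1.4]])         # rows sum to 1 so white stays white
raw = np.random.rand(4, 4, 3) * 0.2          # stand-in for a calibrated linear stack
print(daytime_style_pipeline(raw, wb_gains, ccm).shape)
```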

  • Like 3
Link to comment
Share on other sites

9 hours ago, vlaiv said:

What about something in between?

I often read here on SGL that people starting with astrophotography want to just capture what is there and share that with their friends and family.

We can't say that such images are necessarily taken for their scientific value, but when I read the above - I can't help but think of the vacation photos people take. Say you were in Greece on your summer holiday and you went to the Acropolis of Athens. You want to share that experience with your friends and family, so you take an image of, say, the Parthenon.

What is the image you want to share with your friends and family?

This one:

[image: the Parthenon, natural colours]

Or this one:

[image: the Parthenon, heavily colour-shifted]

If we attribute it all to artistic freedom, when it might just be a lack of knowledge of the proper processing workflow - are we doing a disservice to others who come after us and look up to current work as a reference?

It is no wonder that myths like "no correct color" are being perpetuated.

These are not photos of the same thing. The first is a photo of the Parthenon, the second a photo of a retsina hangover taken from the inside...  :D

Back to your main idea, though: firstly, the whole point of the Warhol panel is that we do know they are the same image, and the same can be said for the M81/82s in the thread. It's not as if they were entirely unconnected with their subject, though obviously there is variance.

2) These are amateur processing jobs and contain both individual choices and errors. Only six of the nine have neutral dark grey background skies, so three of them are categorically wrong in terms of colour balance if the objective were to represent reality. To a first approximation we know that the background sky is dark grey and, even if someone wants to nit-pick against that, most imagers regard it as a desirable point of departure. Very, very slight deviation from parity in RGB is embraced by some expert imagers in full control of what they are doing and is done for aesthetic purposes, but most deviation from parity is simply done in error. I'll make so bold as to suggest that this is what's happening here. Note that, in the cases of the images with neutral skies, there is broad (though certainly imperfect) agreement on colour. Warhol would not be impressed by the degree of difference!

3) What is the role of the processor here, given that he or she is working with given data? Is it their job to process the data as it is supplied or to process it according to what they know about the objects?  Bending the data to fit what you think are the facts is as unscientific as it is possible to get and would see you thrown out of academia!  But then, checking the colours against the known astrophysics could be called calibration. This must go before the scientific high court for clarification!

4)  IFN or not, and how much?  Provided you don't invent any IFN, this is the choice of the processor. I've seen enough M81-82 images to last a lifetime but this excellent luminance data had something new (to me) in so far as the relationship between Arp's Loop, the IFN and Holmberg IX went. I made that a priority.

 

I did struggle with the colour in this dataset. My regular calibration methods flatly refused to work and the blue channel was unlike the others. This may have been entirely my fault somewhere along the line and, in finding a way to get something reasonable, I have ended up with a shortage of blue at the bright end.  I'm still playing with it but I know that what I have is not as representative of the astrophysics as I would like. This is also a competition, with the rules of engagement clearly identified as non-scientific. This just may influence the way folks approach it!  Not me of course: I would never play to the gallery :D🤣 

Olly

 

Edited by ollypenrice
Typo
  • Like 3
Link to comment
Share on other sites

But can you compare colours in daytime photography, with light in abundance, to astrophotography? When ambient light is scarce, colour is less well defined, as far as our eyes and brain interpret it. In Vlaiv’s example we know what the correct colours are because we already have a first-hand image in our memory of the object, or at least something similar to compare it to. I love the deep colourful renditions of the Milky Way, but I’ve never seen it like that with my own eyes, even from a very dark site. If we just went with the scientific raw, unprocessed data, image-wise we would see hardly anything.
If the colour information collected by the camera is a correct reproduction of the light entering the camera (and I agree it must be), then why do we use tools to remove the green? If we use the argument that the green is there in error  and not part of the image we are trying to reproduce, then surely we have crossed the “faithful reproduction” line and all bets are off in terms of what you can do with the processing?
 

8 hours ago, vlaiv said:

We know what sort of workflow produces such an image in daytime photography - why not replicate it in astrophotography? That way we would get consistent color that represents the color of the object imaged.

That’s the button I was referring to in my previous post. When one appears in the latest releases of the software, I’ll be first in line.

  • Like 1
Link to comment
Share on other sites

There are likely several factors here. We have effects like differently calibrated monitors, so colours can look subtly (or even widely) different. Then we have our own limitations - there is obvious colour blindness, but also, as we age, I believe we tend to see colours differently. As such, as you age you might naturally adapt images to enhance elements so they are more vivid in certain colours, but to the individual, because their eye is filtering some colour, it looks more natural. The reality is that the only way to do 'true' colour is to agree a colour for each temperature of an object through spectroscopy and then, when you have sampled enough objects in the image, apply that colour cast to the image. However, it does mean that with any imaging must come a lot of spectroscopic data.

  • Like 1
Link to comment
Share on other sites

Does APP’s Colour Star Calibration Module work on a similar principle? The trouble is I get varying results when I use it. Sometimes it adjusts the colour of the DSO to something akin to what I regard as correct (based on existing library images), but other times it can be way off, even though the stars appear to have been brought into line.

But once again this is me comparing the result to what I think is correct based on the body of images already out there, rather than anything scientific.

Link to comment
Share on other sites

This is Al Bean's painting 'That's how it felt to walk on the Moon'.

[image: Al Bean's painting 'That's how it felt to walk on the Moon']

Apparently, the most common question he was asked was 'What did it feel like to walk on the Moon?' - a question he did not feel able to answer adequately in words.

There is plenty of scientific data to describe the mechanics of getting to the Moon, walking on it, and returning to Earth. One of the few people ever to have done it found that a painting better "elicited happier and more exhilarating thoughts and emotions, ones closely related to how it actually felt."

M81/M82 has never been seen by human eyes from earth without some form of optical aid. Who's to say what it actually looks like up close? Would it look the same to everyone?

I think there is room for limited interpretation in image processing/editing, if for no other reason than to be pleasing to the person doing it, so long as the data is in the original image. @vlaiv's image of the Parthenon is far more extreme than any of the images of M81/82. If one of them was processed with a purple sky background it would be considered ludicrous! These are just different interpretations, all of which have required skill and imagination to produce.

 

Link to comment
Share on other sites

APP uses basic blackbody emitter physics as a model and, by analysing the number density of stars, treats them as representative of majority main-sequence stars, which are by far the most common. Colors are then adjusted accordingly, and you can tweak the goodness of fit of the model rather than tweaking selective colors in the image. It uses a color-color diagram (related in part to the Hertzsprung-Russell diagram), and it calculates it directly from the data rather than referring to SIMBAD, NED or other databases. Those databases are very well calibrated too, which is why I really like such photometric calibration.

The only issue with nuts-and-bolts calculations in APP is that the processor should be aware of the caveats. With a dual-, tri- or quad-band filter, the analysis is skewed and the color-color diagram has a power-law response instead of a linear one, since much of the spectrum is not continuous and so the blackbody model is not fully accurate. Blue and very red stars from a quad-band-filter OSC image get corrected to blue and yellow (more normal colors), but red nebulosity also gets rendered orange instead of red, so care is needed in that case. Database photometry can sometimes be better since it is usually applied only to the stars (PI, Siril), but I have not tested carefully. My previous experience was always with RC scopes and mono images, so 'color' was replaced with metallicity measurements and I didn't worry about it too much. APP, Siril and PI photometric calculations at least give a correct pan-image color calibration, and if color saturation (even non-linear) is performed across the whole spectrum, then it remains consistent across the image even when very saturated, as opposed to selective color changes etc.
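To make the idea concrete, here is a minimal sketch (not APP's actual code) of star-based photometric color calibration under a blackbody assumption: predict R/G and B/G flux ratios from Planck's law at nominal channel wavelengths, then solve for per-channel scale factors that bring the measured star ratios onto that locus. The wavelengths, star fluxes and temperatures below are illustrative placeholders:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance (overall scale is irrelevant for ratios)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return 1.0 / (wavelength_m**5 * (np.exp(x) - 1.0))

# Nominal effective wavelengths for the R, G, B channels (illustrative).
LAM = np.array([620e-9, 530e-9, 465e-9])

def expected_ratios(temp_k):
    """Predicted (R/G, B/G) flux ratios for a blackbody of the given temperature."""
    r, g, b = planck(LAM[0], temp_k), planck(LAM[1], temp_k), planck(LAM[2], temp_k)
    return r / g, b / g

def fit_channel_scales(measured_rgb, temps_k):
    """Least-squares per-channel scale factors (relative to G) that bring the
    measured star ratios onto the blackbody predictions."""
    meas = np.asarray(measured_rgb, dtype=float)
    pred = np.array([expected_ratios(t) for t in temps_k])          # (N, 2)
    meas_ratio = np.column_stack([meas[:, 0] / meas[:, 1],          # R/G
                                  meas[:, 2] / meas[:, 1]])         # B/G
    # minimise sum (scale * measured - predicted)^2 per channel
    scale_r = np.sum(pred[:, 0] * meas_ratio[:, 0]) / np.sum(meas_ratio[:, 0] ** 2)
    scale_b = np.sum(pred[:, 1] * meas_ratio[:, 1]) / np.sum(meas_ratio[:, 1] ** 2)
    return np.array([scale_r, 1.0, scale_b])

# Hypothetical measured star fluxes (R, G, B) and temperatures estimated elsewhere:
stars = np.array([[1.30, 1.00, 0.55],
                  [0.80, 1.00, 1.10],
                  [1.00, 1.00, 0.85]])
temps = np.array([4500.0, 9000.0, 6000.0])
print(fit_channel_scales(stars, temps))   # multiply the image R, G, B by these factors
```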

 

Link to comment
Share on other sites

3 hours ago, ollypenrice said:

3) What is the role of the processor here, given that he or she is working with given data? Is it their job to process the data as it is supplied or to process it according to what they know about the objects?  Bending the data to fit what you think are the facts is as unscientific as it is possible to get and would see you thrown out of academia!  But then, checking the colours against the known astrophysics could be called calibration. This must go before the scientific high court for clarification!

Color calibration does not require the use of astrophysics, and we don't need to check our data against known objects. It can easily be done by inspecting the behavior of the measurement device - after all, that is what calibration means: adjusting the measuring device so that it is in line with a standard of measurement.

In this case, since the above was not done, I don't really see the problem in using one known property of the object to do the same calibration (like the relationship between the spectrum / temperature and the color of a star - and there is not just a single star in the image, there are many, so either we get a surprise that they are all different from regular stars, or they are regular stars :D ). It is a bit like inserting a ruler into an image of an object so we have a comparison for size.

2 hours ago, tomato said:

But can you compare colours in daytime photography, with light in abundance, to astrophotography? When ambient light is scarce, colour is less well defined, as far as our eyes and brain interpret it. In Vlaiv’s example we know what the correct colours are because we already have a first-hand image in our memory of the object, or at least something similar to compare it to. I love the deep colourful renditions of the Milky Way, but I’ve never seen it like that with my own eyes, even from a very dark site. If we just went with the scientific raw, unprocessed data, image-wise we would see hardly anything.

You touch up here at a very important point regarding color.

We use color to represent both physical quantity and a subjective feel. It would be a good idea to distinguish the two.

It is a bit like temperature. We can measure the temperature of an object with a thermometer and record the absolute value of the temperature. We can also feel that object by hand and record what we feel. These two types of measurement can be in disagreement. If you put your hand in a bowl of water that is at 10°C on a hot summer day, you'll feel that it is pleasantly chilling, but the same water on a cold winter day will feel lukewarm.

We now have two different sensations for the same physical quantity.

The level of light does not change the physical characteristics of the light - it does not change its spectrum. Light will be of the same color regardless of whether there is little or plenty of it. Our perception / feel of it does change.

Once we control the physical aspect of it, we can choose to control the feel aspect as well at a later stage - that is what Color Appearance Models (CAMs) do: https://en.wikipedia.org/wiki/Color_appearance_model

In order to measure a physical property of light, we don't need to incorporate our feel into it yet, nor do we need to know what the object looks like. It is the same way we can measure the 10°C of the water in the bowl.

This is where I feel people fail - they skew those measured 10°C even before they get to the stage where they can use that information to impart a particular appearance to the end observer.

3 hours ago, tomato said:

If the colour information collected by the camera is a correct reproduction of the light entering the camera (and I agree it must be), then why do we use tools to remove the green? If we use the argument that the green is there in error  and not part of the image we are trying to reproduce, then surely we have crossed the “faithful reproduction” line and all bets are off in terms of what you can do with the processing?

The color information collected by the camera is a correct reproduction of the light, of course. And why do we remove green? Well, because we don't have a full understanding of color and how it works.
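For context, the "remove the green" tools referred to typically implement something like SCNR; a minimal sketch of the common "average neutral" variant on a linear RGB array (illustrative only, not any particular program's implementation):

```python
import numpy as np

def scnr_average_neutral(rgb, amount=1.0):
    """Suppress green above the average of red and blue.
    rgb: HxWx3 array; amount: 0 (off) .. 1 (full suppression)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    neutral = 0.5 * (r + b)                  # reference level for 'no green cast'
    g_new = np.minimum(g, neutral)           # only ever reduces green, never adds it
    out = rgb.copy()
    out[..., 1] = g * (1 - amount) + g_new * amount
    return out

# A pixel with a green cast gets pulled back toward neutral:
print(scnr_average_neutral(np.array([[[0.2, 0.5, 0.3]]])))   # green 0.5 -> 0.25
```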

Most people think that they can take the raw "RGB" data from the camera and use it as "the RGB" data for the image - and this is not true. The camera must first be color calibrated to the standard we use in image processing. Cameras have different QE curves for R, G and B, right? To get back to the thermometer analogy - it's like having thermometers with different number scales on them. No one would expect to get the correct temperature in Celsius from an arbitrary number scale on each thermometer - we would of course first seek a transform rule, and after we do that, all thermometers will agree on their reading - they will all show the same temperature, and in the camera case, they will all show the same color.
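A minimal sketch of finding such a "transform rule": given the camera's raw RGB responses to reference patches whose XYZ values are known (e.g. from a color chart), solve a least-squares 3x3 matrix taking camera RGB to XYZ. The patch numbers below are placeholders:

```python
import numpy as np

def fit_camera_to_xyz(camera_rgb, reference_xyz):
    """Least-squares 3x3 matrix M such that camera_rgb @ M.T ~= reference_xyz.
    camera_rgb, reference_xyz: (N, 3) arrays of linear patch measurements."""
    M_T, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
    return M_T.T

# Hypothetical linear measurements of a few chart patches (same patches in both):
camera_rgb    = np.array([[0.80, 0.30, 0.10],   # reddish patch
                          [0.20, 0.70, 0.25],   # greenish patch
                          [0.10, 0.25, 0.75],   # bluish patch
                          [0.50, 0.50, 0.50]])  # grey patch
reference_xyz = np.array([[0.45, 0.30, 0.05],
                          [0.30, 0.55, 0.15],
                          [0.20, 0.15, 0.70],
                          [0.48, 0.50, 0.54]])

M = fit_camera_to_xyz(camera_rgb, reference_xyz)
print(M)                      # apply to every pixel: xyz = rgb @ M.T
```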

2 hours ago, Whirlwind said:

There are likely several factors here. We have effects like differently calibrated monitors, so colours can look subtly (or even widely) different. Then we have our own limitations - there is obvious colour blindness, but also, as we age, I believe we tend to see colours differently. As such, as you age you might naturally adapt images to enhance elements so they are more vivid in certain colours, but to the individual, because their eye is filtering some colour, it looks more natural.

We actually don't need a calibrated display device in order to render the proper color of the image. A calibrated display device is needed only if we are the ones choosing the color and we need our choice to conform to a standard. In astrophotography, if we want to show the real color of the object - then we can't really choose it. Choice of color is for creative artists like designers.

We can argue that there is a place for creative arts in astrophotography, and OK, I'll accept that some people will like to tweak the color of an object for artistic purposes. However, I also think that the proper approach to that is to control the color from the beginning, with a complete understanding of what one is doing - otherwise, it's just splashing some paint on the canvas :D

[image attachment]

2 hours ago, Whirlwind said:

The reality is that the only way to do 'true' colour is to agree a colour for each temperature of an object through spectroscopy and then, when you have sampled enough objects in the image, apply that colour cast to the image. However, it does mean that with any imaging must come a lot of spectroscopic data.

Luckily, it is a bit easier than that. It turns out that our eyes work a bit like very low resolution spectroscopes - producing only 3 numbers instead of hundreds of measurements along the 400nm-700nm range. The same is true for cameras - they also produce three values, and it turns out that these three values are enough to reproduce (more or less accurately) the original color.

In fact, we have the very well defined XYZ color space that does exactly that - it pretends to be a very low resolution spectroscope that is compatible with our vision system, and if you take any two light sources that produce the same XYZ values, although they might differ on a finer scale of the spectrum, people will see them as the same color.

The trick of proper color management is to use the above XYZ as a first step - we want our cameras to produce the same values as XYZ - this should be the measurement scale that we all agree on.
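To illustrate the "low-resolution spectroscope" idea: XYZ is obtained by integrating a spectrum against three standard weighting curves (the CIE 1931 color matching functions). The sketch below uses crude single-Gaussian stand-ins for those curves purely to show the mechanics - real work would use the tabulated CIE data:

```python
import numpy as np

wavelengths = np.arange(400.0, 701.0, 5.0)           # nm, visible range
STEP = 5.0

def gauss(lam, mu, sigma, scale=1.0):
    return scale * np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

# Very rough stand-ins for the CIE 1931 x-bar, y-bar, z-bar curves (illustrative only):
xbar = gauss(wavelengths, 600, 40) + gauss(wavelengths, 445, 20, 0.35)
ybar = gauss(wavelengths, 555, 45)
zbar = gauss(wavelengths, 450, 22, 1.7)

def spectrum_to_xyz(spectrum):
    """Collapse a sampled spectrum (on the `wavelengths` grid) to three numbers."""
    X = np.sum(spectrum * xbar) * STEP
    Y = np.sum(spectrum * ybar) * STEP
    Z = np.sum(spectrum * zbar) * STEP
    return np.array([X, Y, Z])

# Two physically different spectra that integrate to (nearly) the same XYZ
# are seen as the same color - that is the whole point of the space.
print(spectrum_to_xyz(np.ones_like(wavelengths)))
```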

58 minutes ago, Astro Noodles said:

M81/M82 has never been seen by human eyes from earth without some form of optical aid. Who's to say what it actually looks like up close? Would it look the same to everyone?

Seeing M81/M82 the way we see them in our images is probably physically impossible. Even if one could fly off to a distance from the pair at which they appear as large, they still won't appear as they do in our images. That does not mean that we can't observe their color or their shape.

It is obvious that we can observe their shape and features from a photo. We never argue about whether that is the proper shape or the proper features. Why is that? Because we can easily match those across different images. It is the same measurement, repeated.

Color is not - not because there is no proper color, but because we don't measure it properly. We don't produce the same color results because we don't use a proper workflow. We justify that by artistic freedom, but in reality we are robbing ourselves of getting to better understand the objects - much as if we were to lose the proper shape of an object due to our processing.

We can never see bacteria with our own eyes - yet we don't think that images capturing them are somehow false.

1 hour ago, Astro Noodles said:

@vlaiv's image of the Parthenon is far more extreme than any of the images of M81/82.

Is it?

[image: montage of background "sky" crops from the different renderings]

What color is the "sky" - blue, red, brown, pink?

I'm guessing that, at most, one of them is correct, which means that all the others are slightly off - just a difference between blue and red away :D

 

Link to comment
Share on other sites

I'll preface this by saying I fudged loads of stuff in my attempt at processing this, and certainly am not happy with the colour reproduction I got (it's something I'm working on for v2), but I think we have to allow for a certain degree of artistic licence because this is a competition and, processing errors notwithstanding, individuals need to make their images stand out from the others; one way to do this is with variations in colour. In such circumstances, I think it is acceptable to move away from true documentary photography, provided that the processes undertaken on the data are described and it is not presented as documentary.

If everyone stuck to true documentary photography in this, then we'd end up with x number of practically identical images, which doesn't make for a great competition.

 

Link to comment
Share on other sites

Comparing Andy Warhol's art with astrophotography images is not comparing 'apples with apples'. I don't quite understand the point you are trying to make by comparing the two.

One is an attempt to provoke an emotional response by deliberately altering the colours to be striking and unrealistic. The other is not. 

Are you saying that at least 8 of the 9 galaxy images are wrong? I don't think you can say that. They are personal interpretations of the data, not attempts at pop art. If one of the images were mine, I might feel quite insulted. 🙂

There is a problem in my mind in prescribing one correct way of doing processing/editing/colour. That is that there would be no reason to do it except as a dry, technical exercise as the 'correct' image will already have been produced somewhere by someone else.

  • Like 3
Link to comment
Share on other sites

I just realized something, and I need to apologize to all whose work I used.

I did not mean this as a critique of your work directly, but rather of the process that is predominant in the AP community. I used your renderings because it gave me a chance to easily emphasize what I think is wrong with AP in general - and not only your work.

Precisely because it is a competition with the same data - hence the same RGB values.

  • Like 3
Link to comment
Share on other sites

40 minutes ago, vlaiv said:

Color calibration does not require the use of astrophysics, and we don't need to check our data against known objects. It can easily be done by inspecting the behavior of the measurement device - after all, that is what calibration means: adjusting the measuring device so that it is in line with a standard of measurement.

Doesn't this assume skies which are 1) perfectly consistent over time and 2) perfectly uniform in the distribution of the LP spectrum?

Olly

Link to comment
Share on other sites

1 minute ago, Astro Noodles said:

Comparing Andy Warhol's art with astrophotography images is not comparing 'apples with apples'. I don't quite understand the point you are trying to make by comparing the two.

One is an attempt to provoke an emotional response by deliberately altering the colours to be striking and unrealistic. The other is not. 

Are you saying that at least 8 of the 9 galaxy images are wrong? I don't think you can say that. They are personal interpretations of the data, not attempts at pop art. If one of the images were mine, I might feel quite insulted. 🙂

There is a problem in my mind in prescribing one correct way of doing processing/editing/colour. That is that there would be no reason to do it except as a dry, technical exercise as the 'correct' image will already have been produced somewhere by someone else.

Using Andy's work was just for emphasis. I find it quite amusing that you can take a bunch of images of the same object, put them in a mosaic and get something resembling Andy's art, because each of them has a different color cast / mix.

First of all, I'd like to be clear about some things. I do apologize to anyone feeling bad because of this; like I said, this is not a direct critique of their particular work - it is a critique of the process we all utilize when processing astrophotography data.

However, I will not forfeit my right to criticize what I believe is a wrong approach.

Am I saying that at least 8 out of 9 images are wrong? Well, yes, that is essentially what I'm saying - but don't take it in the context of this competition - take it in a broader context. You can always justify this particular case by it being a contest, with the artistic freedom to do so - and I respect that. You can't really do the same for an arbitrary celestial object and images found online.

Yes, take any galaxy and search for images, and you'll find the same, or almost the same, level of variation in color.

I see a lot of comments like (and again, I'm not referring to this thread alone - what I'm talking about has been the predominant note for quite some time): this is only a hobby, color can be chosen in an artistic manner, there is no real color, and so on...

To me that seems simply like dodging the issue of proper color management.

Would you feel the same if it were something else:

[image attachment]

Here, look at my artistic freedom :D

I'm not advocating one prescribed / technical way of processing images. What I'm saying is that we need to embrace proper color management in the same way we embrace flats or darks - it is part of the processing workflow, though maybe not part of the post-processing workflow.

I'd argue that one needs to master the technique before starting to bend it in accordance with their artistic vision.

 

  • Like 2
  • Haha 1
Link to comment
Share on other sites

Just now, ollypenrice said:

Doesn't this assume skies which are 1) perfectly consistent over time and 2) perfectly uniform in the distribution of the LP spectrum?

Olly

Depends what your goal is.

Again - the sky is a local phenomenon; we can measure it locally and remove its influence if we wish.

Someone might want to capture the color as it looks from the surface of the Earth - in that case, they will leave the influence of the atmosphere present. If, on the other hand, we wish to capture the color of the object as seen from within our solar system - just orbiting the Sun - then we will remove the influence of the atmosphere.
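A minimal sketch of how that atmospheric influence is usually taken out, assuming per-channel extinction coefficients (magnitudes per airmass) measured for the site - the numbers below are placeholders:

```python
import numpy as np

def remove_extinction(flux_rgb, airmass, k_mag_per_airmass):
    """Undo first-order atmospheric extinction on linear channel fluxes.
    flux_rgb: (..., 3) linear R, G, B; k: per-channel extinction in mag/airmass."""
    k = np.asarray(k_mag_per_airmass)
    return flux_rgb * 10.0 ** (0.4 * k * airmass)   # brighten by the absorbed amount

# Placeholder coefficients (blue is extinguished more than red) and airmass 1.5:
k = [0.09, 0.14, 0.22]
print(remove_extinction(np.array([1.00, 1.00, 1.00]), 1.5, k))
```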

 

  • Like 1
Link to comment
Share on other sites
