
The Smokin' Hot Galaxy - M82 returns - this time properly smokin'


powerlord


From what I've seen there are a number of ways of achieving a good result and accuracy is a point of view. Ultimately each image is an interpretation by the imager. There is a place for both art and science, and all interpretations!

I can't comment on individual entries for the galaxy competition as I'll be one of the judges. Needless to say all the entries are interpretations of one kind or another and will be judged on both accuracy and aesthetic qualities :smile:


5 minutes ago, ollypenrice said:

I'd have thought the accurate way to capture Ha was through a red filter. In this case it is captured in its natural proportion, no?

Actually - no.

An interference red filter does not mimic our vision, and once we capture data with it, we lose the ability to be visually as accurate as possible.

Here is a simple way to see this:

[image: typical response curves of astronomical RGB filters]

Look at the typical response curves of astronomical filters. Imagine for a moment that you have 610nm light and 656nm (Ha wavelength) light.

You captured 100 photons of each, in two different images. Could you tell which source is 610nm and which is 656nm from those images? No, because both images would be exactly the same: 100 photons in red, and nothing in green or blue (or rather, zero values).

Now look at this image - the best approximation that can be shown on a computer screen (sRGB color space) of the colors of individual wavelengths:

[image: sRGB approximation of the visible spectrum by wavelength]

You will see that we can tell these two apart when looking at the light itself. We can distinguish them by eye, but we did not capture them as distinct.

This is true for any type of red in the image - not just single wavelengths.
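To make this concrete, here is a toy sketch in Python (the transmission numbers are made up, not real filter curves): both sources produce identical RGB triples, so the captured data cannot tell the wavelengths apart.

```python
# Toy numbers only - not real filter curves. Transmission of each filter
# at (610nm, 656nm): a broadband red filter passes both almost fully,
# while green and blue pass neither.
filters = {
    "R": (1.0, 1.0),
    "G": (0.0, 0.0),
    "B": (0.0, 0.0),
}

def capture(photons, idx):
    """RGB counts recorded for a monochromatic source (idx 0=610nm, 1=656nm)."""
    return {band: photons * t[idx] for band, t in filters.items()}

print(capture(100, 0))  # {'R': 100.0, 'G': 0.0, 'B': 0.0}
print(capture(100, 1))  # identical triple - the two wavelengths are indistinguishable
```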

You can have 3 different scenarios:

1. Just Ha

2. Some other light mixed with Ha

3. Some other light (that might be red) without Ha

When using RGB + Ha filters, you will never reach point 1 and the ability to render light as pure Ha (deep, dark red) unless you use subtraction and remove Ha from the red channel. You also need a way to distinguish the two reds from each other (like 610nm and 656nm in the above example).

There is a way to do this best that is based on science (much like that used to create the above spectrum image on sRGB screens).
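A toy illustration of that subtraction (synthetic data, with a hypothetical leak factor k): treating the red and Ha frames as two linear mixtures of continuum and line emission lets you solve for each component exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
continuum = rng.uniform(0.2, 0.8, (4, 4))   # non-Ha red light
ha_line = rng.uniform(0.0, 0.3, (4, 4))     # true Ha emission

k = 0.05                            # assumed continuum leak through the Ha filter
r_frame = continuum + ha_line       # red filter records continuum + Ha
ha_frame = ha_line + k * continuum  # Ha filter records Ha + a little continuum

ha_pure = (ha_frame - k * r_frame) / (1 - k)   # solve the 2x2 mixture
r_minus_ha = r_frame - ha_pure                 # red channel with Ha removed

assert np.allclose(ha_pure, ha_line)
assert np.allclose(r_minus_ha, continuum)
```

In practice k would have to be estimated from the data (e.g. from star fluxes, which are mostly continuum), not known in advance.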

 

Link to comment
Share on other sites

4 minutes ago, vlaiv said:

Actually - no.

An interference red filter does not mimic our vision, and once we capture data with it, we lose the ability to be visually as accurate as possible. [...]

But here you're talking about the true colour of the red coming from the source and I accept your argument. However, there is also the matter of signal strength. Nobody adds Ha to perfect the colour. They add it to lift faint signal above the background and to find stronger contrasts in nebulae, by tracing the hydrogen specifically. This is an artificial but informative modification of the natural signal. Essentially it's a visual convention, I'd have thought.

Olly


It looks to me like in the second image the arrow is pointing at the wrong spot. I'm no imager, but I see the same structure, although a little subdued. I think your image is gob-smackingly awesome Stu. (Hope you'll forgive my arrow drawn with sausage fingers!)

[image: annotated screenshot with hand-drawn arrow]


7 minutes ago, mikeDnight said:

It looks to me like in the second image the arrow is pointing at the wrong spot.

My bad - I did not draw the arrow precisely, but I thought it was obvious what I was referring to.

I was specifically referring to this feature:

[image: feature crop from powerlord's image]

vs

[image: corresponding region in the Hubble image]

While it has similar general morphology, the details are different (to my eye).

What we identify as Ha filaments in the Hubble image can be matched with web-like lines in the upper image, except that in the upper image we have a ring in the center with a clear spike going almost vertically from it, and there is no Ha signal in the bottom image that matches it.

That is another example where the AI thought it was seeing Ha signal and decided to put it there, although it is really not Ha signal at all.


I think this discussion really needs to be in another thread, as it's quite a vast topic, and AI enhancement is only going to develop further into our lives.

The issue I have is how one can differentiate, when judging, between an image which has had a lot of manual input and one where buttons and scripts have been applied. It's like judging a "most realistic image" competition where one entrant paints a subject in traditional media, one uses a pantograph or a camera lucida, and one uses a camera. They're all tools, but the amount of manual effort, time and skill required by the submitter differs. If an animator rotoscopes, it's considered "cheating", as a traditional animator wouldn't dream of using such a technique. To get an idea of how far AI will go, there was a very recent photographic competition where an AI-generated image won. So where do you draw the line and apply differentiation and informed judgement when assessing entries?

At the end of the day, AP for most of us is about creating nice pretty images, and this isn't to detract from this excellent entry by @powerlord, who has created an excellent piece of work.

Edited by Elp

56 minutes ago, ollypenrice said:

However, there is also the matter of signal strength. Nobody adds Ha to perfect the colour. They add it to lift faint signal above the background and to find stronger contrasts in nebulae, by tracing the hydrogen specifically. This is an artificial but informative modification of the natural signal. Essentially it's a visual convention, I'd have thought.

The equivalent physical process for visual observation would be:

1. Amplify all light coming from the target - let's say by x100.

2. Filter all light except Ha down to, let's say, 1% of its signal strength - this can be achieved with a narrowband filter that passes 100% of Ha while cutting everything else to 1%.

That is what is wanted, right? The same image, but with the Ha signal specifically boosted by a large factor so it's made visible?

Again, the same procedure I mentioned can be used - except we would modify the strength of the Ha signal in the mix.
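A toy version of that arithmetic (arbitrary signal values, no real units): amplify everything x100, then cut everything except Ha to 1%. The net effect is Ha boosted x100 while the rest returns to its natural level.

```python
# Toy arithmetic only - arbitrary signal values, no real units.
ha, other = 1.0, 50.0                    # starting signal strengths

ha, other = ha * 100, other * 100        # step 1: amplify everything x100
other *= 0.01                            # step 2: cut non-Ha light to 1%

print(ha, other)                         # 100.0, 50.0 -> only Ha is lifted
```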

 


2 minutes ago, Elp said:

At the end of the day, AP for most of us is about creating nice pretty images, and this isn't to detract from this excellent entry by @powerlord, who has created an excellent piece of work.

I guess that I'm caught up in technicalities, like the fact that we are doing astrophotography and not astroimaging, astropicturing or astrodrawing :D

 


1 minute ago, vlaiv said:

I guess that I'm caught up in technicalities, like the fact that we are doing astrophotography and not astroimaging, astropicturing or astrodrawing :D

 

Don't worry, I don't like these "tools" either. How is it any different from, for example, using a reference image to trace out a mask for cloud nebulosity, and using said mask to contrast-boost your own image to "reveal" the nebulosity in it? It wasn't in the original photos taken, so it shouldn't be there.


3 minutes ago, Elp said:

Don't worry, I don't like these "tools" either. How is it any different from, for example, using a reference image to trace out a mask for cloud nebulosity, and using said mask to contrast-boost your own image to "reveal" the nebulosity in it? It wasn't in the original photos taken, so it shouldn't be there.

It's not just the tools - I think I have a different idea of how to value an image than most people. We could call it much more technical.

To me, an astronomy image is valuable if it's more informative and more correct - if one can learn more from it.

In the above example, to me the Hubble image is better - not because it is visually more pleasing (a nicer or prettier image), but because it is clearer in showing certain features and better at making distinctions between them.

If one calls for critique and suggestions, that is what I do. I stay within the limits of my mindset (most of the time; sometimes I'm able to switch to the aesthetics of the thing and comment on that), I notice where the image is lacking in the things that I hold important, and that is what I present.

Most other people probably prefer the artistic side of an image and place greater value on that (and I tend to agree in some respects - like when an image manages to stir up emotions about the vastness of the universe, or general awe at the sheer range of magnitudes of size, energy and so on).

 


48 minutes ago, Elp said:

Don't worry, I don't like these "tools" either. How is it any different from, for example, using a reference image to trace out a mask for cloud nebulosity, and using said mask to contrast-boost your own image to "reveal" the nebulosity in it? It wasn't in the original photos taken, so it shouldn't be there.

It is totally different. The modifications made to the image by BlurXterminator are made from analysis of that image alone. In that sense they are like lots of other processes. Existing sharpening routines, for instance, identify and intensify small-scale local contrasts which are already in the image. Even the most basic interventions, like stretching in Curves, emphasize what is already there.
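For illustration, a minimal sketch of that kind of deterministic routine - an unsharp mask, with arbitrary parameters. Its output is a fixed function of the input image alone; nothing external is consulted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.5):
    """Intensify small-scale contrasts already present in the image."""
    blurred = gaussian_filter(img, sigma)   # large-scale structure
    detail = img - blurred                  # existing local contrasts
    return img + amount * detail            # amplify what is already there

img = np.random.default_rng(1).random((64, 64))   # stand-in image
sharpened = unsharp_mask(img)
```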

I think the misconception arises from the fact that Russ Croman trained his AI system on Hubble data. This does not mean that his AI somehow remembers Hubble data and later applies them. The reason for choosing Hubble images was, I think, their high quality and consistency. This made a more accurate tool; it did not transfer Hubble information into its memory. (Anyway, with a Samyang 135 I can probably shoot a larger area of sky in one sub than Hubble covered in its entire operational life.)

Olly


30 minutes ago, 900SL said:

Can we enter sketches in the galaxy competition?

[image: jigsaw puzzle of a galaxy]

 

A decidedly mischievous puzzle maker has been at work here. A childhood friend's granny would have done it easily, though. She used to cut the pieces to fit...

Olly


2 hours ago, vlaiv said:

[...] That is another example where the AI thought it was seeing Ha signal and decided to put it there, although it is really not Ha signal at all.

Except it was bog all to do with AI - as I showed, the original stretched, integrated image has the same structure.


4 minutes ago, ollypenrice said:

The modifications made to the image by BlurXterminator are made from analysis of that image alone. In that sense they are like lots of other processes. Existing sharpening routines, for instance, identify and intensify small-scale local contrasts which are already in the image.

There is, however, a difference between the two.

Other processes work the same regardless of any "training" - or rather, they don't have training, and their result is always the same for the same input image.

With AI tools, if you apply different training to the neural network and present it with the same image, it will produce different output. In that sense, the result of the AI algorithm is not made by analysis of that image alone; the input set is actually (training data set, input image) rather than just (input image).

That alone does not "disqualify" the algorithm - it is the nature of the change applied to the image that is important. There are "fixed" algorithms that distort the image in what we could argue is an unacceptable way. The problem with AI is that you can't really tell whether it's going to do that or not, for two reasons:

1. It is too complex for simple analysis - it is very hard to predict the output of an algorithm of that complexity.

2. We don't know the complete input set - we don't know what sort of training the neural network underwent.
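A toy illustration of that dependence (not modelled on any real tool): the same tiny "network" applied to the same image gives different outputs depending on which weights training produced, so the effective input is (weights, image), not (image).

```python
import numpy as np

def tiny_net(image, weights):
    """A stand-in 'neural network': one linear layer plus tanh."""
    return np.tanh(weights @ image.ravel())

image = np.ones((2, 2))
weights_a = np.full((1, 4), 0.5)    # result of one hypothetical training run
weights_b = np.full((1, 4), -0.5)   # result of another training run

print(tiny_net(image, weights_a))   # [0.964...]
print(tiny_net(image, weights_b))   # [-0.964...] - same image, different output
```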

 


9 minutes ago, powerlord said:

Except it was bog all to do with AI - as I showed, the original stretched, integrated image has the same structure.

Did you apply NoiseX to that image you posted to show that the feature is there, or to the linear data?

The image that you posted to show the feature has the red channel completely clipped in that region, and all the data there was lost because of that:

[image: region with fully clipped red channel]

As such, it can't show Ha features properly; whatever is interpreted as an Ha feature comes from the blue and green channels (which I guess should not be happening, but that is again the technical side of things talking).
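One quick way to check for that sort of clipping, assuming imageio is available and the crop is saved as an 8-bit RGB PNG (the file name here is hypothetical):

```python
import numpy as np
from imageio.v3 import imread

img = imread("m82_crop.png") / 255.0       # hypothetical 8-bit RGB crop
for i, band in enumerate("RGB"):
    clipped = np.mean(img[..., i] >= 1.0)  # fraction of pixels pinned at maximum
    print(f"{band}: {clipped:.1%} of pixels clipped")
```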

 


1 hour ago, vlaiv said:

It's not just the tools - I think I have a different idea of how to value an image than most people. We could call it much more technical.

Well, that's certainly one word for it Vlaiv. 😁

My issue here, I think, is that you jumped in with a not very constructive "less AI" comment rather than something truly helpful. As I subsequently showed, the issues are nowt to do with AI and everything to do with human editing and limited capture quality. It'd just have been nice if you'd pointed specific things out and asked why they looked that way, rather than bringing in your AI bogeyman argument - I think we all know your feelings on AI by now*.

stu

*though some say there is a non-zero possibility you are an AI, and, as an AI, just jealous of other AIs doing you out of a job. 🤪


3 minutes ago, powerlord said:

[...] It'd just have been nice if you'd pointed specific things out and asked why they looked that way, rather than bringing in your AI bogeyman argument. [...]

Point taken.

I will be more careful about attributing certain artifacts to the AI side of things, and will try to be more constructive.


7 minutes ago, vlaiv said:

Did you apply NoiseX to that image you posted to show that the feature is there, or to the linear data? [...]

It's not linear, is it? If it were linear you'd see nothing.

It's the linear stacked image, stretched in Siril. No AI bogeymen.

It:

[screenshot: the stretched stack in Siril]

The bit in the final image:

[image: the same region in the final image]

To my (human, no-AI) eyes, both show the same "web". Sharpening/contrast/colour curves/etc. in the final image have accentuated it, via those human eyes. No AIs were abused during its making, though Mr NoiseX was politely asked to contribute - but imho he only made clearer what I would have done myself, bringing out the structure he saw (and that I also saw).

I agree it is not a "web" in reality (aka Hubble), but it is what I recorded - which, short of making stuff up, is all I've got to work with.

If Mr NoiseX were more sentient and smart, maybe he'd have used his knowledge of what it was supposed to look like and made it look like Hubble. That might have made me happy, and given you an aneurysm. However, I think we are safe for now - a few years yet before they start killing us all.

Edited by powerlord


1 minute ago, powerlord said:

I agree it is not a "web" in reality (aka Hubble), but it is what I recorded - which, short of making stuff up, is all I've got to work with.

Could you post just a crop of the linear data of that region?

I'm intrigued to figure out what has happened with the data to make it display that web-like feature. I have an idea, but would need the data to confirm it.

We see it as a web - and the AI made it into a web-like structure - because it lacks the color differentiation present in the Hubble image. Of the 5 spikes going out from the center, the top one is actually a different color in the HST image and represents a background gap rather than material. After being blurred, it starts to have a very similar shape to the other blurred features (while the features are different, their blurred versions look morphologically the same), and since the red channel is clipped, the color also starts being the same (the difference would be in the non-clipped portion of the red channel - that is what I'm hoping to find in the linear data).
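A toy demonstration of that blurring effect (arbitrary shapes and blur width): two genuinely different features - a cross and a ring - become near-identical blobs after heavy blurring, so morphology alone no longer separates them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

y, x = np.mgrid[-15:16, -15:16]
cross = ((np.abs(x) <= 1) | (np.abs(y) <= 1)).astype(float)   # a plus sign
ring = ((x**2 + y**2 >= 16) & (x**2 + y**2 <= 36)).astype(float)  # an annulus

blur_cross = gaussian_filter(cross, sigma=5.0)
blur_ring = gaussian_filter(ring, sigma=5.0)

# Normalized cross-correlation between the two blurred shapes:
a = blur_cross - blur_cross.mean()
b = blur_ring - blur_ring.mean()
similarity = (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())
print(f"similarity after blurring: {similarity:.2f}")  # typically close to 1
```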


I think taking one artist's representation of the Hubble data and treating that as gospel is flawed as well. A real comparison would be to get the original Hubble data and apply the same processes to it. We could be arguing about features that are misrepresented in both data sets 🤷‍♂️

But if you have to blink back and forth to spot inconsistencies with the Hubble data, then you've done a very good job.


7 hours ago, powerlord said:

THEN I'll enter it into the compo. Have no fear @wimvb I have never won anything and doubt this'll be any different. 😞

Same here. But that shouldn't keep us from trying. You have an excellent image here, which could only have come out better if you'd set up your gear on a high mountain top. Best of luck when you enter it in the competition.

