
What to do about stars


MartinB


Even with the help provided by the Xterminator suite of processing tools, I don't think I'll ever fully warm to this aspect of AP, although I find it much less painful since they came on the scene. I'll definitely take the 30-day free trial of "EffortXterminator" if/when it is released; if it can produce better results from my data than I can, then I'll make a purchase and happily concentrate more on the data-capture side of this hobby.


Happily, if you gave 100 different APers the same set of files, the same software to process them with, and even the order the processing should take, you'd still have 100 different images to view...

Having for years had to produce sets of images and a signed piece of paper to say what I'd done, I now take great pleasure (and some frustration) in no longer having to justify the outcome. If I get something I like, I keep it. If not, it's worked on till I'm happy, or discarded.

If stars are the 'stars' of the composition then I make them the stars. If the backdrop is the feature then stars become less relevant and are treated as such.

I enjoy both the capture and the following drama.

Edited by fwm891

14 hours ago, ONIKKINEN said:

I would say it's only a matter of time before we have a tool like an all-encompassing AI where the user needs little to no input to turn a linear file into a processed, sharp, colour-accurate and aberration-free image (EffortXTerminator, maybe?).

Not saying it's the next few years, but 10-15 years? Maybe. The field of AI-infused software moves so fast it's really hard to tell where it goes next.

I'm also not a fan of something like this, but there is a market for it, I think, so it's only a matter of time before it happens.

I had wondered something similar. We already have it for questions, comments and prose in the form of ChatGPT. Whether AI uses neural networks and other mechanisms within itself to assess and process data in the form of an image (like Russell Croman's efforts), or whether (like Topaz) it is trained on available images and imposes that knowledge, is another aspect. In the latter case, given that we can process 10 images in 10 different ways according to taste, AI training on existing images may well result in a whatever-you're-having-yourself outcome. These tools don't parse and learn from a standardized set of descriptions and processing steps. If we had these, and had to log them for each image (like a FITS header, where such info was needed) in the form of final-image metadata giving the processing steps, values etc. that converted a raw image into what is posted, then it is possible that AI could see the influence of parameters and learn accordingly. But the artistic side of astrophotography is very inconsistent in how much information is provided, and there is no reason for it to be necessary unless quantitative measurements etc. are needed. Like the professional astronomers, who put up with ghosting and blooming in 'perfectly' linear CCDs to do photometry and other measurements, many of us start from quite similar stacked FITS files and end up with something that aligns with our taste.
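To make the idea concrete: here is a minimal sketch of the kind of per-image processing log I mean, in plain Python with JSON standing in for a real FITS header. The step names and parameter values are invented for illustration, not taken from any actual tool.

```python
import json

def log_step(history, step, **params):
    """Append one processing step and its parameters to an image's log."""
    history.append({"step": step, "params": params})
    return history

# Build a log for one image, in the spirit of FITS HISTORY cards.
history = []
log_step(history, "BackgroundExtraction", model="polynomial", degree=2)
log_step(history, "StarReduction", strength=0.8)
log_step(history, "HistogramStretch", midtone=0.25, shadows=0.001)

# Serialised alongside the posted image, a trainer could parse this
# and correlate parameter choices with the final result.
record = json.dumps({"target": "M31", "history": history})
```

If every posted image carried a record like this, an AI would have exactly the paired before/after-plus-parameters data it needs to learn from, which is what is missing today.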

But, to each their own. I like star fields and am not a fan of diminishing them, since the nebula is never the main target for me if it is part of that region of sky. Others like the nebula and its details as their main interest. I am also sensitive to 'added stars' and see them immediately - balancing the stretch for starless and stars can be tricky for some targets. While their representation is a function of the optical characteristics of the scope (Airy disc for refractors, diffraction spikes for some reflectors etc.), their colour, relative size and density are very appealing to me as the headline of an image, but not at all to some people. And between us all, I get to look at lots of images and see almost everything about a target.

Many complex processes are now single-click to start with, and I predict automatic processing will inevitably become an option very soon. We already have it on phones and PCs for pictures, but I foresee a series of single-click 'views' for a FITS file that include nebula prioritisation, a starless option, reduced stars, colour saturation (the candy crush colour calibration will make a comeback :) ), noise reduction etc., all built in to single-click options, just like the b&w, sepia, vivid, soft, polaroid and other effects we have for photos. The end result may not be what some, or perhaps many, prefer, but there is a wealth of existing data, often voted as excellent through awards or forum discussion, that AI can find, rank and learn from.
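Those single-click 'views' could be little more than named parameter bundles handed to the processing chain, like the filter presets on a phone. A hypothetical sketch - the preset names and every value are made up:

```python
# Hypothetical one-click "views" for a stacked image, analogous to the
# b&w / sepia / vivid effects on a phone camera. All values are invented.
PRESETS = {
    "nebula_first": {"star_reduction": 0.7, "saturation": 1.2, "denoise": 0.5},
    "starless":     {"star_reduction": 1.0, "saturation": 1.0, "denoise": 0.5},
    "star_field":   {"star_reduction": 0.0, "saturation": 1.1, "denoise": 0.3},
    "candy_crush":  {"star_reduction": 0.5, "saturation": 2.0, "denoise": 0.4},
}

def apply_view(filename, view):
    """Look up a preset and hand its parameters to the (imagined) pipeline."""
    params = PRESETS[view]
    # Placeholder: a real implementation would run star removal,
    # stretching, noise reduction etc. with these values.
    return {"file": filename, **params}

result = apply_view("m31_stack.fits", "nebula_first")
```

The hard part is not the preset table, of course, but the processing behind it; the point is only that the user-facing side could be this simple.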

In such a scenario, AI could track how many people liked an image, how soon after posting they reacted (emotional intensity), what comments were made, how the image stood in regard over time, and many other factors. This is easy for AI to do and, notionally, we all collectively would be the parents, or the village, that raised the AI child - i.e. the auto-process of a particular target based on all available data and our reaction to it, aesthetically and technically.
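As a toy illustration of that reaction-weighted ranking - the weighting scheme, field names and half-life value are entirely made up:

```python
from dataclasses import dataclass

@dataclass
class Reaction:
    liked: bool
    hours_after_post: float  # how soon the reaction arrived

def engagement_score(reactions, half_life_hours=24.0):
    """Weight each like by how quickly it came in: a flurry of early
    reactions suggests stronger emotional impact than a slow trickle."""
    score = 0.0
    for r in reactions:
        if r.liked:
            score += half_life_hours / (half_life_hours + r.hours_after_post)
    return score

# Two quick likes outrank two late ones under this weighting.
fast = [Reaction(True, 0.5), Reaction(True, 1.0)]
slow = [Reaction(True, 48.0), Reaction(True, 72.0)]
```

A real system would fold in comment sentiment, awards and long-term standing too, but the principle is the same: the community's reactions become the training signal.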

Imagine that 😬

Edited by GalaxyGael

7 minutes ago, GalaxyGael said:

I had wondered something similar. We already have it for questions, comments and prose in the form of ChatGPT. ...

I'm a lot less persuaded than some that artificial intelligence exists at all.  Clearly the term does refer to something new in the way that our machines operate, but should it be called intelligence? It belongs in a long tradition of using anthropomorphic terms in computing. 'Memory,' for example, is used of a computer but not of a page of text - yet my dictionary 'remembers' the meaning of every word printed in it.

Anyway, regarding the 'automatic image' which some see as lying in the future (and it might well do so) we face a question which is already very familiar to us. If we want to listen to music we can press a button and listen to a recording. If we want an excellent meal we can go to a restaurant. So does this mean we won't want to play an instrument or cook a meal? Surely not. We do these things, and many other things, because we like doing them. If all we want is a picture of M31, there are thousands of them we can have in front of us in seconds and for free. But, surely, what we really want is to make the picture ourselves, no?

Olly


10 minutes ago, ollypenrice said:

I'm a lot less persuaded than some that artificial intelligence exists at all. 

The term AI kind of baffles me too.
As I see it, human intelligence really develops more through the mistakes it makes than through the things it does correctly - although in many ways we never seem to learn 😂 But I will leave that argument there for all our sakes 🙂

15 minutes ago, ollypenrice said:

If we want to listen to music we can press a button and listen to a recording. If we want an excellent meal we can go to a restaurant. So does this mean we won't want to play an instrument or cook a meal?

Nice analysis, but thinking about it further, that is not quite true to what some people may be wanting.
Maybe it is more that they want to make an instrument but get a musician to play it for them, or they want to grow their own vegetables and rear a cow (heaven forbid) but then get a chef to create the meal.
And whilst it is not what you or I want to do, if you are only interested in the data acquisition then I see no reason not to use some wonderful script. As I said, in PI there is already one that does it all; I tried it once on some old data of mine and the results were acceptable, though thankfully not quite as good as my own attempt (well, that was my opinion 🙂).
It would be a fair fight if, when the image was shared, it had accompanying text to say 'processed in full with the PI DoItAll script', or whatever was used - but there is nothing to say they have to.

What would be a tragedy is if the script basically plate-solved your data and then retrieved other data to enhance the image or determine what should be there. If the process is a pure mathematical algorithm then fair enough: you will never get a silk purse, just the sow's ear, if the data set is not up to much.

Personally, I am in your camp and want to grow my own, take it to the table myself, and blow my own trumpet (something I am often accused of 🙂) - but each to their own.

The thread has taken a bit of a detour from the OP's question - now isn't that unusual?

Steve


1 hour ago, ollypenrice said:

... If all we want is a picture of M31, there are thousands of them we can have in front of us in seconds and for free. But, surely, what we really want is to make the picture ourselves, no?

Olly

I am in the same camp when it comes to choice, at least. So, aside from what I consider to be the main focus for my images and how they are processed, the big enjoyment is the act of processing and, of course (I can only speak for myself), the use of my own gear to image. That could easily go down a rabbit hole of remote vs home etc., and I'm not referring to that; but in my case, or in remote cases, it is the ability to choose what we do, and to spend the time and effort taking and enjoying our own images of something that has a million versions in existence already.

We also have the chance, with careful dedication, to find faint and new things out there. I'm on the fence as to whether computational approaches could tease out 'anomalies' across many, many images and flag common but oft-ignored data that could be new - I doubt it, unless we image enough that we have the data to make the discovery anyway. AI does this frequently with scientific data sets - correlations, interpolations etc. that many people miss - thanks to the brute-force nature of the millions of options it can work through rather quickly. But maybe that is not really relevant for us.

We will always have the choice to enjoy taking our own images, I think, and that's the big enjoyment and emotional factor in the rollercoaster ride of imaging. But AI and networking algorithms that can link, predict and act on existing archival data will be able to do what I wrote and more (a thought sparked by a recent showcase from colleagues developing such tools for large-scale data mining of battery systems, predicting failure modes and new chemistries from existing knowledge - it blew my mind). But, as now, I doubt we will apply a phone-app sepia-and-scratched-film filter to our images just because the option is there.

And I do think a single-click option can be done - it just needs someone to have a good enough reason to implement it, and that might be the death knell before it would ever happen. Then again, there are Vaonis and others making essentially single-click all-in-one systems. But the control and application of your own knowledge and experience is very limited in turnkey systems.

Anyway, I just bought a 460ex :) out of technological nostalgia, and this thread may have turned left at Orion.


We use tools to capture data and process it; some tools just make it easier. I used to focus manually with a Bahtinov mask but now let the autofocuser, camera and software do this for me, and this is the point: achieve an accurate and reliable result (nearly!) every time. If and when processing tools can do this on my raw data, I'll follow the same path. Can software do this on an arguably less quantitative aspect of AP? Time will tell. Sorry Martin, now way off topic.


7 hours ago, ollypenrice said:

We do these things, and many other things, because we like doing them. If all we want is a picture of M31, there are thousands of them we can have in front of us in seconds and for free. But, surely, what we really want is to make the picture ourselves, no?

Olly

There is something special about doing it yourself, but not all agree on the extent to which it has to be done by oneself. (AP discussions are technical gibberish to most, and I would argue most people are put off by them.) People who like EAA but not "full" astrophotography are one group who fit this analogy well: they are still producing an image in the end, although with a great degree of automation. Surely they feel it is their image, and not the same as having a look on Astrobin?

Even with smart scopes like the Stellina, EVscope, Dwarf2 and others, I think the person orchestrating the image feels it is their own rather than entirely done by machine.

Another analogy would be learning to compose music on a computer using synthetic instruments. You are still learning to create music, but you can skip five years of guitar lessons.

The meaning of DIY is vague and entirely subjective. I would like to say I'm in the "want to do it myself" category, but honestly there is no way to tell unless I get to try full automation one day - perhaps I'd like it? Progress progresses and we are all left behind, that is for sure.


On 16/01/2023 at 21:00, MartinB said:

So the SGL star challenge seems to have stimulated some thoughts about the treatment of stars in astroimages. Is star suppression the work of the devil, or can stars impair the visual impact of faint nebulosity? Are the latest star-suppression tools being used to take things too far, or are they a true blessing? Obviously there is no correct answer. Discuss, compare and contrast...

Personally, I prefer images to be as true a representation of reality as possible. I fear that some people will be handed a stick with which to beat the hobby if it strays too far in the direction of art rather than images of what's actually there. That said, there is always room for an artistic representation of a particular object, just so long as the author/creator explicitly confirms it as such. That's my own personal take on the matter.

