
M101 feedback



1 hour ago, Pitch Black Skies said:

First attempt with Siril

Nice.

The main issue I have with Siril (and I admit to not having looked much into why) is the detail. Almost certainly my lack of patience though!

[attached: two comparison screenshots]


28 minutes ago, alacant said:

Nice.

The main issue I have with Siril (and I admit to not having looked into this) is the detail. Almost certainly my lack of patience though!


It looks a lot cleaner when I put it through Topaz; however, I only have the trial version. Another goodie for the wish list.

 

[image attachment]


I like Siril for stacking, but I don't find it particularly "strict" when it comes to excluding marginal images.  I tend to use the grading in ASTAP first and exclude bad images, then stack in Siril.

Still, gotta love M101!  


50 minutes ago, alacant said:

Nice.

The main issue I have with Siril (and I admit to not having looked much into why) is the detail. Almost certainly my lack of patience though!


This is one of StarTools' best parts. A one-stop shop for processing.

Siril is just a linear processing tool, getting your data stretched and colour calibrated before you move on to the details. You could apply deconvolution and wavelet sharpening to the above picture, but it won't come out of Siril looking like the bottom one no matter what.


1 hour ago, alacant said:

Nice.

The main issue I have with Siril (and I admit to not having looked much into why) is the detail. Almost certainly my lack of patience though!


GIMP to the rescue? It has a very good tool for sharpening:

[screenshot: sharpening result]

(This was done on an 8-bit image; results are even better on 32-bit data.)

Look up the G'MIC-Qt add-ons, under Detail / Sharpen [Gold-Meinel].

 


20 minutes ago, vlaiv said:

Gimp

GIMP is nice, and the G'MIC thing gives a vast array of tools; however, IMHO its sharpening can't compete with modern signal processing methods.

If you haven't already, try StarTools' SVDecon ;)


3 minutes ago, alacant said:

You obviously haven't tried StarTools' SVDecon;)

No, I tried it, but don't see much value in it personally.

Some people like it, and I'm sure they find it a great tool; however, that does not mean it is the only tool capable of processing an image (nor that everyone will like it).

You mentioned that you have issues with detail when processing in Siril - I just offered an alternative, GIMP, which is good at revealing the detail.

Again, you might prefer StarTools in that role, but that does not mean everyone will, and it is good to give people alternatives.

 


1 hour ago, JonCarleton said:

I like Siril for stacking, but I don't find it particularly "strict" when it comes to excluding marginal images.  I tend to use the grading in ASTAP first and exclude bad images, then stack in Siril.

Still, gotta love M101!  

You can use the plot tab to pick your subs!

After registering your calibrated frames, the plot tab will show a graph of all your subs with their measured FWHM and roundness. Tick off the ones that are clear outliers and you will have a much sharper stack afterwards.

You can also draw a selection around a star and run the "PSF for sequence" tool, which creates a plot of many more things, like SNR, background level, (relative) magnitude and more. With this you can deselect subs that had low SNR or high background levels (maybe a passing cloud or a neighbour's lights, etc.).

Automatic quality estimates are, in my opinion, not that good, and the extra effort of manually weeding out the bad subs is worth it.
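The manual weeding described above can be approximated numerically. Here is a toy sketch (not a Siril feature; the FWHM values and the threshold are invented) that flags subs whose FWHM is a clear outlier, mirroring what you would tick off by eye in the plot tab:

```python
# Hypothetical helper: flag subs whose FWHM deviates strongly from the
# median, as you would do by eye in Siril's plot tab. Data are made up.
import statistics

def flag_outliers(fwhm_values, k=1.5):
    """Return indices of subs whose FWHM is more than k population
    standard deviations away from the median."""
    med = statistics.median(fwhm_values)
    sd = statistics.pstdev(fwhm_values)
    return [i for i, f in enumerate(fwhm_values) if abs(f - med) > k * sd]

fwhm = [2.1, 2.0, 2.2, 3.9, 2.1, 2.3, 5.0, 2.2]  # arcsec, invented
print(flag_outliers(fwhm))  # -> [3, 6]: the two bloated subs
```

A robust scale estimate (e.g. the median absolute deviation) would be less influenced by the outliers themselves, but the idea is the same.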


My attempt, a quick 10-minute process: photometric calibration in Siril, then all the rest in PS. Stretched using asinh10 curves, then just the Camera Raw filter for clarity and denoise (I could have smoothed it further, looking at the jpeg).

[image: M101]

 


3 minutes ago, alacant said:

Mmm. Did you use Gimp sharpening on your image?

Yes.

I usually apply "masked" sharpening (using the luminance of the image as a mask so that the sharpened image is blended in only where it is bright), but this time I did not bother, as the background is not that noisy.
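The masked blend described above can be sketched in a few lines of NumPy. This is a toy illustration with a crude box-blur unsharp mask, not GIMP's actual implementation:

```python
# Sketch of luminance-masked sharpening: blend the sharpened image in
# only where the image is bright, so background noise is not amplified.
import numpy as np

def masked_sharpen(img, amount=1.0):
    """img: 2D float array in [0, 1]. Returns a luminance-masked unsharp mask."""
    # crude 3x3 box blur as the low-pass (a Gaussian would be more typical)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blur = sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    sharpened = np.clip(img + amount * (img - blur), 0.0, 1.0)
    mask = img  # luminance itself as the blend mask
    return mask * sharpened + (1.0 - mask) * img
```

On a flat background (mask near zero) the output stays essentially untouched, which is the whole point of the mask.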


1 hour ago, Pitch Black Skies said:

It looks a lot cleaner when I put it through Topaz; however, I only have the trial version. Another goodie for the wish list.

 


Keep your eye out for deals. They have one on just now, but it's just average. I got the whole suite at Xmas for, I think, 60 quid.

It's definitely the best bit of software I've gotten myself in the last year. I don't use Sharpen much, mainly DeNoise (which can also sharpen) and Gigapixel, which is great for enlarging or reducing.

Oh, and they just updated Gigapixel for the Mac M1 - it is now screamingly fast. Whereas it used to take maybe 30 seconds to preview an image on my Intel i9 8-core/16-thread system, it now takes about 5 seconds on my wee Apple Mac mini. Gotta love M1s - even the Mac haters surely... 😛

Can't say I've ever been a fan of GIMP, I'm afraid - using it feels like I've time-travelled back to the late 90s.

stu


Unfortunately, there is much misinformation here - from the usual suspect - about colouring.

Citing a Hubble narrowband composite (acquired with F435W (B), F555W (V) and F814W (I) filters) as a visual spectrum reference image is all you need to know about the validity of the "advice" being dispensed here.

In ST, the Color module starts you off in the Color constancy mode, so that - even if you don't prefer this mode - you can sanity check your colour against known features and processes.

A visual spectrum image of a nearby spiral galaxy should reveal a yellow-ish core (less star formation due to gas depletion = only older stars left), a bluer outer rim (more star formation), red/brown dust lanes (white-ish light filtered by dust grains) and purple/pinkish HII areas dotted around (red Ha + other blue Balmer lines, some O-III = pink). Foreground stars should exhibit a good random selection of the full black body radiation curve. You should be able to easily distinguish red, orange, yellow, white and blue stars in roughly equal numbers (provided the field is wide enough and we're not talking about any sort of associated/bound body of stars).

You are totally free (and able!) to achieve any look you wish in StarTools (provided your wishes follow physics and sound signal processing practices). For example, if you prefer the old-school (and decidedly less informative) desaturated-highlight look of many simpler apps, it is literally a single click on the "Legacy" preset button in the Color module.

With this dataset in particular, you do need to take care that your colour balance is correct, as the default colour balance comes out too green (use the MaxRGB mode to find green dominance, so you can balance it out). It's - again - a single click on a green-dominant area to eliminate this green bias.
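The MaxRGB idea is simple to illustrate: classify each pixel by its dominant channel and look at the proportions. This is a toy diagnostic in NumPy, not StarTools' implementation:

```python
# Toy MaxRGB diagnostic: for each pixel, find the dominant channel.
# A green-dominated background hints at a colour balance bias.
import numpy as np

def maxrgb_fractions(rgb):
    """rgb: (H, W, 3) float array. Returns the fraction of pixels whose
    R, G or B channel is the per-pixel maximum."""
    dominant = np.argmax(rgb, axis=-1)
    return [float(np.mean(dominant == c)) for c in range(3)]

rng = np.random.default_rng(0)
img = rng.random((100, 100, 3))
img[..., 1] += 0.1                 # inject a mild green bias
r, g, b = maxrgb_fractions(img)
print(g > r and g > b)             # the bias shows up as green dominance
```

On a well-balanced background you would expect the three fractions to be roughly equal.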

Detailed help with all of the above can be found in the manual, on the website, and in user notes created by the community on the forums. The Color module documentation even uses M101 to demonstrate how to calibrate properly in StarTools and what to look out for.

StarTools may be easy to get to grips with quickly on a basic level. However, like any other software, learning what it does (and preferably how it does it) will be necessary to bend it to your will; every module is utterly configurable to your tastes (again, as long as actual recorded signal is respected).

The other important aspect of image processing is using a reference screen that is properly calibrated, in both the colour and brightness domains. If you can see a "vaseline"-like appearance in the background after noise reduction, your screen is set too bright. If the background looks pitch black, it is set too dark.

ST is specifically designed to use the full dynamic range it assumes is at its disposal on a correctly calibrated screen, with a correct tapering-off of the brightness response (even then, you can add back some Equalized Grain if you really must and don't like the - normally invisible - softness). Unfortunately, a shocking number of people have never calibrated their screens and just assume that what they see is what someone else will see. :(

It doesn't matter which software you use; if your screen is poorly calibrated (too bright or too dim), you will make incorrect decisions and/or fret about things that are not visible on a truly calibrated screen (which, mind you, closely correlates with the average screen you can expect your audience to view your work on!). When in doubt, check your final image on as many screens as you can get your hands on. Better yet, add a second (or third) screen to your image processing rig if practical.

Finally, one last word of caution on software that neurally hallucinates detail, like Topaz Sharpen AI or DeNoise AI: using these "tools" is a bridge too far for many APers, as the "detail" that was added or "shaped" was never recorded and, while looking plausible to a layperson, is usually easily picked up as a deep fake by peers with a little more experience.

I hope any of this helps!


10 minutes ago, jager945 said:

Unfortunately, there is much misinformation here - from the usual suspect - about colouring.

Citing a Hubble narrowband composite (acquired with F435W (B), F555W (V) and F814W (I) filters) as a visual spectrum reference image is all you need to know about the validity of the "advice" being dispensed here.

These are not narrowband filters:

[attached: filter transmission plots for F435W, F555W and F814W]


All good stuff. All I'd say is that all my screens are calibrated with Spyder 3s, and denoise in ST always leaves fuzzy backgrounds for me; I just prefer other methods to denoise. I'd love to see a lot more YouTube tutorials on StarTools. Like many people, I learn far better from watching someone do stuff than from reading a manual. There are a fair few on there, but not as many as I'd like to see, and too many using wonderful data that most of us just don't have.

And yup, Topaz can be taken too far. Sometimes that might be what you want, sometimes not. It's not a bad thing that it makes stuff up - you just need to watch for it and decide whether you want that or not. Some folk don't, some do. No one is right except the person doing the processing - we're not shooting images for NASA*

Any plans to include Starnet2++ support?

stu

*Well, unless you are trying for APOD; then it's probably a good idea to lay off the AI-created detail, I grant you...

 


17 minutes ago, jager945 said:

You are totally free (and able!) to achieve any look you wish in StarTools (provided your wishes follow physics and sound signal processing practices).

Ah - that's an important bit, I think. I don't want to be constrained by physics or sound signal processing practices. I'm happy to do whatever I feel like. Not a criticism, but I do think that hits the nail on the head for why I usually use Affinity Photo for at least some processing. Just like when I'm editing my landscape photos, I'm quite happy removing someone from the background, changing the colour of the sky, or making a foreground object bigger than it is. The same rules apply to astrophotography for me - what we do (well, what I like to think I do) is create art; it's not science. So I'm fine with sometimes letting StarTools do a bit, then moving the image into Affinity Photo to take out the stars and replace them, add Ha, do whatever I feel like. It's not that I expect StarTools to do that - it is constrained by the requirements you've imposed on it, which is 100% OK. 😁👍


7 hours ago, Pitch Black Skies said:

First attempt with Siril

[image: M101, 24 hr 5 min, Siril]

Looks much more natural to me than your other attempts.

I took this image into Darktable, sharpened with Highpass and the Contrast Equalizer, and saturated the colours:

[image attachment]

 

Regarding Topaz: I do not use that software because, as I understand it, it doesn't do "simple" maths on your image to enhance what IS already there. It makes educated guesses to clean the image up, based on the millions of images the neural network was fed. Even if it looks natural, it is not.

But that's preference. Some people are more into the artistic aspect of the hobby. You will find out for yourself where you want to draw the line in post-processing.

For me it's much more satisfying to show only the data I collected with my gear, even if it makes for a less clean look.

 

Quote

Ah - that's an important bit, I think. I don't want to be constrained by physics or sound signal processing practices. I'm happy to do whatever I feel like.

@powerlord Well, you could also just use Hubble images as a luminance layer and go for it 😆

 

 

 


14 minutes ago, Bibabutzemann said:

Looks much more natural to me than your other attempts.

I took this image into Darktable, sharpened with Highpass and the Contrast Equalizer, and saturated the colours:


 

Regarding Topaz: I do not use that software because, as I understand it, it doesn't do "simple" maths on your image to enhance what IS already there. It makes educated guesses to clean the image up, based on the millions of images the neural network was fed. Even if it looks natural, it is not.

But that's preference and I wouldn't judge. Some people are more into the artistic aspect of the hobby. You will find out for yourself where you want to draw the line in post-processing.

For me it's much more satisfying to show only the data I collected with my gear, even if it makes for a less clean look.

Yup, correct. Same with Starnet2++. As you say, each to their own.

But even if you stick to working with the data you have got, I would slightly moan about the use of the word 'natural', as I don't really think it has any meaning here. 'Natural' is that you don't see it at all, after all! There is a heck of a lot of data in there you are not using, especially in the core. I don't think the argument that it's natural not to show that detail stands up any more than it being natural TO show it. Both are equally wrong/right. Is it 'natural' if you take a photo inside looking out of a window and all you see is a white square? That's what you recorded, but it's not 'natural'... your eyes see out the window fine. So maybe you HDR the image. It's just as much 'natural' then as before, because the word isn't really meaningful IMHO. The definition of natural is yours alone, really. So in the end I'd argue your processing 'suffers' just as much artistic input as someone who tries to bring out every piece of data in their processing. One is no more 'natural' than the other IMHO. However, it does seem to be a prevalent belief in the astrophotography community for some reason.

stu


2 minutes ago, powerlord said:

Yup, correct. Same with Starnet2++. As you say, each to their own.

But even if you stick to working with the data you have got, I would slightly moan about the use of the word 'natural', as I don't really think it has any meaning here. 'Natural' is that you don't see it at all, after all! There is a heck of a lot of data in there you are not using, especially in the core. I don't think the argument that it's natural not to show that detail stands up any more than it being natural TO show it. Both are equally wrong/right. Is it 'natural' if you take a photo inside looking out of a window and all you see is a white square? That's what you recorded, but it's not 'natural'... your eyes see out the window fine. So maybe you HDR the image. It's just as much 'natural' then as before, because the word isn't really meaningful IMHO. The definition of natural is yours alone, really. So in the end I'd argue your processing 'suffers' just as much artistic input as someone who tries to bring out every piece of data in their processing. One is no more 'natural' than the other IMHO. However, it does seem to be a prevalent belief in the astrophotography community for some reason.

stu

I would say there are several things that make an image less natural.

1. Adding features or removing them

2. Changing the order of things - if A >= B in brightness, I expect that to remain so in the image

3. I personally see colour as one of the things that should be kept as close as possible to the real world

The best judge of whether an image is "natural" or not should be, as with any measurement (after all, taking an image is a form of measurement), repeatability.

If many people shoot the same galaxy and they all render a star or a feature in their images, then yes, that feature is there and it is natural for it to be shown in the image. Don't confuse a poor measurement with "natural". If I shoot an image indoors and get a white frame, how is that different from taking an image with too long an exposure that saturates my sensor, or from using a ruler so dirty I can't read off the correct number?
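Point 2 above (preserving the brightness ordering) is exactly what any monotone stretch guarantees. A small illustration with an asinh stretch; the function and parameters here are illustrative, not any specific tool's:

```python
# A monotone stretch such as asinh keeps A >= B relationships intact:
# if pixel A was brighter than pixel B in the linear data, it still is
# after stretching.
import math

def asinh_stretch(x, a=10.0):
    """Map a linear value x in [0, 1] to a stretched value in [0, 1]."""
    return math.asinh(a * x) / math.asinh(a)

vals = [0.01, 0.05, 0.2, 0.9]                # increasing linear brightness
stretched = [asinh_stretch(v) for v in vals]
print(stretched == sorted(stretched))        # ordering preserved: True
```

A stretch that clips or inverts values anywhere would break this guarantee, which is one way to make the "less natural" charge precise.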

I would also like to add that I don't object to people being creative with their "photographs" - but I do object to them being called photographs. Term it differently - call it "astro art" instead.

People in general are used to the concept of a photograph as something realistic - depicting things as they are. When you take an astro photo and then let your artistic side take over, you do the public a disservice by posting your work as astrophotography rather than astro art.


Astrophotographers sit somewhere on a spectrum that spans astronomy as a science, trying to uncover facts from limited data, and photography as an art, trying to create the most pleasing image with whatever means are available.

It IS up to the person doing the imaging and processing to decide what is right and what is not, but also up to the viewer to decide if a process went too far into fantasy from their perspective.


1 minute ago, ONIKKINEN said:

It IS up to the person doing the imaging and processing to decide what is right and what is not, but also up to the viewer to decide if a process went too far into fantasy from their perspective.

I'm not sure I agree with this part.

How can a person, observing an image of an object for the first time, decide whether something is real or not?

Imagine someone looking at an image of a platypus for the first time, never having heard of the animal before.

[image: platypus]

A hairy thing with a duck-like beak and interdigital webbing? Come on, really? That's a thing and not photoshopped?


Here's my effort in StarTools. I always use Film Dev for my post Wipe stretch.

My process here was:
1. AutoDev
2. Bin
3. Crop
4. Wipe
5. Film Dev to 95.34% with Skyglow set to 5%
6. HDR
7. Sharp
8. Decon
9. Colour - clicked on the outer core to reduce green and reduced the percentage to 143% on the "constancy" colour setting
10. Noise Reduction set to 2 pixels

[image: M101, 19 hr 10 min, calibrated]

