Pelican Nebula - OSC ASI2600MC - kinda 2nd light.



Recommended Posts

My second serious attempt with my new ASI2600MC OSC was four hours on the Pelican Nebula. I'm not quite sure what to think of it. I often feel like that after processing. I'm OK through the pre-processing stage - calibrating and integrating etc - and the first stages of linear processing because it feels like a pretty objective process. Once I get into the non-linear stages in Pixinsight I feel at sea because it seems to me to be a very subjective process with so many options. It's quite good data. I've done hardly any noise reduction. I haven't reduced the stars either; I'm always in two minds about doing that anyway. Stars are there and the camera captures them, so leave 'em be is my thinking. General comments and suggestions would be welcome.
 

PS Does anyone know what the object is ringed in the second photo?
 

45FA4D96-C635-48AA-BC35-3FF95B7BF7E3.jpeg

45FBEDE9-D655-4186-BDD7-C80A9D822E6E.jpeg


7 minutes ago, Ouroboros said:

I’m OK through the pre-processing stage - calibrating and integrating etc - and the first stages of linear processing because it feels like a pretty objective process. Once I get into the non-linear stages in Pixinsight I feel at sea because it seems to me to be a very subjective process with so many options

Totally agree with this, I enjoy the first bits, then it becomes too subjective


4 minutes ago, tooth_dr said:

Totally agree with this, I enjoy the first bits, then it becomes too subjective

I use Warren Keller's Inside PixInsight to guide my workflow. I apply each suggested process and almost always find it's over the top, which I don't like, so I spend ages backing it off and masking until the change is almost imperceptible. Among the few processes that really seem to improve pics (for me anyway) are curves, local histogram equalisation and colour saturation.


14 minutes ago, Ouroboros said:

I use Warren Keller's Inside PixInsight to guide my workflow. I apply each suggested process and almost always find it's over the top, which I don't like, so I spend ages backing it off and masking until the change is almost imperceptible. Among the few processes that really seem to improve pics (for me anyway) are curves, local histogram equalisation and colour saturation.

Bought that guide too, still using photoshop. Going to make a concerted effort to get onto it this season


7 hours ago, Ouroboros said:

Once I get into the non-linear stages in Pixinsight I feel at sea because it seems to me to be a very subjective process with so many options.

When I first started with DSOs (about a year ago, so I'm basically still 'getting started'), I really didn't enjoy processing at all - I found capture much more fun - but I find myself really starting to enjoy it now. I think downloading some of the IKI data helped; there's nothing like some really good quality data for practising and trying new things in processing!

That said though, I still hate colour balancing (in broadband). I never seem to get it right the first 2 or 3 times, so getting an image out there takes me a while. Case in point: I have an Andromeda image from about a month ago I'm still working on because I just can't get the colours right!

Edit: thinking about it, I also have a Pelican nebula from about July (I think) which I've sort of given up on because it just keeps coming out too purple.

Edited by The Lazy Astronomer

@The Lazy Astronomer What are the right colours though? I agree with you: getting the colours to look right is difficult. Because of that difficulty I tend to let photometric calibration decide what the correct star colours are and then apply that colour balance to the image. That seems to work - to my eye anyway. I don't like over-saturated colours.


10 hours ago, Ouroboros said:

Once I get into the non-linear stages in Pixinsight I feel at sea because it seems to me to be a very subjective

Hi

For precisely this reason, we gladly allowed our PI trial to expire. Removing the distinction between linear and non-linear data by moving to StarTools was a breath of fresh air.

Of course the dark art remains, but it is far more under your control.

In the end I suppose it's what you're most familiar with that counts.

Oh, and lovely pelicans:)

Cheers

Edited by alacant

I think all of this does take some getting your head round, and when I first started astrophotography not so long ago (around four years next spring) I did what I imagine everyone does: searched Google, blindly followed some of the many tutorials available and, in my case using PI, tried to emulate with my data some of the images I had seen on many websites.
Of course I never really matched the quality of other images, but as I was a novice by any standards I was still happy to produce any sort of recognisable image.
But I too was not happy with the fact that if I applied enough processes, or with NB images mapped the filters to different colours, I could almost create an image with whatever colours and intensities I wanted. I was starting to feel a fraud, because it felt like I was creating images, almost like an artist, rather than taking them with a camera.

I think the first thing (for me) to realise is that yes, we are taking true images of objects that are out there, but by eye alone we cannot see them (other than the odd blob for the brighter objects), and even visually through most telescopes they are not so obvious. So in order to see the full extent of the DSO we are imaging, the image has to be stretched non-linearly, and we have to accept it is not really a true image as such: even an image that has not been overstretched and colourised is still heavily processed so we can see all the nebulosity, or all the dust in a galaxy.
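To illustrate what that non-linear step actually does, here is a minimal sketch (Python/NumPy, purely for illustration - this is not PI's actual implementation). An arcsinh curve is applied to the luminance only, with the colour ratios carried through unchanged; the strength parameter `k` is an arbitrary choice for the example:

```python
import numpy as np

def asinh_stretch(rgb, k=50.0):
    """Non-linear arcsinh stretch applied to luminance only, so the
    R:G:B ratios (and hence the colour) survive the stretch."""
    lum = rgb.mean(axis=-1, keepdims=True)          # simple luminance proxy
    gain = np.arcsinh(k * lum) / (np.arcsinh(k) * np.maximum(lum, 1e-12))
    return np.clip(rgb * gain, 0.0, 1.0)

# A faint reddish pixel: the stretch brightens it but keeps its hue.
pixel = np.array([[0.010, 0.005, 0.002]])
stretched = asinh_stretch(pixel)
```

Because every channel of a pixel gets the same gain, a 2:1 red-to-green ratio before the stretch is still 2:1 afterwards - one way to stop colour washing out during a hard stretch.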

The second thing (again for me) was accepting that the colours in NB images are not true colours; they are there so that our eyes can at least see some of the different emissions of a nebula that we otherwise could not see.

And I am now happy with this, but I still wrestle with the temptation to over-process the images or make the colours too bright, trying instead (in my eyes) to make a more natural image (hard when they are anything but natural, because of what we have done to them).

Now, due to one thing and another, mainly the UK weather, my imaging time this year has so far been pretty poor, but I have taken steps to learn PI better and, more than that, to try to understand what I am doing with these processes.

The pre-processing is actually pretty easy and straightforward, as it is pretty much the same for everybody and for all data. There is a clear science behind it - in fact it's all pure maths - so I also now have a reasonable understanding of what we are doing.
Doing all the steps separately to start with was really good for me, as it helped me understand each step and what we were doing.
But now, for anyone with PI, the "WeightedBatchPreProcessing" script just does it all, and does it very well - much, much improved from the earlier scripts.
I do not think there is now any advantage to doing the pre-processing yourself with separate manual processes. Just point the script at the directory that holds all the data and it sorts everything out into different filters, exposure times etc. itself. Check everything is there and let your computer do the work.

The alignment and integration are also pretty straightforward.
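For the curious, the core of the integration step - a sigma-clipped average that rejects satellite trails and hot pixels before taking the mean - can be sketched in a few lines of NumPy. This is a bare-bones illustration, not what ImageIntegration actually runs (no weighting or normalisation), and the threshold and iteration count are arbitrary:

```python
import numpy as np

def sigma_clip_stack(frames, sigma=3.0, iters=3):
    """Average registered frames, iteratively rejecting pixels that
    deviate from the per-pixel mean by more than sigma * stddev."""
    data = np.asarray(frames, dtype=float)          # shape (N, H, W)
    mask = np.ones(data.shape, dtype=bool)          # True = pixel kept
    for _ in range(iters):
        kept = np.where(mask, data, np.nan)
        mu = np.nanmean(kept, axis=0)               # per-pixel mean of survivors
        sd = np.nanstd(kept, axis=0)                # per-pixel spread
        mask = np.abs(data - mu) <= sigma * np.maximum(sd, 1e-12)
    return np.nanmean(np.where(mask, data, np.nan), axis=0)

# 20 flat frames with one hot pixel: the outlier is clipped out.
frames = np.ones((20, 2, 2))
frames[0, 0, 0] = 100.0
stacked = sigma_clip_stack(frames)
```

A plain average would leave that pixel at 5.95; the clipped stack returns it to 1.0, which is why the integrated master is so much cleaner than any single sub.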

It's the bit after that which, for me, is not well understood, and yes, I could carry on blindly following tutorials, but I would like to know more about what it is I am doing.
With this in mind I am currently going through the many fundamentals instruction videos/tutorials from Adam Block Studios, whatever you want to call them. I did um and ah about it, as they do not come cheap, but I'm glad I did. There is a lot of material to watch, so I'm not even a quarter of the way through yet, but so much more now makes sense to me, and I've also learnt many things I was not aware you could very easily do in PI.
When you see what you can do in PI, you see what an amazing bit of software it is. I am not there yet by a long way, but I also think the more you know about what you are actually manipulating with the post-processing processes, the more at ease you feel doing it, and maybe the better you know how much to do.
Also, I think the post-processing becomes more natural, so it ends up being much quicker to achieve a good image, and (hopefully) it becomes less of a chore than it seems when you just follow somebody else's instructions without really knowing what's occurring.

Sorry for the long post, and I hope it's not all just the waffle of a complete novice, but I see so many people who do not seem to get on with PI or struggle with it (as I have done). The more I see of it, the more I think it is an incredible tool - or a big set of tools - unfortunately poorly documented, with nothing really (as yet) like a true manual, only books about it from third parties. Some of them are very good; it's just hard to keep up with how PI continues to develop.
I know there are many other tools out there as well, but having paid what is not a small amount of cash for PI, and then invested even more in books about it and now these Adam Block instructional videos, I am determined to get reasonably proficient with it.

And as @The Lazy Astronomer says, being able to download the IKI data this year has really helped me, especially with the lack of my own so far.

Steve

 

Edited by teoria_del_big_bang

3 hours ago, Ouroboros said:

@The Lazy Astronomer What are the right colours though? I agree with you: getting the colours to look right is difficult. Because of that difficulty I tend to let photometric calibration decide what the correct star colours are and then apply that colour balance to the image. That seems to work - to my eye anyway. I don't like over-saturated colours.

@vlaiv can tell you what the 'wrong' colours are! 😁

Honestly though, I believe him. If someone were to write a program which could tell me the correct colour balance for a broadband image, to accurately represent a given DSO, I would happily throw fistfuls of dinars at them (hint, hint 😉).


10 hours ago, Ouroboros said:

Not sure I want to learn a new application.

if you're happy with what you have and have the time to spend with it, then absolutely no need to change. But as you'd isolated your main issue, I thought I'd point out that these days, FWOABW 'the point of no return' really can be effectively avoided.

cheers


10 hours ago, The Lazy Astronomer said:

@vlaiv can tell you what the 'wrong' colours are! 😁

Honestly though, I believe him. If someone were to write a program which could tell me the correct colour balance for a broadband image, to accurately represent a given DSO, I would happily throw fistfuls of dinars at them (hint, hint 😉).

I assume that's more or less what photometric calibration does. Astronomers have measured the spectral characteristics of many stars. My understanding is that this calibration is applied to the colour balance of our images so that the stars in them have the same colours. It has the advantage that the choice of colour balance is based on scientific measurement rather than being arbitrary. To my eye at least, images look about right after photometric calibration, although the colours are quite often subdued. That can be adjusted later with a little more saturation if desired. These remarks are perhaps most relevant to pictures of galaxies rather than the Pelican, which is a hydrogen emission nebula.
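As a toy illustration of the idea (not how PhotometricColorCalibration is actually implemented internally), the white balance can be posed as a per-channel least-squares fit between measured star fluxes and the relative fluxes a catalogue predicts for those stars. All the numbers below are invented for the example:

```python
import numpy as np

# Invented data: measured R,G,B fluxes of three field stars (rows)...
measured = np.array([[1.0, 2.0, 4.0],
                     [2.0, 1.0, 3.0],
                     [3.0, 2.0, 1.0]])
# ...and the relative fluxes catalogue photometry says they should have.
expected = np.array([[2.0, 1.0, 6.0],
                     [4.0, 0.5, 4.5],
                     [6.0, 1.0, 1.5]])

# Per-channel scale s_c minimising ||s_c * measured_c - expected_c||^2:
scales = (expected * measured).sum(axis=0) / (measured ** 2).sum(axis=0)
balanced = measured * scales    # star fluxes after colour balance
```

Because the scales come from measured star photometry rather than taste, the resulting balance is reproducible, which is exactly the appeal of the photometric approach.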

Edited by Ouroboros

10 hours ago, Ouroboros said:

Ta. Not sure I want to learn a new application. I’ve already changed horses once from PS to PI. 

That's a little like how I felt after buying into PI. To be honest I really struggled with PS, and maybe the fact that I did not give PS much of a chance before going down the PI route helped me.
From the little I used PS, it does seem that the ethos behind them (if that's the correct phrase - probably not) is totally different, and you have to drive the two in very different ways.
And I am guessing that is the main reason many who have used PS for a long time really do not seem to take to PI at all.

Although PI is a bit bewildering, having got into it and now starting to understand more and more of the processes, I really do like it, and the way it is driven - dragging instances onto the desktop, the way it initiates processes - really seems logical to me.

But to use it proficiently, yes, I agree you need to put a lot into it. I think after a while it does actually become quite enjoyable, when the penny starts to drop and you start to understand some of the things you are doing.
Basically I have put a fair bit of money and time into PI; it is now doing what I want, and I am determined to become reasonably proficient at it if it kills me 🙂

Steve
 


10 hours ago, The Lazy Astronomer said:

@vlaiv can tell you what the 'wrong' colours are! 😁

Honestly though, I believe him. If someone were to write a program which could tell me the correct colour balance for a broadband image, to accurately represent a given DSO, I would happily throw fistfuls of dinars at them (hint, hint 😉).

Not only that - I can also tell what the "right" colors are :D

Problem is that a software solution alone won't produce the best results. You also need to add the "hardware" part to get correct color. The camera needs to be calibrated to produce accurate colors (similarly to how displays need to be calibrated to be accurate), and that requires shooting some sort of color chart and having a calibrated source.

I'm planning to write a small utility to help with that (generating different calibration targets and reading off the results from raw data), but it's a matter of available time (which I hope to have a bit more of in the near future).
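The software half of that calibration - once you have linear raw values for the chart patches and the chart's reference values - often boils down to fitting a 3x3 colour correction matrix by least squares. Here is a sketch with invented patch values (a real chart would have 24+ patches, and the raw values would come from properly dark/flat-calibrated linear data):

```python
import numpy as np

# Invented linear camera RGB for four colour-chart patches (rows)...
raw = np.array([[0.20, 0.10, 0.05],
                [0.10, 0.30, 0.10],
                [0.05, 0.10, 0.40],
                [0.30, 0.30, 0.30]])
# ...and the reference linear RGB the chart vendor specifies for them.
ref = np.array([[0.275, 0.070, 0.025],
                [0.080, 0.370, 0.060],
                [0.015, 0.045, 0.495],
                [0.360, 0.270, 0.300]])

# Fit M so that raw @ M ~= ref in the least-squares sense.
M, *_ = np.linalg.lstsq(raw, ref, rcond=None)
corrected = raw @ M             # camera colours mapped to reference colours
```

Applying `M` to every pixel of an image shot with the same camera (in the same linear state) then maps camera colour into the reference colour space.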


8 minutes ago, vlaiv said:

Problem is that a software solution alone won't produce the best results. You also need to add the "hardware" part to get correct color. The camera needs to be calibrated to produce accurate colors (similarly to how displays need to be calibrated to be accurate), and that requires shooting some sort of color chart and having a calibrated source.

I'm planning to write a small utility to help with that (generating different calibration targets and reading off the results from raw data), but it's a matter of available time (which I hope to have a bit more of in the near future).

Interesting - I look forward to this if you do get the time.
I know full well what you mean about monitors, though, after spending ages processing on my laptop (with a 2nd monitor) and then posting my image on SGL, only to see it on my desktop monitors. I soon bought a Spyder 3 to calibrate all my monitors the same.

Steve


8 minutes ago, Ouroboros said:

I assume that's more or less what photometric calibration does. Astronomers have measured the spectral characteristics of many stars. My understanding is that this calibration is applied to the colour balance of our images so that the stars in them have the same colours. It has the advantage that the choice of colour balance is based on scientific measurement rather than being arbitrary. To my eye at least, images look about right after photometric calibration, although the colours are quite often subdued. That can be adjusted later with a little more saturation if desired. These remarks are perhaps most relevant to pictures of galaxies rather than the Pelican, which is a hydrogen emission nebula.

Problem with photometric color calibration is twofold.

1. It's not implemented properly (PixInsight), or is implemented partially (Siril)

2. Stellar sources represent a rather narrow part of the visible gamut; they can be used to correct for atmospheric influence, but won't produce very good results for a complete calibration

In fact, the way Siril implements it is adequate for atmospheric correction if you already have color-calibrated data to begin with.

Another problem is that we need a paradigm shift in the way we process our images. The classic RGB model that most people think in when processing images is suited to content creation, but not to accurate color reproduction. Most software with color management features is geared towards the following goal:

Make colors be perceived the same by the content creator and the content consumer. In other words - if I "design" something on a computer under one set of lighting conditions, and the image is then printed and put under different lighting in a living room - or perhaps an object was dyed and put in a kitchen - we want our perception of that color to be the same.

In astrophotography, if we want correct colors, we need to think in terms of the physics of light (and that is something most people are not very keen on doing) - and ask what color we would see if we had that starlight in a box next to our computer. We want the color on the computer screen to match the color of the starlight from the box.


What a great, thought-provoking thread. I am at the start of my imaging odyssey and really struggle to get anything that resembles the brilliant images one sees here. Getting to grips with what the processes do is for me the first task, as I don't want to become too mired in generating "artistic" and visually striking images. I know authentic is a bit of a misnomer, but I feel that less is more, given what you actually see when looking upwards. Looking forward to getting to understand and use the software more effectively than I am managing ATM!


48 minutes ago, vlaiv said:

In astrophotography, if we want correct colors, we need to think in terms of the physics of light (and that is something most people are not very keen on doing) - and ask what color we would see if we had that starlight in a box next to our computer. We want the color on the computer screen to match the color of the starlight from the box.

I think there will always be some people who just want to bring out great colours in their images, and that's fine - they are your images and you are totally free to introduce poetic licence, for want of a better phrase.
But I would also think that many do not understand what the true colours are, or certainly don't know how to achieve them. Apart from monitors not being calibrated, I guess nobody's eyes are really calibrated either, and people perceive colours differently.

I for one would like to get somewhere near the correct colour in broadband, but in all honesty I do not know what the correct colour is. If anything, before getting into astronomy, if I had been asked the colour of stars I would probably have said 99% of them are white, with the odd reddish one.
Now, although I know that is not true, I still probably set my colours by looking at other (in my mind better) images and matching them, again as I see them on my monitor (which is now colour calibrated, I hope, so that is one hurdle I managed to get over).

@vlaiv during my few years in AP you have helped me in so many threads, and I cannot thank you enough, but I have to admit that often I did not fully understand everything you said - it was just a little beyond me. As time goes by I am understanding more and more, but I still worry a bit that when I join in threads like this I am seen as talking nonsense, as I do not yet understand a lot of it.
AP is so much more than putting a camera on the end of a scope and obtaining focus and guiding - in fact there is a pretty well laid-out format for how to do all that, and it doesn't take too long for it all to fall into place - something maybe not realised when first getting into it. I certainly didn't 🙂

And then NB imaging is a whole different bone of contention 🙂

Steve

Edited by teoria_del_big_bang

1 hour ago, alacant said:

if you're happy with what you have and have the time to spend with it, then absolutely no need to change. But as you'd isolated your main issue, I thought I'd point out that these days, FWOABW 'the point of no return' really can be effectively avoided.

cheers

Yes, I see what you're saying. Until you mentioned it I hadn't seen the transition from linear to non-linear in the PI workflow as limiting, but as you say I guess it can be seen that way. Actually, I merely mentioned non-linear post-processing in passing to make it clear that it was some of the processes (not all) from that point forward that I have found difficult to use successfully.


@teoria_del_big_bang Thanks for your long and thoughtful post. Actually I like Pixinsight - it probably sounded as though I was having a gripe about it in my opening post. I wasn't; I was criticising myself really. I like PI's methodology. Thanks for the pointers to Adam Block's videos. Yes, they are pricey, aren't they? Not 'arf! I think I might avoid those for the time being. Also thanks for drawing my attention to the IKI data, which I'm ashamed to say I was only dimly aware of. Is there any OSC data in there, do you know? I couldn't see any at a quick glance.


On 02/10/2021 at 11:27, Ouroboros said:

@teoria_del_big_bang Thanks for your long and thoughtful post. Actually I like Pixinsight - it probably sounded as though I was having a gripe about it in my opening post. I wasn't; I was criticising myself really. I like PI's methodology. Thanks for the pointers to Adam Block's videos. Yes, they are pricey, aren't they? Not 'arf! I think I might avoid those for the time being. Also thanks for drawing my attention to the IKI data, which I'm ashamed to say I was only dimly aware of. Is there any OSC data in there, do you know? I couldn't see any at a quick glance.

I didn't really think you were having a go at PI - just maybe more frustrated, and I get it, I was too.
Regarding the Adam Block videos: yes, they do not come cheap, and I put it off for a long while (don't forget us Yorkshire folk do not part with money easily 🙂). But in the end my logic was that I had spent somewhere in the region of £250 on the software itself (can't remember now exactly how much it was) and then probably another £100 on books on PI, and still wasn't totally getting to grips with it, so I went for it. I think I paid £133 for the Fundamentals, and it has transformed the way I feel about PI (I liked its methodology, like yourself, but now have an even better feel for it).

The Fundamentals are not actually as elementary as I thought before purchasing. I emailed Adam Block after I had seen him give one of the talks on SGL on the weekends that started during lockdown, as I was impressed with his talk. He replied very promptly, saying that they were in no way elementary, and that if I bought them and thought they were not for me after a week then he would refund me.

God knows what information is in the more advanced stuff, "Pixinsight Horizons". I thought that after I am thoroughly done with the Fundamentals courses (which will be a while, as he does keep adding and updating) then maybe I will try Horizons. At $250 it is not cheap, but from what I have learned in the Fundamentals I am sure it is worth it.

If you do not fancy that, I can really highly recommend the book "Mastering PixInsight" by Rogelio Bernal Andreo. The printed book is not cheap, as it has to be imported from the US, but it is also available as a download for a very reasonable cost. It is actually two books in one: one takes you through PI, and the other is a reference guide to all the individual processes. Rogelio is also very approachable and contactable via email, and he does update the online version from time to time as PI develops.

Mastering Pixinsight

I am not sure if there is any OSC data on there, but I think the data comes from the same setup, so I guess it is all mono, either RGB or NB.

Steve

Edited by teoria_del_big_bang

3 hours ago, teoria_del_big_bang said:

I for one would like to get somewhere near the correct colour in broadband, but in all honesty I do not know what the correct colour is. If anything, before getting into astronomy, if I had been asked the colour of stars I would probably have said 99% of them are white, with the odd reddish one.
Now, although I know that is not true, I still probably set my colours by looking at other (in my mind better) images and matching them, again as I see them on my monitor (which is now colour calibrated, I hope, so that is one hurdle I managed to get over).

Here is a simple test you can do as a start - just to see how good your processing workflow is. Since you have a calibrated monitor, that simplifies things quite a bit.

Provided that your monitor is sRGB calibrated (6500K white point and 2.2 gamma - or the actual sRGB gamma), you can do the following test: find any sort of colorful image online and display it on your monitor. Take your camera with a lens (this is the tricky part, as often people don't have a lens or a way to mount one - but use a guide scope with a short FL, for example) and take an image of your monitor in a dark room - or rather, of that colorful image displayed on your monitor.

After processing, look at both images side by side - the original and your own. They should match in color.

You can "spice things up a bit" and use a few short exposures which you'll stack later - that is just to simulate astronomical images and the very high dynamic range we get after stacking. After stretching your data, you should again get the same (or very similar) colors.

As a first step, you can actually try the following: find a colorful image again and display it on your screen, then take a photo with your mobile phone and compare the two. You'll be surprised how much color matching is done in the phone. If any smartphone can do this, I wonder why we could not do the same in AP.

3 hours ago, teoria_del_big_bang said:

As time goes by I am understanding more and more, and still worry a bit that when I join in threads like this I am seen as talking nonsense as I do not yet understand a lot of it.

Sometimes I feel very similar - I'm afraid that if I'm too technical, people will just dismiss it as nonsense (and to them it probably is) :D.

