
Done with being cheap. Pix advice


Recommended Posts

11 hours ago, ollypenrice said:

So to whom do the authors of this menu think they are talking?

To those who try big numbers and small numbers and see what happens? Same as PS, I think. A Feather of 2 is...? Good? Bad?


22 minutes ago, Viktiste said:

To those who try big numbers and small numbers and see what happens? Same as PS I think. A Feather of 2 is...? Good ? Bad? 

In Ps I'm offered the chance to expand a selection by x pixels and then feather it by whatever I like, x or another value. I don't need to be a mathematical genius to understand that, if I expand it by 100 pixels and then feather it by 100 pixels the selection will be expanded and faded by that amount. What is more, when I ask Ps to expand by 100 pixels the selection's expansion is shown on the image in real time so I can see what it means. That's perfect: if it's too big I just reduce it, or vice versa.  The big numbers and small numbers you mention are presented visually on the screen so I can see what they mean.  Photoshop's genius lies in its communicative power.
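That expand-then-feather operation maps onto two simple mask operations: a morphological dilation followed by a blur of the hard edge. A minimal sketch (assuming NumPy/SciPy; the function name and the sigma-to-feather mapping are illustrative, not Photoshop's actual implementation):

```python
import numpy as np
from scipy import ndimage

def expand_and_feather(mask, expand_px, feather_px):
    """Rough analogue of Photoshop's Select > Modify > Expand then Feather.

    mask: 2-D boolean selection. Expansion is a morphological dilation;
    feathering is approximated by Gaussian-blurring the hard edge.
    """
    expanded = ndimage.binary_dilation(mask, iterations=expand_px)
    # sigma ~ feather_px / 2 gives a soft edge roughly feather_px wide
    return ndimage.gaussian_filter(expanded.astype(float), sigma=feather_px / 2)

# A single-pixel selection expanded by 3 px becomes a soft-edged blob
sel = np.zeros((21, 21), dtype=bool)
sel[10, 10] = True
soft = expand_and_feather(sel, expand_px=3, feather_px=4)
```

Both numbers stay directly visible in the result, which is exactly the real-time feedback being praised here.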

I think that PI's user interface could be edited by someone who understood communication but, as it stands, it is a communication disaster. It is inexcusably bad and, frankly, strikes me as being proud of the fact. PI's explanations are absolutely perfect for those who already understand what it is doing and don't need it explaining. In this respect it closely resembles the kind of explanations which abound in the autistic world of IT.

Olly

 


12 hours ago, ollypenrice said:

I'm always intrigued by these comparisons between the relative complexity of Photoshop and Pixinsight and there is no right or wrong answer since, if you find one easier than the other, then that's it, you just do.

I knew nothing whatever about image processing or any kind of digital photography when I started taking astrophotos but, back then, post processing really was an almost universally 'Photoshop activity.' Nearly all the available tutorials were for Ps, as were the bought-in actions.  I think the reason for the irresistible rise of Photoshop (which is now a commonly used verb in anglophone countries) derives from its user interface which is largely based on metaphors drawn from film photography and printing.  Unsharp masking, dodge and burn were darkroom techniques, layers come from printing, the eraser from draughtsmanship and so on. These metaphors clicked with me, intellectually, and made me feel at home - though a little overwhelmed at first. However, the consistency of the underlying logic was reassuring. Compare that with this randomly chosen bit of Pixinsight menu:

[attached screenshot: a PixInsight tool dialog]

What does this mean?  Continuity order of 2?  If you understand the mathematics behind image manipulation then fine, this will speak to you. But how many imagers are in this position? So to whom do the authors of this menu think they are talking? If they are well entrenched on the Asperger's continuum they won't care...

However, terms like opacity, feather, erase, select, minimize, maximize, etc etc, though used metaphorically to describe mathematical manipulations, make intuitive sense to me and create an analogue processing experience.

Olly

Olly, as you say there is no right and wrong, but there are always two sides to things.
I know you are not a fan of PI, and you have explained a few times in various threads why you do not get on with it, and I understand.
But I, like many, really like PI, and as I have also said before, I think that once you understand the way it works and the penny drops, it is not as difficult as originally perceived.
I also think the example you give above is not really fair, because whilst I agree that at first sight the parameter names may not seem understandable, if the whole process were displayed and you knew what the process was, they would probably make more sense. On top of that, if you hover over any parameter, quite an extensive explanation appears.
I think you already know from previous threads that I have had almost the opposite experience: I could not understand Photoshop at all, and some of that may come from having very little previous knowledge of photography before getting into AP. Whilst I am not saying PI came easily to begin with, it did come, whereas I still cannot drive PS; but then again I have spent a fair bit of time with PI and far less with PS.

I am not saying that nothing about PI is difficult, and the fact that they continually develop and improve it as time goes by is great, as you get the upgrades included in the initial outlay. But it is also a bit of a double-edged sword: yes, it's great that they improve the program, but it means you can never stop learning it, and many of the older tutorials become out of date. Yes, they still work, as the old processes are still there, but they may not show the best way of doing things at present.

But whilst we are in different camps regarding processing software, as I say, there is no right and wrong; it all depends on what you can get your head around and feel comfortable with 🙂

Steve


 
 


And sorry @Anthonyexmouth, your thread has wandered a bit, not helped by me. You have bought PI already, and it is not for us to argue whether that was a good decision or not; you asked for help regarding plugins, scripts and tutorials.

I think the most needed scripts mostly come with the PI download, but I will check what I have later and let you know. For tutorials I would recommend looking at Adam Block's, though not all are free.

Steve


On 13/05/2022 at 17:08, Anthonyexmouth said:

I'm not saying it's easier or more intuitive, but APP was driving me nuts. What I like about PI is that it's very modular and the processes are easier to compartmentalise in my head. I can pull out a few and work through them.

Why was APP driving you nuts exactly?


I think there are two things to master in processing, whatever you use to do it in, one of them pretty obvious and the other less so.

1 - Our processing software contains a bag of pixel-modifying tools. We need to know what they are and how to use them. That's the obvious bit, though thinking about the program that way might help to bring understanding.

2 - Much less obviously, we need to become skilled in looking at our pictures.  This ability is one of the big things we learn as we gain experience. As beginners we'll post images with defects that we haven't fixed, not because we couldn't have fixed them but because we haven't seen them. At one time I had a checklist to help with not missing things because I hadn't seen them. (Background sky colour? Background sky brightness? Star colour? Green cast? Faintest signal fully exploited?  Histogram goes all the way from black point to white point? Can the data be pushed still further?)
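The histogram item on that checklist can even be checked numerically. A rough sketch, assuming a float image scaled 0..1; the clip fractions are arbitrary illustrative choices:

```python
import numpy as np

def histogram_span(img, lo_clip=0.0005, hi_clip=0.0005):
    """Check the 'histogram goes from black point to white point' item.

    Returns the levels below/above which only a tiny clipped fraction
    of pixels fall. If the span is well inside 0..1, the data has
    headroom and can be pushed further.
    """
    lo, hi = np.quantile(img, [lo_clip, 1.0 - hi_clip])
    return lo, hi

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.6, size=(100, 100))  # a flat, washed-out frame
lo, hi = histogram_span(img)
# lo is near 0.2 and hi near 0.6: the full 0..1 range is unused
```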

 

Olly


Some good points there, Olly. My experience with PI is that I'm OK with it up to the transition from linear to non-linear processing. Up to that transition I'm pretty much working through a sequence of fairly well defined tasks. After the transition to non-linear it feels like I'm in a multidimensional possibility space. It all becomes a lot more subjective. I often feel I'm not sure what the problems are with my image, and even less sure how to set about solving them. I sort of bumble about in this multidimensional space and usually end up with something presentable; and I am getting better at it. But I often feel a tad dissatisfied with the result. Someone earlier said something about not enjoying processing. I think I might be acquiring a dislike for it too. I've got data sitting on my hard drive from last winter waiting for me to 'do'.


54 minutes ago, Ouroboros said:

Some good points there, Olly. My experience with PI is that I'm OK with it up to the transition from linear to non-linear processing. […] I think I might be acquiring a dislike for it too.

Might it be worth putting some time into understanding the principles of the nonlinear tools rather than experimenting with them practically? I know there's a temptation to open an image and push sliders this way and that to see if they can give us something we like, but this can't be the way to do it. (Lots of U-tube gurus seem to think it is, of course. 'Flounder along with me,' seems to be their motto.)

One thing I would say about processing, though, is that with only a few exceptions it is best to take small steps. Big improvements, for me, are made from lots of tiny improvements.

Olly

 


1 hour ago, Ouroboros said:

I’m OK with it up to the transition from linear to non-linear processing. Up to that transition I’m pretty much working through a sequence of fairly well defined tasks. After the transition to non-linear it feels like I’m in a multidimensional possibility space. It all becomes a lot more subjective.

I think with processing that is the stumbling block or hurdle for everyone. But it is a hurdle you can get over, for sure.

Whilst in the linear stage there is pretty much a well-defined path of what to do and how to do it, and with any software worth its salt you can watch one or two tutorials, or read a book, and after a short while just do it on your own.
The steps are pretty much laid out, with very few decisions to make; the aim is clear: to end up with all your frames aligned and integrated, with anomalies such as satellite trails, dead pixels, dust bunnies etc. removed. Yes, there are a few things to decide, such as the integration method and how you remove the dead pixels, but these are not difficult to learn.

But the non-linear stage - that is where you can create almost whatever you want; but what is right?
You can end up with a terrific representation of your target (after all, in some ways it is a representation, as the target does not appear naturally to our eyes like that, otherwise there would be no need to stretch it in the first place; and if in NB the colours aren't even true at all), or with bad processing you can end up with a poor, washed-out image missing lots of detail, or go mad and make it awful, with too much contrast, crazy colours and stuff in there that actually is not there.

As I was writing this, Olly replied with more or less what I was going to say, but he is far more concise. He is probably like that with his processing too, which is why his images end up so great: he does just enough to bring the image to view, without spending hours tweaking here and there and eventually taking the processing too far and ruining the image altogether (mind you, a nice dark sky also helps 🙂).
One of Olly's bits of advice, if I remember rightly, is to be very gentle with any noise reduction; watching many tutorials, I think many people spend a lot of time on noise reduction and often remove stuff that should be there.

So, as Olly has said, a lot of small steps is far better than a few big steps, and as I am learning now, mostly by watching Adam Block's tutorials, understanding what you are doing to those pixels is the key. If you follow tutorials religiously and just enter the parameters shown, you learn nothing and end up having to follow the tutorial every time you process a new image.
You also don't want to end up putting in parameters willy-nilly, hoping they will magically do what you want.
I think that's when it becomes a chore and you begin to hate processing.
Also, with the small steps, I would say regularly comparing before and after images when applying a process is good practice; and do not be afraid to undo, go back and try again - don't just carry on and try to re-correct by applying another process.

I think you need to enjoy processing; it is such a big part of AP. Otherwise, what are you doing that makes the hobby a joy: setting up a scope and camera, picking a target, then telling the software to take loads of images and going to bed? These days the imaging software does pretty much everything for you, even refocusing every so often, so a good setup should be capable of getting good data, depending on sky quality. The processing of that data is what makes the image your image, and a unique one.

I am not saying this as any expert, far from it; I still have a way to go, and with the weather and lack of spare time so far this year, getting data is my issue. Although you can always use somebody else's data: the data from the IKI competitions on SGL is a great way to practise, as you know the data is good.

Steve


1 hour ago, Ouroboros said:

Some good points there, Olly. My experience with PI is that I'm OK with it up to the transition from linear to non-linear processing. […] I think I might be acquiring a dislike for it too.

That’s me too! You couldn’t have described it better. Just last Friday night I found over 24hrs of RGB Ha SII OIII on the Bubble from the 20/21 season I hadn’t looked at yet! I can build an Obsy & automate it to run pretty much autonomously but when it comes to processing I get so far to a point I almost glaze over. It’s like I run out of time & steam!

Anyway, back on track: I found StarXTerminator & the new NoiseXTerminator PI plugin versions very good. I was using the EZ Suite noise reduction, but so far I'm very impressed with NoiseXTerminator, and it is so quick the fans on my Mac have no time to kick in.


As far as I can tell, the 3rd party scripts I have added:-

NormalisedScaleGradient
Generalised Hyperbolic Stretch (GHS) V2

 

Modules I have added:-

StarXTerminator
NoiseXTerminator
StarNet
StarNet2
 

As far as I can see that's all; it's not easy to see what is native to PI and what is 3rd party.

Steve 
 


Thanks @ollypenrice & @teoria_del_big_bang for your encouraging posts. I think I have lost enthusiasm for processing recently, which is a pity because I have been getting some quite nice data from my new-ish ASI2600. I sometimes find that with interests, though: I make some progress and then lie fallow for a while before picking it up again. The trouble with processing is that the learning curve is steep but the 'forgetting curve' is even steeper. 😀 A well known symptom of anno domini. I have been able to type up a several-page workflow for the pre-processing and early processing stages (background subtraction, colour balance etc.). It's basically a summary of the main steps covered in the first few chapters of Warren Keller's book Inside PixInsight. Having that list is a great help. So far I have been unable to construct a similar 'algorithm' for the later stages of post-processing. Probably the multiple objectives and parameter space of post-processing are less suited to a 'list' of actions. I take on board Olly's comments about learning and understanding the processes. I recognise I need to attain a more structured approach. Thanks.


29 minutes ago, Ouroboros said:

I sometimes find that with interests though. I make some progress and then lie fallow for a while before picking it up again. The trouble with processing is that the learning curve is steep but the ‘forgetting curve’ is even steeper. 😀   A well known symptom of anno domini. 

I really do empathise with you, as I have felt the same. The more you do it, I think, the better it gets, once you know what you are doing. There's nothing more frustrating than following a tutorial, or even your own notes, to find that what you did on one nebula doesn't produce the image you wanted on another nebula, and that processing a galaxy is nothing like what you did before, so you have to start working things out all over again.

And on top of that you can look at several tutorials that all do things differently.
And even then, in NB, what colours should I really be getting?

It's not easy.

I too find that the forgetting curve seems to override the learning curve, and whilst that is probably made worse with age, I guess it's just down to practice and not having long periods of hiatus.

1 hour ago, Ouroboros said:

 I have been able to type up a several page work-flow for the pre-processing and early processing stages - background subtraction, colour balance etc.  It’s basically a summary of the main steps covered in the first few chapters of Warren Keller’s book Inside Pixinsight. Having that list is a great help.  So far I have been unable to construct a similar ‘algorithm’  for the later stages of post processing.

Something I keep doing, but again it is not easy, as different subjects can require different actions. But most are the same for the early stages, so maybe I need to make a flow for the linear stage; then for the non-linear there may be more than one workflow, maybe several for different types of target.
I think most people will find the actual stretching, and the bit that comes after it, the hardest by a long way. Probably because so much depends on the quality of the data, the subject itself, light pollution, whether broadband or narrowband, and very much on your expectations of what the image will end up like and steering it in that direction. I think you are correct that a strict workflow is not really possible, as in some ways this stage is part artistic licence rather than a strict set of rules you follow to get a certain result.

What I do like is Olly's suggestion of a checklist to see whether you have achieved certain objectives in the image, rather than a list of what to do to get there.

Steve


I think a fixed workflow can work for the first few steps. In my case that would be edge crop, DBE or ABE and, sometimes, SCNR green. From my site that's all I need for a decent background sky and colour calibration but from a polluted site I suppose I'd need Background Neutralization and Colour Calibration as well. (Yes, all this is in Pixinsight! 😁)

After that, I don't think one workflow fits all, which is where the ability to see what the picture needs comes in. Say I'm ready to start stretching an Ha image. If it's going to be a standalone Ha I'll use a gentler stretch for a more natural look. If the Ha is going to go into the red channel of an LRGB, however, I'll use a different and far more aggressive stretch with a huge initial lift. This will give me extreme contrasts which will look better once 'diluted' by the softer look of the red channel. A softer Ha stretch would be washed out  and a bit lame when added to red.  Another early decision also concerns the initial stretch: is this stretch going to cover the full dynamic range of the image or will I need separate stretches for the brightest parts, to be blended later? This affects the stretch to be used. Yet another early decision; are the stars going to be a problem? If we have a dense field of bright stars we'll need to plan ahead and either try to mask the stars slightly (always difficult) or remove them altogether with a view to replacing them later.
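To make the gentle-versus-aggressive distinction concrete: an arcsinh stretch is one common stretch family (not necessarily the tool Olly uses), and a single strength parameter shows how a "huge initial lift" pulls the faint end up much harder:

```python
import numpy as np

def asinh_stretch(img, strength):
    """Arcsinh stretch, normalised so 0 maps to 0 and 1 maps to 1.
    Larger strength lifts the faint end harder."""
    return np.arcsinh(strength * img) / np.arcsinh(strength)

linear = np.linspace(0.0, 1.0, 5)
gentle = asinh_stretch(linear, strength=5)        # standalone-Ha look
aggressive = asinh_stretch(linear, strength=100)  # big initial lift for the red channel
```

Both curves pin 0 to 0 and 1 to 1; only how hard the faint end is lifted differs, which is exactly the choice being described between a standalone Ha and an Ha destined for the red channel.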

Etc!!!!

Olly


@ollypenrice

Maybe we could divide the whole workflow into a couple of separate steps and then define "standard" and "non-standard" ways to do each? For example:

1. Pre-processing - this is calibration and stacking. Not much to choose from here - there are only a couple of ways one can stack, and in reality one algorithm can cover 99% of cases as the best option.

2. Processing - this again can be very "standardized", as it involves a set of steps that are always performed the same:

- cropping, if there is a need for it (I would say cropping should be part of the stacking process)

- background wipe

- color calibration

- initial stretch (which will be very gentle, to allow people to see what is there in the image)

to name a few.

3. Post-processing. I'd say this step is optional. Much like daytime photography, one can choose to keep the image as it comes from the camera after standard processing, or decide to enhance it further for a wanted effect. This is something that can't and should not be standardized - and I think that software like PS or Gimp is the best tool for it (as it is made for that purpose).
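The "standard" steps in (2) can be sketched very roughly. These stand-ins are far cruder than PI's DBE/ABE and colour-calibration tools, and are only meant to show what each step does to the pixels; all names are illustrative:

```python
import numpy as np

def background_wipe(img):
    """Background wipe (a very crude stand-in for DBE/ABE): fit and
    subtract a linear gradient so the sky background comes out flat."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    gradient = (A @ coeffs).reshape(h, w)
    return img - gradient + gradient.mean()

def neutralize_background(rgb):
    """Colour-calibration stand-in: scale the channels so the
    background (per-channel median) comes out neutral grey."""
    med = np.median(rgb, axis=(0, 1))
    return rgb * (med.mean() / med)

def initial_stretch(img, gamma=0.5):
    """Very gentle initial stretch so faint signal becomes visible."""
    return np.clip(img, 0.0, None) ** gamma

# A pure-gradient 'sky' is flattened to a constant level
yy, xx = np.mgrid[0:40, 0:50]
tilted = 0.001 * xx + 0.002 * yy + 0.1
flat = background_wipe(tilted)
```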

 


44 minutes ago, vlaiv said:

@ollypenrice

Maybe we could divide whole workflow into couple separate steps and then define "standard" and "non standard" ways to do it? […]

What's your opinion on stacking? DSS, PI or something else?


2 minutes ago, Anthonyexmouth said:

What's your opinion on stacking? DSS, PI or something else?

I'm probably not the best person to ask that question, as I can't put myself in a purely user perspective.

I look at it from an algorithmic perspective - what sort of data transform I'm happy with. I can say what I think is the best thing software should do with the data - but none of the currently available software does it all.

- There should be an option for different binning methods at the calibration phase, including split binning. Debayering should also be available in a split variant (I think there is a PI script for this, and perhaps Siril?).

- I think that software should use sophisticated resampling methods like higher-order Lanczos resampling (a free option that does this is Siril).

- I think that software should do mosaic stitching out of the box (APP does this?).

- I think that subs should be linearly normalized (PI does this) with LP gradient compensation (no software does this).

- I think there should be per-pixel weighting instead of per-sub weighting for SNR optimization (no software does this).

I would also like to see some advanced features like FWHM matching (with use of deconvolution and blurring) and things like that.
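The "split" debayering in the first point, as I read it, extracts the four CFA positions as separate half-resolution planes instead of interpolating. A sketch, assuming an RGGB pattern:

```python
import numpy as np

def split_debayer(cfa):
    """Split 'debayering': instead of interpolating missing colours,
    pull the four CFA positions out as four half-resolution planes.
    An RGGB Bayer pattern is assumed."""
    r  = cfa[0::2, 0::2]   # red sites
    g1 = cfa[0::2, 1::2]   # first green sites
    g2 = cfa[1::2, 0::2]   # second green sites
    b  = cfa[1::2, 1::2]   # blue sites
    return r, g1, g2, b

cfa = np.arange(16).reshape(4, 4)  # toy 4x4 mosaic
r, g1, g2, b = split_debayer(cfa)
```

The advantage is that every output pixel is a real measured sample, with no interpolation correlating the noise between neighbours.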

If you are asking what my current workflow is when stacking images:

I use ImageJ to calibrate the data myself (including any split debayering or binning needed). Then I use Siril to register the data. I use ImageJ if I need to stitch mosaics. I use ImageJ plugins that I wrote for sub normalization, and a plugin that I wrote for stacking (both of which handle those special cases I like).

I would certainly be happy to have one software package where I would not need to take care of these steps manually. I might end up writing one myself for that purpose.

 


11 minutes ago, Anthonyexmouth said:

What's your opinion on stacking? DSS, PI or something else?

I know you are asking vlaiv, but as you have PI, have you used the WBPP script?
For me it was a game changer.
Yes, like most scripts it does nothing that you cannot do manually in PI, but it automates everything: pretty much, you add all your files - the usual lights, darks, flats, dark flats or flat darks (whatever you call them) and bias - and generally the defaults are fine; just press go and it does everything for you, from calibration to stacking.
It allows you to change various things, including the registration and integration parameters if you want, or to let the script determine these depending on the number of frames; you can use normalisation and drizzle as you do with manual integration.
If you do not have darks of the correct exposure, you get the option to use darks of differing exposures and PI will scale them to the exposure lengths of the lights.
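The idea behind that dark scaling is that only the thermal signal grows with exposure time, so the bias has to come out before scaling and go back in afterwards. A sketch of the first-order version (PI's dark optimization actually fits the scale factor to the data, so this is only the simple exposure-ratio idea):

```python
import numpy as np

def scale_dark(master_dark, master_bias, t_dark, t_light):
    """Scale a master dark taken at one exposure to suit another.

    Only the thermal signal scales with time, so remove the bias,
    scale the remainder by the exposure ratio, then add the bias back.
    """
    thermal = master_dark - master_bias
    return master_bias + thermal * (t_light / t_dark)

bias = np.full((4, 4), 100.0)
dark_300s = bias + 30.0  # e.g. 0.1 ADU/s of dark current over 300 s
dark_60s = scale_dark(dark_300s, bias, t_dark=300, t_light=60)
# dark_60s carries one fifth of the thermal signal: bias + 6.0
```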
When everything is loaded there is a tab to check that all that is needed is there.
[attached screenshot: the WBPP check tab]

You can also click on the lights, or even the flats and darks if not using masters, to see a nice calibration diagram.
[attached screenshot: calibration diagram]

You can include cosmetic correction too.

As I said, you can use normalisation and drizzle if you want, but I prefer not to select those options and instead use the NormaliseScaleGradient script to weight the images; in my opinion it does a better job of normalisation, and if you want to drizzle it can now do that too.


Steve


What's worth mentioning now, as it's a big "PI" deal, is that not too long ago using the WBPP script would bring up a PI warning box saying that doing so was suboptimal and that for best results it was advisable to do things manually. That is no longer the case: the warning has gone, and the PI advice is that the script is now the preferred method....

Also, as Steve will no doubt testify, a lot of work, and friction, has gone into making the Local Normalization tool an "option" to NSG......


21 minutes ago, teoria_del_big_bang said:

I know you are asking vlaiv but as you have PI have you used the WPBB scrips ? […]

No, not yet. I tried watching a few YouTube videos but it went over my head. I have messed with the trial a few times over the years but only bought it properly this last weekend, and due to a screw-up with calibration frames I'm just in the process of rebuilding my library, so I've got no data to play with at the moment. Hopefully I'll have it all done soon, then wait for a clear night.


14 minutes ago, scotty38 said:

What's worth mentioning now as it's a big "PI" deal is that not too long ago using the WBPP script would bring up a PI warning box that doing so was sub optimal and for best results it was advisable to do things manually. That is no longer the case, the warning has gone and the PI advice is the script is the preferred method now....

Also as Steve will no doubt testify to there has been a lot of work, and friction, gone into making the Local Normalisation tool an "option" to NSG......

I think an awful lot of work has gone into WBPP in the last 12 months or so. Up until maybe the beginning of 2021 (can't remember exactly, time goes so damned quick) I did not really use it and did it all manually, but it seems superb to me now and I can't see why you wouldn't use it. Once used, I think exactly what to do sticks with you; it's such an easy learning curve, relatively simple, and it just works, as far as I can see anyway.

Regarding the NSG script, yes, there is some sort of friction between the makers of PI and John Murphy, who was the instigator of NSG (with help, I think, from Adam Block and probably others), but I am not really certain what it is all about, except that the last major upgrade of PI for some reason deleted the NSG script and I had to load it through the repository.

I believe PI have changed something in the way they weight frames for integration, and they believe that is how it should be done.
I cannot say for certain who is right or wrong; I only did one comparison on some data, and I was convinced the NSG weighting gave a better result on integration than letting PI do it in WBPP, more to do with the background being better normalised, with the result that the subject seemed more defined.

Now, whether DBE would have coped with both results equally I am not sure, but NSG certainly makes applying DBE much easier, as it has very little to do; in fact I could probably have left it out altogether in my test.

I intend to try the same test on other data at some stage, but to be honest NSG is so simple to use that I see no reason not to use it; except, I suppose, that if the WBPP script works just as well then it can all be done automatically, with fewer actions required.

Steve


52 minutes ago, teoria_del_big_bang said:

I believe PI have changed something in the way they weight frames for integration and believe it is how it should be done.

Do you know on what basis the weights are decided?

There is a fairly simple way of determining the proper weights for a simple average of values - provided that you have both the value and the noise of every sample - but:

a) we don't have that in an image

b) there is no single weight that will be suitable for the whole image

Even if we had exact noise and signal levels, one set of weights is optimal for only one set of signal/noise pairs.

Imagine the following scenario: one images at the edge of town in a direction away from the LP, but the target is low in altitude; then, as the mount tracks, the target gets higher but also moves towards higher LP.

After frame normalization you can end up with a case where the background noise levels are equal but the noise levels associated with the target signal are different (or vice versa). Which weights are you going to use? 0.5 : 0.5, which suits the background noise levels, or say 0.6 : 0.4, which suits the noise associated with the target?

 

 


Just to expand on the above - the correct answer is:

use 0.5 : 0.5 for the background parts of the image and 0.6 : 0.4 weights for the target (each part of the image should have its own weights assigned to the subs, giving a close-to-optimum solution - that is what I mentioned above under "per pixel" weighting).
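The "fairly simple way" referred to above is inverse-variance weighting: each sub weighted by 1/σ². A sketch reproducing the 0.5 : 0.5 and 0.6 : 0.4 example (the second noise level is chosen so the weights come out at exactly 0.6 : 0.4):

```python
import numpy as np

def inverse_variance_weights(sigmas):
    """Optimal weights for averaging independent measurements:
    w_i proportional to 1 / sigma_i**2, normalised to sum to 1.
    This minimises the variance of the weighted mean."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return w / w.sum()

# Equal background noise in both subs -> 0.5 : 0.5
w = inverse_variance_weights([2.0, 2.0])

# Target noise differs between subs -> weights shift to 0.6 : 0.4
w2 = inverse_variance_weights([2.0, 2.0 * 1.5 ** 0.5])
```

Per-pixel weighting is then just this same formula evaluated with a noise map per sub rather than a single σ per sub.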


I stack in AstroArt and find it intuitive, organized, effective and incredibly fast. The speed has become much more important since we started shooting larger numbers of shorter subs with CMOS cameras.

Olly

