
Moving from PixInsight to Photoshop



I like PixInsight and I use it for most of my processing.  However, I am not so skilled in its use that I can do 'everything' with it and I invariably end up in Photoshop towards the end of processing, so that I can add just those final (and usually catastrophic) 'tweaks'.

When the time is right, I generally save a copy of my PixInsight FITS or XISF file as a 16-bit TIFF.  I have tried leaving it at 32-bit (PS can open such files).  However, it seems that a number of my usual PS processes don't like the 32-bit files ('Despeckle', for example).  Also, I have had very ropy results converting the 32-bit file to 16-bit within PS (I get blown-out whites).
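
For what it's worth, blown-out whites in a 32-to-16-bit conversion are usually what happens when float values slightly outside the nominal [0, 1] range get scaled and wrapped rather than clipped first.  A minimal sketch of a safe conversion (Python/NumPy, purely illustrative - this is not what either program actually runs):

```python
# Illustrative only: converting 32-bit float data (assumed normalised to
# [0, 1], as PixInsight holds it internally) down to 16-bit integers.
# Clipping before scaling stops slightly out-of-range values from wrapping
# into blown-out whites or black speckles.
import numpy as np

def to_uint16(img_f32):
    clipped = np.clip(img_f32, 0.0, 1.0)           # pin out-of-range values
    return np.round(clipped * 65535.0).astype(np.uint16)

demo = np.array([-0.01, 0.0, 0.5, 1.0, 1.02], dtype=np.float32)
print(to_uint16(demo))   # extremes are pinned to 0 and 65535, not wrapped
```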

So I suppose my first question is: is what I am doing (that is, saving a 16-bit image from PixInsight) the correct way of going about things?

The reason I ask is that when I do this with my carefully stretched non-linear files my black point seems less than optimal.  Look at the big gap on the left of my levels histogram, for example:

[attached screenshot: the Photoshop Levels histogram, showing a large empty gap at the left]

And when I use the eyedropper tool to measure the level of my background sky, I get:

[attached screenshot: eyedropper reading of the background sky level]

I just wanted to check that this is what others find and that I don't have something out of whack somewhere.

Ta


Sorry, no answers, but I can confirm that I have the same issues importing 32-bit into Photoshop. The PS world is very different to PI. In PI we work happily with raw/linear/non-linear/8-bit/16-bit/32-bit/float. In PS I find anything other than 16-bit rather frustrating.



PS is of course great for the tweaks that PI lacks.



So I export 16-bit for PS, as you do.



I am still scrabbling up the learning curve so wait to see what others say. I bet there is a secret ctrl/alt/shift import function.


Once your data is no longer in linear format then 16 bits (per channel) should be good enough to do the rest in PS (as long as you have stretched it properly, i.e. making full use of the 16 bits per channel available to you by not clipping highlights and blacks).

In your example, you could adjust the histogram in PI and cut off the lower end as long as you don't start clipping the blacks (it will tell you how much you are throwing away).
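
A rough numeric sketch of that lower-end cut (Python/NumPy with made-up values; in practice PI's HistogramTransformation shadows slider does this and reports how many pixels you are clipping):

```python
# Made-up numbers: shifting the black point so the histogram gap disappears.
# `black` is chosen just below the darkest useful pixel, so nothing clips.
import numpy as np

def shift_black_point(img, black):
    """Map [black, 1] -> [0, 1]; anything below `black` would clip to 0."""
    return np.clip((img - black) / (1.0 - black), 0.0, 1.0)

sky = np.array([0.12, 0.15, 0.6, 1.0])     # hypothetical pixel values
print(shift_black_point(sky, black=0.10))  # sky now starts just above zero
```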

PS only supports limited functionality at 32 bits per channel.


I always finish off my images in PS after the initial PI background neutralisation, colour calibration, DBE or ABE, and histogram transform.

In PixInsight I use the STF stretch on the histogram; I believe this is a near-optimum stretch that doesn't clip.

From PI I always save the image as 16-bit for PS; this seems to work well.


I have a different experience.  I almost never use STF for the actual stretch.  I find that the STF can be way too harsh, especially after DBE.  That may be because I have lesser-quality data or I am getting worse gradients requiring more 'aggressive' DBE.  Thanks for the reassurance about 16-bit to PS.

Link to comment
Share on other sites

I have the same experience going from PixInsight to Photoshop (histogram gets thrown out of whack) and can't entirely explain it, except to say that I know of no way to go from 32 bits of data down to 16 without losing a fair amount of information. I.e., it might be an artifact of reducing the range.

Things usually clean up well enough afterwards but it was unnerving at first.

Link to comment
Share on other sites

I do DBE and SCNR (green) in PI and then head off into civilization (Photoshop.) :grin:  I heartily dislike the PI approach. Let's take this from the starting point. We all want to 'zone' our processing. We may want to noise reduce our faint stuff and sharpen our bright stuff (but not the stars.) Nobody will disagree with this. But how do we define our zones? In PI you have to make masks which distinguish one zone from another. What if the mask is nearly right but not quite right?  Oh, such hassle! In Photoshop...

...you make two layers of the same thing. You carry out a global process on the bottom layer - be that sharpen or noise reduce or whatever - and using your eyes and your human faculties you go to the unmodified top layer and let through as much or as little of the modified layer as you want, where you want, using Ps selection tools and your eyes!  Yes, and when you have finished your project, what do you look at it with? Rest my case.
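
What a layer mask does can be written as one line of per-pixel arithmetic.  A toy sketch (NumPy, invented numbers - obviously not Photoshop itself):

```python
# Per-pixel view of a layer mask: the mask (0..1) decides how much of the
# globally processed layer shows through over the untouched original.
import numpy as np

original  = np.array([0.2, 0.2, 0.8, 0.8])   # toy 4-pixel "image"
sharpened = np.array([0.1, 0.1, 1.0, 1.0])   # globally processed copy
mask      = np.array([0.0, 0.0, 1.0, 0.5])   # painted: none, none, full, half

result = mask * sharpened + (1.0 - mask) * original
print(result)   # effect fully applied at pixel 3, half strength at pixel 4
```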

Well, this suits me. I've heard it argued that PI is more scientific but this is guff. A graphics programme (PI and Ps are just that) is not a scientific programme.

Another thought. We all do this for pleasure. For me most of the pleasure lies in post processing since the rest is pretty mechanical, which isn't to say that you can't make a hash of it. You can. I often do! But personally I like working in Ps and Layers. Others may like working in PI. Since we are doing this for pleasure we should do it in ways which we enjoy.

Olly


I have a different experience.  I almost never use STF for the actual stretch.  I find that the STF can be way too harsh, especially after DBE.  That may be because I have lesser-quality data or I am getting worse gradients requiring more 'aggressive' DBE.  Thanks for the reassurance about 16-bit to PS.

Everything is harsh after DBE. Removing the background gradient exposes the underlying noise, so images tend to look worse after DBE in terms of noisiness (mine certainly do as a darks dodger). I actually find that using the stats from STF for target background is a good marker for using MaskedStretch rather than applying the linked channel STF to images anyway.

I do find I spend a lot of my time in PI getting masks right, and this is key to getting good results in the later stages, when cosmetic concerns on certain areas of the image start to dominate over the wider and more generally applicable linear calibration routines. So I might actually revisit PS for some post processing, given its use by some experienced imagers here. Unlike Olly, though, I really like the PI approach :)

Whenever I have done the PI->PS move, I have found 16-bit TIFF is the only way to go, although it seems to come out lighter in PS; but this is easily managed.
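
For anyone wondering what the STF 'stats' are, both STF and HistogramTransformation are built on PixInsight's midtones transfer function.  A sketch (Python; the 0.25 target background in the comment is STF's usual default, quoted from memory, so treat it as an assumption):

```python
# PixInsight's midtones transfer function (MTF), the curve behind STF and
# HistogramTransformation.  It fixes 0 -> 0 and 1 -> 1 and maps the chosen
# midtones balance m to exactly 0.5; STF picks m so the background median
# lands near a target value (0.25 by default, quoted from memory).

def mtf(m, x):
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

print(mtf(0.25, 0.25))                  # -> 0.5: the balance point maps to mid-grey
print(mtf(0.25, 0.0), mtf(0.25, 1.0))   # endpoints stay fixed
```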


PixInsight and Photoshop can live happily side by side, PixInsight for the heavy lifting in my case - calibrate, align and combine images, then neutralise background, DBE and colour calibration before an initial stretch. Then it's save as a 16-bit TIFF and straight into Photoshop for the "fancy" stuff: curves adjustment, despeckle, selective colour adjustment, star reduction and sharpening.

All the things which are really noticeable are done in Photoshop, always with, as Olly puts it, the "zone" method in mind. PixInsight is just the tool that gets me to the point where Photoshop can take over.

PixInsight is great but not to everyone's liking. It is very easy to get carried away and end up way over-processing in PixInsight; for example, the HDR wavelets tool should come disabled, with a health warning attached, before it unlocks and allows you anywhere near using it on a real-life image! :shocked:


Everything is harsh after DBE. Removing the background gradient exposes the underlying noise, so images tend to look worse after DBE in terms of noisiness (mine certainly do as a darks dodger). I actually find that using the stats from STF for target background is a good marker for using MaskedStretch rather than applying the linked channel STF to images anyway.

I do find I spend a lot of my time in PI getting masks right, and this is key to getting good results in the later stages, when cosmetic concerns on certain areas of the image start to dominate over the wider and more generally applicable linear calibration routines. So I might actually revisit PS for some post processing, given its use by some experienced imagers here. Unlike Olly, though, I really like the PI approach :)

Whenever I have done the PI->PS move, I have found 16-bit TIFF is the only way to go, although it seems to come out lighter in PS; but this is easily managed.

Thanks for replying Matt.  What I meant was that I could not do an STF and then drag those STF settings on to the Histogram Transformation tool.  The result would be pretty awful in many cases.  You may have to talk me through the 'stats as good marker for Masked Stretch' thing.

Some PI masks are simplicity itself, but others are a bit of a chore, I find - getting a good star mask, for example, can require running a reasonably lengthy process quite a few times.  And whilst these masks can be tweaked in various ways (including Clone Stamp, which I use a fair bit), they do not have anything like the simplicity of Photoshop.  Furthermore, the use of layers in PS means that things can be unwound very easily, or different combinations of 'processes' on these Photoshop layers can be tried out in seconds.  I've been using PS for years, mind, so am quite comfortable with it.


I do DBE and SCNR (green) in PI and then head off into civilization (Photoshop.) :grin:  I heartily dislike the PI approach. Let's take this from the starting point. We all want to 'zone' our processing. We may want to noise reduce our faint stuff and sharpen our bright stuff (but not the stars.) Nobody will disagree with this. But how do we define our zones? In PI you have to make masks which distinguish one zone from another. What if the mask is nearly right but not quite right?  Oh, such hassle! In Photoshop...

...you make two layers of the same thing. You carry out a global process on the bottom layer - be that sharpen or noise reduce or whatever - and using your eyes and your human faculties you go to the unmodified top layer and let through as much or as little of the modified layer as you want, where you want, using Ps selection tools and your eyes!  Yes, and when you have finished your project, what do you look at it with? Rest my case.

Well, this suits me. I've heard it argued that PI is more scientific but this is guff. A graphics programme (PI and Ps are just that) is not a scientific programme.

Another thought. We all do this for pleasure. For me most of the pleasure lies in post processing since the rest is pretty mechanical, which isn't to say that you can't make a hash of it. You can. I often do! But personally I like working in Ps and Layers. Others may like working in PI. Since we are doing this for pleasure we should do it in ways which we enjoy.

Olly

I agree.  I don't think I use PS quite the same way as you do.  I put new layers on top, then mask them out, paint in what I want on the mask, change opacity, and so forth.  In my last image I simply created a 'Merged Visible' layer over the top of the existing layers (hold the Alt key while pulling up the Layers menu), did some sharpening on this layer, masked it out completely with the paint bucket and then painted in the areas where I wanted to apply the sharpening.  Finally, a reduction in opacity to dial back the sharpening to taste.  It was very quick - I see no way of doing something similar (with that degree of fine control) in PixInsight.


Everything is harsh after DBE. Removing the background gradient exposes the underlying noise, so images tend to look worse after DBE in terms of noisiness (mine certainly do as a darks dodger).

I have only found this to be the case when correcting images with really bad gradients. Could you post an example?



No need.  I accept that the images that I was referring to had significant gradients.  Before DBE, STF produces a fine looking image.   



Maybe I need to clarify: to me, things look noisier after a DBE process. Noise is not increased or decreased by DBE, which is just a background subtraction/division, but reapplication of STF can make the image look noisier because of the different stretch. I can knock up some examples later, although you have me doubting my own statement now :)



Oops - didn't mean to I promise     :argue:


Here is what I meant. This is an L image of M13; I applied the STF and then materialized that via the histogram (copied the STF to the histogram and then applied it to the image). The first image is pre-DBE. The image has a mild gradient which is handled easily in DBE. The subtraction of the background makes the image darker, and a reapply of the STF gives the second image. This looks much noisier, although there is no new noise added; it just looks worse.

[attached image: the M13 luminance frame, STF applied, before DBE]

[attached image: the same frame after DBE with the STF reapplied - visibly noisier]
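
The effect can be reproduced with toy numbers: the subtraction leaves the noise sigma untouched, but the darker result needs far more gain when the screen stretch is reapplied, so the same noise fills more of the display range.  An illustrative sketch (Python; the stretch is crudely modelled as a median-to-target gain, which is an assumption for illustration, not PI's actual transfer function):

```python
# Toy demonstration: DBE-style background subtraction leaves the noise
# sigma unchanged, but the darker image needs far more gain when the
# screen stretch is reapplied, so the noise *looks* much worse.
import numpy as np

rng = np.random.default_rng(0)
noise  = rng.normal(0.0, 0.01, 10_000)
before = 0.30 + noise          # sky pedestal/gradient still present
after  = 0.05 + noise          # background model subtracted: darker image

def gain_to_target(img, target=0.25):
    # crude stand-in for a reapplied screen stretch
    return target / np.median(img)

print(gain_to_target(before))  # modest gain
print(gain_to_target(after))   # roughly six times more gain for the same noise
```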


You may have to talk me through the 'stats as good marker for Masked Stretch' thing.

I would only tell the story worse than Harry in his marvellous PI videos. Check out the one called MaskedStretch at the bottom of the intermediate section:

http://www.harrysastroshed.com/pixinsight/pixinsight%20video%20html/pixinsighthomeinter.html


Ah, OK, so it is just the way that DBE affects the calculation of how far STF stretches the image, not actually changing the image itself? STF is just to give you a rough idea of the data contained within an image, so that is pretty arbitrary really, as I certainly wouldn't trust a piece of software to stretch my images for me - the eye is far better than the equation!


2 weeks later...

I'm intrigued by the fact that lots of people use the STF stretch. This is the same as the 'moving grey point' stretch in Ps Levels. That is, it's a pure log stretch. Am I right? Now this is a stretch I would normally only use for an RGB layer. In NB I would always use a radically more aggressive custom curve stretch rising steeply and flattening to a straight line at the top.

A lum layer usually gets a slightly less aggressive version of the same.

Subsequent iterations get softer, more rounded curves.

And finally very bright bits get their own gentler stretch to be layered in later.
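
To make the difference in shape concrete, here is an illustrative comparison of a grey-point (midtones) stretch with a curve that rises steeply from black and flattens towards white (Python; asinh is used purely as a stand-in for that shape, not as a claim about the exact custom curve described):

```python
# Two stretch shapes side by side: a grey-point (midtones) stretch versus
# an aggressive curve that rises steeply from black and flattens at the
# top.  asinh is only a stand-in for the latter shape, not the exact curve.
import numpy as np

def midtones(x, m=0.25):
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def aggressive(x, k=50.0):
    return np.arcsinh(k * x) / np.arcsinh(k)

x = np.array([0.01, 0.1, 0.5, 1.0])
print(midtones(x))     # gentler lift of the faint end
print(aggressive(x))   # faint values pushed much harder, top end compressed
```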

I think Tom and I started following this method after watching some Adam Block demos years ago and I've always done it that way.

Olly


I always assumed that everyone uses STF to see what is there and then did custom stretches; I'm not sure how many materialise the STF into an actual stretch (other than doing it to generate quick lum masks, for instance).

Another thing that has only just become clear to me from this thread and others is that the larger the image, the more need there is for a custom approach to the various parts of it. I have a small-chipped CCD, so all my processing tends to be easily done in PI; I couldn't see the need for PS at all and wondered what the fuss was about. The realisation that most of the PS users are working with much bigger images than mine, with areas that need different treatment which is difficult for PI to manage easily, only came to me when I tried to process a mosaic in PI and the different areas of the image just couldn't easily be done the PI way. It means getting good with PS, though, and some more learning to do :)


