Image processing times?


Davey-T

Recommended Posts

Just wondering how long people spend processing their images.

I usually only spend about an hour, after calibration and stacking, either because my images are [removed word] to start with, or maybe because whatever I do, after a bit of levels and curves etc., only seems to make them worse :)

I only ask because, just browsing the August AN, it has an image of the Veil Nebula with 32 hours of processing by Noel Carboni; I can't imagine taking that long on one image, even if it is a mosaic.

Dave

Depends on the data really. If I have an Ha image and it's staying monochrome, I generally process that in a couple of hours max. An RGB or narrowband image can take me many hours, perhaps 10 hours or so. A coloured mosaic will take even longer.

Never less than 4 hours for me. A more difficult image (lots of faint stuff, Ha and OIII to combine with the colours, lots of zoning requiring different processing) might take 20 hours, and I dare say I've matched Noel's marathon on occasion.

I get quite close to the end in about a third of the time but the final agonizing takes a lot longer! Stars and background sky generally take more effort than the object itself.

Sometimes I get near the end and discover that a different starting point would have been a good idea. That can be a bit of a trauma!

Olly

But then you've got lots more and better data than us poor slobs in the UK :)

Dave

Heh heh, true, probably. Ah, but on the other hand I don't have any excuses! I'm still working on my own Veil, in fact. The specific problems with this target are:

- the vast number of stars. Unless they are kept down in size they'll look out of scale in the image, but...

- reducing stars embedded in nebulosity is difficult.

- There is a need to get the faint NB data to show, because it is there to be shown. It's easy to get it to show in a narrowband-only image, but it gets lost in a natural colour image unless...

- you use the NB data quite heavily in Luminance, which I hate doing because it...

- messes up the colour, which needs to be dragged back and made to behave!

- Also, there is literally no normal background sky in the region. The sky of the inner section has been swept clean by the shock front and is ultra dark. The front has, however, swept up interstellar atoms and molecules and driven them ahead of the main arcs, so outside the arcs really is not dark. The Ha layer shows this, and the information needs to be preserved in the final sky.

Thinking about it, Noel must have raced through this lot!! :grin:

Olly

I spend a lot of time processing my data because I am not that good at remembering all the techniques and have to constantly refer to tutorials and posts, but I really enjoy this side of it as well. As Tracey, my other half, says, I lose all sense of time when I go on the computer.

Only 32 hours? Must try harder! I have been working on my two panel DSLR mosaic of Markarian's Chain since May, and I have put in (conservatively) 50 hours on and off as time allows. It isn't finished yet and probably won't be for a while. Most of my previous attempts have taken eight hours at most, so why so much time?

- Firstly, I have got a fair bit of data for once, as the weather was cooperative in April and early May: pretty much four nights' worth over just two panels. I wanted to do it justice, whereas I would perhaps have spent less time if I only had four or five hours of subs.

- Secondly, I am determined to do the best I can to nail the background noise. The 500D seems particularly bad at turning the background into a blotchy mess of coloured blobs in the final result. I have struggled to suppress it in previous images and never been entirely happy with the final result; I have worked through pretty much every example/tutorial on this I can find, with limited success.

- So this time I first spent a lot of time selecting and stacking the subs. I tested various workflows to death in PI, stacking and re-stacking to get the best initial SNR stats before ploughing on. Hopefully that isn't effort I will have to repeat on future images, since I have now satisfied myself as to the best method for my kit and data. Bear in mind I had 2 x 50 or so large DSLR subs, plus several hundred bias frames, plus multiple sets of 30-50 darks, so stacking and re-stacking that lot involves a lot of time to set up, process and analyse.

- The next major hurdle was making the mosaic. The basic technique I picked up quickly enough but getting a seamless transition was quite tricky using PI's tools (no masked/layered blending for example). The brightness levels were not too hard to match. Colour balancing the pair was a bit more tricky, but the hard one was matching the background noise between the two. Some of the flats taken for the earlier subs were pretty defective in the red channel so there was a noticeable seam in the background noise which took a bit of time to deal with. (Too late to fix the defective data and can't get any more until the spring!)

- Again I worked through pretty much every noise reduction routine in PI in detail, going back and forth trying different techniques and following endless examples and tutorials. Right in the middle of that the new TGVDenoise tool was introduced, and after a fair bit of experimentation (with no tutorials to follow) it has turned out to be a real killer of DSLR background noise, so I am a lot happier with the background in this image than any other.

- Then I (foolishly) thought I'd like to try a bit of deconvolution, so I backed up a couple of steps and added that into the workflow. This is where I am currently labouring away. First off, I have got a fair bit of coma in the corners of each frame (probably need to look at the reducer spacing again, but I suspect it is an inherent issue with using the ED80, 0.85x reducer and a large DSLR sensor - it wouldn't be an issue on a CCD chip of about half the size). Thus it would be no use creating a PSF from the whole frame, so I devised a process to create different PSFs for nine segments of the image, and learned how to use PixelMath to create a mask for each of those segments to use with the deconvolution process.

- Finding workable deconvolution parameters is a real time sink. It is a slow process of trial and error, and each iteration takes a long time to process. I have got some really satisfactory results on the galaxies, with a noticeable improvement in detail even before any other processing to bring it out. What isn't working is the stars. It is pretty clear that they are most bloated in the red channel and least bloated in the blue channel (as one might expect). It has proved pretty much impossible to shrink all three channels equally, and after re-applying the noise reduction processes I am getting a fairly hideous red halo/bleed around the large and mid-sized stars.

- I am thinking of not bothering to deconvolve the stars and just doing the galaxies, but for that I will have to create new masks, which will take a few hours, plus a few hours more to re-run all the subsequent processes.
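
(Editor's aside: the nine-segment masking idea above can be sketched in a few lines. This is my own illustrative numpy version, not the poster's actual PixelMath; it just builds nine binary masks tiling a 3x3 grid, the sort of masks you would then hand to a deconvolution run per segment.)

```python
import numpy as np

def grid_masks(shape, rows=3, cols=3):
    """Return binary masks covering an image in a rows x cols grid."""
    h, w = shape
    row_edges = np.linspace(0, h, rows + 1, dtype=int)
    col_edges = np.linspace(0, w, cols + 1, dtype=int)
    masks = []
    for r in range(rows):
        for c in range(cols):
            m = np.zeros(shape, dtype=np.float32)
            m[row_edges[r]:row_edges[r + 1], col_edges[c]:col_edges[c + 1]] = 1.0
            masks.append(m)
    return masks

masks = grid_masks((300, 450))
# The nine masks tile the frame exactly once: their sum is 1 everywhere.
assert np.all(sum(masks) == 1.0)
```

In practice you would want to feather the mask edges so the per-segment deconvolutions don't leave visible seams.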

This is a bit of an exceptional example, as I am trying a lot of new things, which involves both learning and experimentation, but the one thing I have found from the many examples I have read is that there is always some tweak you can apply to improve your image. Depending on the weather this winter, I suspect I may be going back and re-processing some of my previous images based on what I have learned with this one.

On the other hand, if I ever get the funds together for a CCD, I am hopeful my processing on some of the basics will take less time than with the DSLR, as I won't have to spend as long battling the limitations and deficiencies of the kit (i.e. hideous noise!).

Only 32 hours? Must try harder! I have been working on my two panel DSLR mosaic of Markarian's Chain since May, and I have put in (conservatively) 50 hours on and off as time allows. It isn't finished yet and probably won't be for a while. Most of my previous attempts have taken eight hours at most, so why so much time?

Don't think getting a CCD will help; I've been working on a six-panel mosaic of NA and Pelican for three years :)

Dave

I've only rarely done any serious deep-sky editing, and my most recent attempt (M27) took about 1-2 hours of editing (which seems like overkill for 8 minutes of data). When I do lunar/planetary, a single shot may take me 30 minutes, and a mosaic much longer.

David

I spend at least as long processing the image as collecting the data, normally much longer. Generally, most of a day, then I come back to it a few days later and spend another few hours tweaking.

Only 32 hours? Must try harder! I have been working on my two panel DSLR mosaic of Markarian's Chain since May, and I have put in (conservatively) 50 hours on and off as time allows. It isn't finished yet and probably won't be for a while. Most of my previous attempts have taken eight hours at most, so why so much time?

OK, I'm going to say it. This is a classic PI problem. You have to spend hours faffing with parameters to get the exclusion/inclusion zones you want for differential processing. In Photoshop you just create two layers, process one of them according to the effect you want to achieve, and then erase the bits you don't want using the Mk1 eyeball, evolved over millions of years. You don't have to work out how to ask the graphics programme to identify just what you want; you choose it yourself and select it. Bang, wallop.
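
(Editor's aside: the layers-and-eraser workflow described here boils down to a per-pixel weighted mix of two differently processed versions of the same image. A minimal numpy sketch of the idea, purely illustrative, not anything from Ps or PI:)

```python
import numpy as np

def blend(base, alt, mask):
    """Mix two versions of one image.

    mask is 0..1: 1 keeps the 'alt' layer (the bits you didn't erase),
    0 shows the 'base' layer underneath, values between feather the join.
    """
    return mask * alt + (1.0 - mask) * base

base = np.full((4, 4), 0.2)   # e.g. a gently stretched version
alt = np.full((4, 4), 0.8)    # e.g. a hard galaxy stretch
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0          # the 'unerased' region
out = blend(base, alt, mask)  # alt shows through only where mask is 1
```

The eraser is just a hand-painted version of that mask.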

Markarian's is an incredibly easy target in my book. Well, no, it is a relatively easy target, since they are all cussed devils! :BangHead: But if Markarian's is eating up your hours, you are going to have a rare old time with the Praying Ghost, the Veil, the Flaming Star, Thor's Helmet... These are really difficult.

Dealing with stars and background sky is quite easy in Ps. Just don't stretch them very much: don't use a log stretch, use a Curve which flattens early. For the galaxies this won't do, but do a galaxy stretch as a different layer and again blend them using your eyes and an eraser.
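
(Editor's aside: to see the difference numerically, here is a rough numpy comparison. The curve shapes are my own illustrative choices; a log-style stretch keeps lifting the bright end, while a curve that "flattens early" gives a strong shadow lift but a shallow slope near white, which is what keeps stars tight.)

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)  # normalised pixel values

# Log-style stretch: keeps adding brightness near the top,
# dragging stars up along with the faint stuff.
log_curve = np.log1p(9.0 * x) / np.log(10.0)

# A curve that flattens early: strong lift in the shadows,
# but very little slope left near white.
k = 0.1
early_flat = (1.0 + k) * x / (x + k)

# Near white, the early-flattening curve is much less steep
# than the log curve - the point being made above.
slope_log = log_curve[-1] - log_curve[-2]
slope_flat = early_flat[-1] - early_flat[-2]
assert slope_flat < slope_log
```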

For this kind of work, PI is like trying to sign your autograph with someone nudging your elbow. (A phrase borrowed from Bruce McLaren regarding his 4WD F1 car in the sixties...) PI is not more 'scientific' than Ps; it just uses a 'numbers only' interface with which it is hard to communicate - just as you say. It has some great global tools without which I'd be lost, but layers and the eraser put the human being in charge.

Olly

I've spent well over a week processing the image data for a single star trail photograph. I was processing over 2000 raw files individually and then blending them together for the final result. It would have been much easier if all the aircraft had been grounded!
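
(Editor's aside: the post doesn't say how the frames were blended, but the usual star-trail technique is a lighten blend, a per-pixel maximum across frames, which is exactly why aircraft are such a pain: any bright streak in any single frame survives into the result. A numpy sketch, assuming that approach:)

```python
import numpy as np

def lighten_stack(frames):
    """Combine frames with a per-pixel maximum ('lighten') blend.

    Star trails accumulate because each star's brightest position in
    every frame is kept; sadly the same goes for aircraft lights.
    """
    return np.maximum.reduce(frames)

rng = np.random.default_rng(0)
frames = [rng.random((10, 10)).astype(np.float32) for _ in range(5)]
trail = lighten_stack(frames)
# The result is at least as bright as any individual frame everywhere.
assert np.all(trail >= frames[0])
```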

Sent from my iPhone using Tapatalk. Blame Apple for the typos and me for the content

Well you can do most of that in Pixinsight :)

Harry

You can? Didn't know that. When you say 'most', what is it you can't do?

I admit I do use Photoshop as well - to make a frame around my image with my name, object name, data used and date of capture! :)

Heh heh, and you use a Takahashi FSQ106 as a guidescope!! :grin: (Actually I had a guest who did that. I've never forgiven him!!!)

Olly
