
My images don't have that "POP" to them



Roughly, here's what I did:

1) Stacked and calibrated using MaximDL

2) DBE and MLT in the linear stage

3) Star alignment

4) PixelMath for channel combination

5) Colour Calibration and Background Neutralisation

6) Deconvolution

7) Colour mask script

8) Used Curves to play around with the background
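For the deconvolution step, PI's tool is based on iterative algorithms of the Richardson-Lucy family. Purely to illustrate the idea (this is not PI's implementation, and the function names are mine), a minimal Richardson-Lucy loop in Python might look like:

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(image, psf, iters=30):
    """Minimal Richardson-Lucy deconvolution (illustrative sketch only).

    Repeatedly re-blurs the current estimate with the PSF, compares it to
    the observed image, and corrects the estimate by the back-projected
    ratio. With noiseless data this converges toward the unblurred scene.
    """
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(image.shape, 0.5)  # flat starting estimate
    for _ in range(iters):
        reblurred = convolve2d(estimate, psf, mode="same")
        ratio = image / np.maximum(reblurred, 1e-12)
        estimate *= convolve2d(ratio, psf_mirror, mode="same")
    return estimate
```

This is also why deconvolution sits early in the workflow: it assumes linear data, so it must come before any stretch.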

 


I think your best chance of getting down to the nitty-gritty of what you're trying to achieve is to start a thread on one specific target & data set. Share the data, along with your best attempt, then wait and see what others can do with the same data. When you see one that ticks all the boxes for you, then just kindly ask for the workflow and settings (P.I has many!) that were used. Armed with this info, it won't take long for you to figure out what it is specifically that you feel is holding you back.

I think @Rodd did this with quite a few data sets. They made for great threads, were very popular, and no doubt many people here learned an awful lot from them. There are a lot of highly-skilled P.I guys out there that could no doubt show you just how good your data is. @wimvb is one who certainly springs to mind.


On 28/09/2018 at 13:11, souls33k3r said:

Roughly, here's what I did:

1) Stacked and calibrated using MaximDL 

 

Use PI for calibrating and stacking; the noise assessment and control in PI's stacking algorithms is much more advanced than MaxIm's, so less noise will end up in your stacked output files. I use MaxIm for capture because I use ACP for observatory control, and that won't work with anything other than MaxIm, but I would never go back to MaxIm for any part of post-processing after using PI's routines.

Using your low-quality .jpg forum image, attached below is a comparison of the original and further processing in PI, using a little ACDNR to reduce noise in the faint parts and a little ArcsinhStretch to reveal more of the faint nebulosity (original above, modified below).

[Attached image: before/after comparison with ACDNR + ArcsinhStretch]
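For anyone curious what ArcsinhStretch does conceptually: it applies an inverse-hyperbolic-sine curve that lifts faint signal strongly while compressing the highlights gently. A rough sketch of the core transfer function (not PI's actual implementation, which also protects star colour) might be:

```python
import numpy as np

def arcsinh_stretch(x, stretch=100.0):
    """Illustrative arcsinh transfer curve for data normalised to [0, 1].

    Faint values are lifted strongly, highlights are compressed gently;
    dividing by arcsinh(stretch) normalises so that x = 1 still maps to 1.
    """
    return np.arcsinh(stretch * np.asarray(x)) / np.arcsinh(stretch)
```

With `stretch=100`, a faint pixel at 0.01 is lifted to roughly 0.17, while a pixel at 1.0 stays at 1.0.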

 


56 minutes ago, Xiga said:

I think your best chance of getting down to the nitty-gritty of what you're trying to achieve, is to start a thread on one specific target & data set. Share the data, along with your best attempt, then wait and see what others can do with the same data. When you see one that ticks all the boxes for you, then just kindly ask for the workflow and settings (P.I has many!) that were used. Armed with this info, it won't take long for you to figure out what it is specifically that you feel is holding you back.

I think @Rodd did this with quite a few data sets. They made for great threads, were very popular, and no doubt many people here learned an awful lot from them. There are a lot of highly-skilled P.I guys out there that could no doubt show you just how good your data is. @wimvb is one who certainly springs to mind.

Cheers mate, that sounds like a solid plan. I'll go through Rodd's activity in a few minutes and try to learn a thing or two.

26 minutes ago, Oddsocks said:

Use PI for calibrating and stacking; the noise assessment and control in PI's stacking algorithms is much more advanced than MaxIm's, so less noise will end up in your stacked output files. I use MaxIm for capture because I use ACP for observatory control, and that won't work with anything other than MaxIm, but I would never go back to MaxIm for any part of post-processing after using PI's routines.

Using your low-quality .jpg forum image, attached below is a comparison of the original and further processing in PI, using a little ACDNR to reduce noise in the faint parts and a little ArcsinhStretch to reveal more of the faint nebulosity (original above, modified below).

[Attached image: before/after comparison with ACDNR + ArcsinhStretch]

 

That's a very nice rendition, mate. You've given me more food for thought :) I tried PI for calibration and stacking but found some weird artifacts showing up. Maybe it's something I'm doing wrong. The end result had a grid-like pattern.


23 minutes ago, souls33k3r said:

Cheers mate, that sounds like a solid plan. I'll go through Rodd's activity in a few minutes and try to learn a thing or two.

That's a very nice rendition, mate. You've given me more food for thought :) I tried PI for calibration and stacking but found some weird artifacts showing up. Maybe it's something I'm doing wrong. The end result had a grid-like pattern.

If you use the batch preprocessing script you have to be careful with the optimization (scaling) of darks, as well as the use of bias masters. With the STT-8300 both work well and I always use them. However, with the ASI 1600 (CMOS), neither works well, and if I use them I end up with terribly calibrated subs.
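For reference, the "optimization" mentioned above scales the bias-subtracted dark before subtracting it from the light. A simplified sketch of what calibration with an optional dark-scaling factor does (the function and parameter names here are mine, not PI's):

```python
import numpy as np

def calibrate(light, master_dark, master_bias, master_flat, dark_scale=1.0):
    """Simplified frame calibration with optional dark scaling.

    dark_scale != 1.0 mimics 'dark optimization'. With many CMOS cameras
    (e.g. the ASI 1600) it is often safer to match the dark exposure to
    the lights and skip both scaling and the separate bias master.
    """
    dark_current = master_dark - master_bias        # thermal signal only
    corrected = light - master_bias - dark_scale * dark_current
    flat_norm = master_flat / master_flat.mean()    # unity-mean flat
    return corrected / np.maximum(flat_norm, 1e-6)
```

If the scaling factor or the bias master doesn't match how the sensor actually behaves (CMOS amp glow is a common culprit), the subtraction over- or under-corrects, which is one way grid-like or patterned artifacts can survive calibration.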

Rodd


I don't do imaging, but I read the image specs when I look at astrophotos. Maybe it's subjective, but over the years I seem to notice Lumenera cameras produce snappier deep-sky photos and sharper solar-system photos. I like how the clusters show a whole range of distinct colours, with vibrant yellow, orange and blue stars.

Also, when I assisted my friends in making astrophotos (their mounts and my optical tubes; my Celestron 5 at f/6.3 was neat), I saw that boosting the ISO number was not as effective as the numbers suggest. A case I remember well is NGC 40, the red shell nebula. We did a few tries between ISO 100 and 800; the 800 and 400 shots were a bit faster in picking up the fainter parts, but the image was a little washed out and grainy.

There's no way acquisition was four or eight times faster; we used 5-minute exposures to keep the comparison consistent, and the nebula looked essentially the same. However, the snap of the ISO 100 picture was gone, and noise became bothersome. Some days ago I stumbled upon a camera efficiency chart; sorry, I don't recall where, probably Cloudy Nights, but I'm not sure because I don't practice imaging.

The ISO 100 specs were excellent but they fell off very quickly as the ISO was increased; even the jump to 200 cut a lot of performance. The boost is theoretical/artificial: if a pixel needs that much photonic energy to react, nothing short of using another sensor will make it react to less.

If I start deep-sky imaging on my own I'll probably stick to the base sensitivity without artificial boost; exposures shouldn't be that much longer if they are cleaner and the specs other than ISO stay at the max.
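That squares with the standard model of ISO as a post-capture gain: it multiplies the photon signal and its shot noise alike, so it only helps relative to fixed downstream electronics noise, and never beats the photon limit. A toy SNR calculation (all numbers purely illustrative, not from any real camera) shows the diminishing returns:

```python
import math

def snr(photons, gain, sensor_read_noise=3.0, adc_noise=10.0):
    """Toy SNR model for raising ISO (an analogue gain after capture).

    Gain multiplies the signal, the shot noise and the sensor's read
    noise equally; only the fixed downstream (ADC) noise is left behind.
    So higher ISO helps faint, read-noise-limited exposures a little, but
    SNR can never exceed the photon-limited ceiling
    N / sqrt(N + read_noise^2).
    """
    signal = photons * gain
    shot = math.sqrt(photons) * gain
    read = sensor_read_noise * gain
    noise = math.sqrt(shot**2 + read**2 + adc_noise**2)
    return signal / noise
```

Doubling the gain does not double the usable depth: each step buys less, which matches the observation above that ISO 400/800 shots were only "a bit faster" while looking washed out and grainier.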

 


9 hours ago, souls33k3r said:

No I haven't; I'd like to stick with PixInsight. I don't want to confuse myself with different tools.

If you don't want to confuse yourself, why are you using PixInsight??? :D

More seriously, the reason I use Photoshop is that I can see what I'm doing as I do it.

8 hours ago, souls33k3r said:

Oh, I like that. I don't mind bringing PS into the mix, but I know some will say use High Pass or Smart Sharpen, and tbh that destroys my stars. (I don't know how to make masks in PS.) I suspect you played with levels and curves on this one?

 

You can use masks in Ps but most of the time I don't; I use layers and the eraser. Take Smart Sharpen, which I use less than Unsharp Mask but in the same way. You start with the unsharpened image. Make a copy layer of it, make the top layer invisible and the bottom layer active. Select the stars using Noel's Actions for Ps or, longhand, using Martin's excellent routine described in the forum's processing section. Select inverse to exclude the stars and globally sharpen the bottom layer. Make the top layer active and visible and blink it on and off to see where you prefer the sharpened bottom layer and where you don't. Erase the top layer where you do prefer the bottom layer. As a method this strikes me as knocking the snake oil of mathematical mask making into a cocked hat, but I'm a simple soul not given to agonizing over the square root of minus one. I just like to see what I'm doing as I do it...
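The layer-and-eraser routine above is effectively a hand-painted star mask. For the mathematically inclined, the same "sharpen everything except the stars" idea can be sketched in a few lines (a toy sketch, not what Photoshop does internally):

```python
import numpy as np
from scipy import ndimage

def sharpen_outside_stars(img, star_mask, amount=1.5, radius=2.0):
    """Unsharp-mask the image, then keep the original pixels wherever
    star_mask is True -- the code equivalent of erasing the unsharpened
    top layer everywhere except over the stars."""
    blurred = ndimage.gaussian_filter(img, sigma=radius)
    sharpened = img + amount * (img - blurred)       # classic unsharp mask
    out = np.where(star_mask, img, sharpened)        # protect the stars
    return np.clip(out, 0.0, 1.0)
```

Whether the mask comes from an action set, a script, or an eraser tool, the end result is the same blend of sharpened and untouched pixels.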

Olly


9 hours ago, souls33k3r said:

Yeah, I tend not to cross that line. I go back and forth many a time and work on multiple copies (I process on my TV, so how it shows up on PC monitors and phones and whatnot is totally different).

I used to process on my TV... Big mistake. I bought a monitor for my PC and wow, the difference is massive.

 


So many dittos, Olly.

Quote

 the reason I use Photoshop is that I can see what I'm doing as I do it.

ditto

Quote

You can use masks in Ps but most of the time I don't, I use Layers and the eraser. 

ditto

Quote

Erase the top layer where you do prefer the bottom layer

ditto

Quote

I'm a simple soul not given to agonizing over the square root of minus one. I just like to see what I'm doing as I do it...

Carole 


I copied the image from your first post and had a go with Astra Image and my ancient copy of Corel PhotoPaint.

The Astra version was deconvolved and contrast-enhanced before being mixed back in at about 98% with the original. Then curves, selective colour and selective saturation, then a tiny hint of noise reduction.

This is probably a bit too pop for most people...

[Attached image: reprocessed version]


9 minutes ago, Stub Mandrel said:

I couldn't resist getting some colour into the stars... I just wish I could get data like yours!

[Attached image: version with star colour]

Appreciate your kind words, Neil :) From where I was a year ago to now, I do realise I have improved in some areas, but for me learning never stops and I need to move up the chain :)


Beating yourself up again, Ahmed? There's only so much we can do under our light-polluted skies; even with top-of-the-range NB filters there's no substitute for a dark sky. It's quite depressing / annoying seeing superb images taken with a DSLR from a really dark site that I can't match from my back garden using 3nm Astrodons.

You could come to one of our dark-site weekends; Carole and a couple of others go, and you can compare data. I took some subs of M33 last time, and when I compared them to the ones from home I scrapped the home-taken ones.

What is it they say about pigs' ears and silk purses :grin:

ATB

Dave


1 hour ago, Davey-T said:

Beating yourself up again, Ahmed? There's only so much we can do under our light-polluted skies; even with top-of-the-range NB filters there's no substitute for a dark sky. It's quite depressing / annoying seeing superb images taken with a DSLR from a really dark site that I can't match from my back garden using 3nm Astrodons.

You could come to one of our dark-site weekends; Carole and a couple of others go, and you can compare data. I took some subs of M33 last time, and when I compared them to the ones from home I scrapped the home-taken ones.

What is it they say about pigs' ears and silk purses :grin:

ATB

Dave

If I don't beat myself up, Dave, I won't grow, and I, sir, want to grow in this hobby. I agree there's no substitute for dark-sky data, but we make do with what we've got.

All I was trying to understand is how I can make my images better with the sky conditions we have, be it pre- or post-processing.

I would love to join you guys some day. I've already met Carole, but I'm looking forward to meeting you some day.


On 28/09/2018 at 13:26, Xiga said:

I think your best chance of getting down to the nitty-gritty of what you're trying to achieve, is to start a thread on one specific target & data set. Share the data, along with your best attempt, then wait and see what others can do with the same data. When you see one that ticks all the boxes for you, then just kindly ask for the workflow and settings (P.I has many!) that were used. Armed with this info, it won't take long for you to figure out what it is specifically that you feel is holding you back.

I think @Rodd did this with quite a few data sets. They made for great threads, were very popular, and no doubt many people here learned an awful lot from them. There are a lot of highly-skilled P.I guys out there that could no doubt show you just how good your data is. @wimvb is one who certainly springs to mind.

 

This is a good idea.

I'd love to have a play with the data as well.

I'm currently imaging IC63, but at 385mm. I don't use PixInsight, so I would be fascinated to compare results.

It might be worth getting 20-30 minutes through a blue filter.

 

 


On 29/09/2018 at 11:17, Davey-T said:

Beating yourself up again, Ahmed? There's only so much we can do under our light-polluted skies; even with top-of-the-range NB filters there's no substitute for a dark sky. It's quite depressing / annoying seeing superb images taken with a DSLR from a really dark site that I can't match from my back garden using 3nm Astrodons.

You could come to one of our dark-site weekends; Carole and a couple of others go, and you can compare data. I took some subs of M33 last time, and when I compared them to the ones from home I scrapped the home-taken ones.

What is it they say about pigs' ears and silk purses :grin:

ATB

Dave

I have to agree about the quality of the data. I regard my processing skills as minimal, but I downloaded some free trial data from Deep Sky West and in 20 minutes produced an image with, IMHO, a bit of a wow factor. That was all down to the data.


