
Beginner questions :)



1 minute ago, Starwiz said:

You could always try some daytime experiments to get an idea where the focus is.

Point your scope at something on the horizon, then try focusing using live view.  With all the extra light, you shouldn't have a problem seeing something with live view.

Once you've confirmed it can actually achieve focus, then try again on a clear night.

John

I actually did exactly that this morning. I managed to get focus quite easily with an approach I hadn't thought of before. I just hope it will be able to focus the same way on the stars :)


3 hours ago, msacco said:

I actually did exactly that this morning. I managed to get focus quite easily with an approach I hadn't thought of before. I just hope it will be able to focus the same way on the stars :)

There shouldn't be any difference. If you focus on something on the horizon, it's effectively at infinity as far as the camera is concerned. So it will just be a matter of getting the exposure right and tweaking the focus.

John


1 hour ago, Starwiz said:

There shouldn't be any difference. If you focus on something on the horizon, it's effectively at infinity as far as the camera is concerned. So it will just be a matter of getting the exposure right and tweaking the focus.

John

Yep, that seems to work well! :)

So I managed to get some photos of the Lagoon Nebula. I didn't take many photos of it, as the main purpose was testing, and I tried processing the image but it was really hard. I live in a fairly dark town with a relatively low amount of light pollution, yet the images still look like they really suffer from it.

I shot 20-second subs at ISO 800. I took around 40 frames, and of these only 29 were good enough.

I know 29 frames is hardly anything to get good images from. I also didn't take bias, darks or flats, simply because I first wanted to see whether I could capture anything at all.

I tried processing the image but it didn't work that well. I'm aware that I probably won't get much from only 29 light frames, but I wonder whether some of the more experienced guys here would still be able to get some nice results out of it.
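As a rough back-of-the-envelope check of what 29 subs buys (a minimal Python sketch, assuming noise averages down roughly with the square root of the number of stacked frames):

import math

sub_length_s = 20                  # seconds per sub
frames_kept = 29                   # subs that survived grading

total_integration_min = frames_kept * sub_length_s / 60
snr_gain = math.sqrt(frames_kept)  # rough SNR improvement over a single sub

print(f"Total integration: {total_integration_min:.1f} min")  # ~9.7 minutes
print(f"SNR gain vs one sub: ~{snr_gain:.1f}x")               # ~5.4x

So that's under ten minutes of total integration, which goes a long way towards explaining why the faint parts stay noisy.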

After stacking in DeepSkyStacker, the image was almost completely white. I tried playing with the settings but that didn't change much, so I'm wondering if I'm doing something wrong there.

If anyone feels like it, here are the 29 RAW photos:

https://drive.google.com/open?id=1aequL0cJQLllTAeQhJkt3QFUKXiWbW-F

Maybe someone can get something decent out of it? (By my level of 'decent', I mean a bad result for others, but something that's still cool to see.)

Thanks for all the help! :)


30 minutes ago, msacco said:

Maybe someone can get something decent out of it? (By my level of 'decent', I mean a bad result for others, but something that's still cool to see.)

I don't like working with uncalibrated data, but here is a quick attempt:

[attached image: quick processing attempt in StarTools]

The lack of flats really shows - there is enormous vignetting that is hard to remove. This is a quick process in StarTools after stacking in DSS - I tried to wipe the background and remove the vignetting but, as you can see, it did not work very well. Still, I kind of like the effect.

There is some nebulosity showing - there is something to look at :D

 


4 minutes ago, vlaiv said:

I don't like working with uncalibrated data, but here is a quick attempt:

[attached image: quick processing attempt in StarTools]

The lack of flats really shows - there is enormous vignetting that is hard to remove. This is a quick process in StarTools after stacking in DSS - I tried to wipe the background and remove the vignetting but, as you can see, it did not work very well. Still, I kind of like the effect.

There is some nebulosity showing - there is something to look at :D

 

Wow! This is so much better than what I managed to get!

What did the image look like for you after stacking? For me it was almost plain white.

Can you briefly state what settings you used, so I can try to get the same result? :x


It did look rather whitish :D

[attached screenshot: the stacked image straight out of DSS]

I used super pixel mode as the debayering method:

[attached screenshot: DSS RAW/FITS settings - Super Pixel debayering]

And these were the stacking parameters:

[attached screenshot: DSS stacking parameters]

After that I loaded the image into StarTools (the trial - I don't use it to process images otherwise, but it can attempt to remove vignetting, which is why I used it), did a basic develop, removed the vignetting and color cast, and did a color balance.

Took a screenshot of the result at 50% zoom (the trial version won't let you save the image for further processing).
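In case it helps to picture what super pixel mode does: roughly, instead of interpolating the missing colours, it collapses each 2x2 Bayer cell into one RGB pixel, so the result comes out at half the sensor resolution in each direction. A minimal numpy sketch of the idea (assuming an RGGB pattern - the actual layout depends on the camera):

import numpy as np

def super_pixel_debayer(raw):
    # Collapse each 2x2 Bayer cell into one RGB pixel (RGGB assumed).
    # raw: 2-D array of sensor values; output shape is (H/2, W/2, 3).
    r  = raw[0::2, 0::2]                       # red photosites
    g1 = raw[0::2, 1::2]                       # first green photosite
    g2 = raw[1::2, 0::2]                       # second green photosite
    b  = raw[1::2, 1::2]                       # blue photosites
    g  = (g1.astype(np.float64) + g2) / 2.0    # average the two greens
    return np.dstack([r, g, b])

# tiny demo on a fake 4x4 raw frame
raw = np.arange(16, dtype=np.float64).reshape(4, 4)
print(super_pixel_debayer(raw).shape)          # (2, 2, 3)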


Just now, vlaiv said:

It did look rather whitish :D

[attached screenshot: the stacked image straight out of DSS]

I used super pixel mode as the debayering method:

[attached screenshot: DSS RAW/FITS settings - Super Pixel debayering]

And these were the stacking parameters:

[attached screenshot: DSS stacking parameters]

After that I loaded the image into StarTools (the trial - I don't use it to process images otherwise, but it can attempt to remove vignetting, which is why I used it), did a basic develop, removed the vignetting and color cast, and did a color balance.

Took a screenshot of the result at 50% zoom (the trial version won't let you save the image for further processing).

That's so awesome, thank you so much!

I actually used your image and got the following results:

[attached image: my reprocessed version of vlaiv's result]

Obviously, by pulling more nebulosity colour out I increased the noise tremendously, and that's reflected in the stars and elsewhere, but I find it kinda cool - we can see some more detail now :)

This coming Saturday I'll be going to a really dark site, so hopefully I'll be able to take much, much better shots!


Well I think that is pretty good for a first try, much better than my first attempt.

Rome wasn't built in a day, as they say (do they have that expression in Israel?). In other words, you are trying to do a lot of new things for the first time, and you can't expect to get everything right straight away. Hopefully it will be a few steps forward and some back each time, and you'll gradually progress.

Well done.

Carole 


[attached image: second processing attempt, this time in Gimp]

Here is another attempt - processed in Gimp. I did a "custom" vignetting removal - it kind of works. Here is what I've done:

Open the stacked image in Gimp 2.10 (use 32-bit per channel format). Copy the layer. Do a heavy median blur on the top layer (blur radius of about 90 px or something like that) - leave the next two parameters at 50% and use high precision.

Set the layer mode to division and set the layer opacity to something low, like 10%. Merge the two layers. Now just do levels / curves and you should get a fairly "flat" central region.
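If it helps to see the same trick written out, here is a rough numpy/scipy sketch of the blur-and-divide idea (not Gimp's exact maths - the low-opacity division in Gimp only applies part of the correction, which the blend factor below imitates):

import numpy as np
from scipy.ndimage import median_filter

def soften_vignetting(channel, blur_px=90, opacity=0.10):
    # channel: one colour channel as a 2-D float array with values > 0.
    pseudo_flat = median_filter(channel, size=blur_px)      # heavy median blur
    divided = channel / np.maximum(pseudo_flat, 1e-6)       # "division" layer mode
    divided *= np.median(pseudo_flat)                       # keep overall brightness
    return (1 - opacity) * channel + opacity * divided      # low layer opacity blend

The heavily blurred copy acts as a crude synthetic flat, so dividing by it flattens the large-scale gradient, while the low opacity keeps the correction gentle.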


Just now, carastro said:

Well I think that is pretty good for a first try, much better than my first attempt.

Rome wasn't built in a day, as they say (do they have that expression in Israel?). In other words, you are trying to do a lot of new things for the first time, and you can't expect to get everything right straight away. Hopefully it will be a few steps forward and some back each time, and you'll gradually progress.

Well done.

Carole 

Yeah we do, that's a pretty global phrase, isn't it? :)

I'm actually not expecting much at all. I mean, I do hope and try to get the most out of what I do, but I'm really happy with my first attempt. It's pretty bad on the AP scale, I guess, but it's still something.

1 minute ago, vlaiv said:

[attached image: second processing attempt, this time in Gimp]

Here is another attempt - processed in Gimp. I did a "custom" vignetting removal - it kind of works. Here is what I've done:

Open the stacked image in Gimp 2.10 (use 32-bit per channel format). Copy the layer. Do a heavy median blur on the top layer (blur radius of about 90 px or something like that) - leave the next two parameters at 50% and use high precision.

Set the layer mode to division and set the layer opacity to something low, like 10%. Merge the two layers. Now just do levels / curves and you should get a fairly "flat" central region.

That's really awesome as well, thanks for all the explanations - I really appreciate it!

One thing I can't understand: I often see the image flipped like that - when does that happen, and why? ^_^

Thanks again!! I'll keep trying :)


3 minutes ago, msacco said:

One thing I can't understand: I often see the image flipped like that - when does that happen, and why? ^_^

It's down to how different programs are written - two different conventions for coordinate system orientation.

In "normal" math we are used to the X axis pointing to the right and the Y axis pointing up (positive values increasing). Screen pixels work a bit differently - the top row of pixels on the screen has Y coordinate 0 and it increases "downwards" (next is row 1, then row 2 and so on) - so there is a "flip" of the Y coordinate.

If a program simply loads the file (which should be row 0, then row 1, then row 2, etc.) and displays it directly on the screen, you get one vertical orientation. If it loads the file into a "math" coordinate system, it will be reversed in the Y direction.

How software behaves depends on the people who programmed it - some use the math coordinate space and others just "dump" rows onto the screen - hence the Y flip between programs. It's not a big deal, though, as a Vertical Flip operation is always available and it is "non-destructive" - it does not change any pixel values, it just reorders them (the same goes for horizontal flips and 90-degree rotations).
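A trivial way to see that the flip really is just a reordering (numpy here, purely as an illustration):

import numpy as np

img = np.arange(12).reshape(3, 4)     # pretend 3x4 image, row 0 at the "top"
flipped = np.flipud(img)              # vertical flip: reverses the row order only

# same pixel values, nothing interpolated or altered
print(np.array_equal(np.sort(img, axis=None), np.sort(flipped, axis=None)))  # True
print(np.array_equal(np.flipud(flipped), img))  # True - flipping back restores the original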


1 minute ago, vlaiv said:

It's down to how different programs are written - two different conventions for coordinate system orientation.

In "normal" math we are used to the X axis pointing to the right and the Y axis pointing up (positive values increasing). Screen pixels work a bit differently - the top row of pixels on the screen has Y coordinate 0 and it increases "downwards" (next is row 1, then row 2 and so on) - so there is a "flip" of the Y coordinate.

If a program simply loads the file (which should be row 0, then row 1, then row 2, etc.) and displays it directly on the screen, you get one vertical orientation. If it loads the file into a "math" coordinate system, it will be reversed in the Y direction.

How software behaves depends on the people who programmed it - some use the math coordinate space and others just "dump" rows onto the screen - hence the Y flip between programs. It's not a big deal, though, as a Vertical Flip operation is always available and it is "non-destructive" - it does not change any pixel values, it just reorders them (the same goes for horizontal flips and 90-degree rotations).

Yeah of course, I was just curious about it 😁

Thanks for the detailed explanation once again. :)


I have finally found a stacked and calibrated image of the Horsehead I took years ago with a DSLR. (I might even have the stacked RAW data, but that would be a real pain to upload.)

Would you like me to post up the stacked and calibrated Horsehead for processing practice?

Carole 


1 hour ago, carastro said:

I have finally found a stacked and calibrated image of the Horsehead I took years ago with a DSLR. (I might even have the stacked RAW data, but that would be a real pain to upload.)

Would you like me to post up the stacked and calibrated Horsehead for processing practice?

Carole 

That would be great! The more practice the better :)


Horsehead Nebula captured with a Canon EOS 450D modified for AP, taken in November 2011 from an LP location (SE London, UK). This was captured and dithered in APT, by the looks of the file name. I often use this image for processing demonstrations at astrophotography talks, which is probably the only reason I still have it to hand.

Post up your result for CC and advice. 

Carole 

 

Horsehead 29.11.11 20 x mons 800 ISO ED120 APT dither.tif


  • 3 weeks later...

 

On 23/07/2019 at 13:08, carastro said:

Horsehead Nebula captured with a Canon EOS 450D modified for AP, taken in November 2011 from an LP location (SE London, UK). This was captured and dithered in APT, by the looks of the file name. I often use this image for processing demonstrations at astrophotography talks, which is probably the only reason I still have it to hand.

Post up your result for CC and advice. 

Carole 

 

Horsehead 29.11.11 20 x mons 800 ISO ED120 APT dither.tif

So I'm kinda bumping an old thread right now, but these are my results with that data:

[attached image: HORSEHEAD_FINAL2.jpg - my processed result]

 

Honestly, I don't really like the results - I don't feel like they're good enough. People are able to get some amazing results with my data, but I never manage to get much out of it.

Currently I'm mostly using ABE/DBE, background neutralization/color calibration/photometric color calibration, MultiscaleLinearTransform, SCNR, HDRMultiscaleTransform if it fits, and of course histogram and curves transformations and the screen transfer function.

Still looking into learning more tools, but that's pretty much what I know so far.
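For what it's worth, my mental model of the SCNR step (with the usual "average neutral" protection) is just clamping green against the red/blue average - a rough numpy sketch of that idea, not PixInsight's exact implementation:

import numpy as np

def scnr_average_neutral(rgb, amount=1.0):
    # rgb: float array of shape (H, W, 3) with values in [0, 1].
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_limit = np.minimum(g, (r + b) / 2.0)             # neutral-protected green
    g_new = (1 - amount) * g + amount * g_limit        # 'amount' scales the correction
    return np.dstack([r, g_new, b])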

I also got some M16 data from a friend and got to this:

[attached image: Deddy_M16_soft.jpg - my M16 result]

I believe it was taken in narrowband. Once again, it feels like I could get better results, I'm just not really sure how.

So if you or anyone else has some tips on what I could improve in my images, that would be really awesome.

Also, if you have your final version of the Horsehead from this data, that would be very interesting to see! :)

And of course, thanks for letting me practice with this!


That's not bad at all for someone just starting out with imaging. I think you are expecting too much of yourself - imaging is a long learning curve, and it can take some time to start getting reasonably decent results.

This is my processing of the image above, which I captured after a year of imaging. If I processed it today I could probably get a much better result.

[attached image: processed Horsehead result]


On 12/08/2019 at 01:54, carastro said:

That's not bad at all for someone just starting out with imaging. I think you are expecting too much of yourself - imaging is a long learning curve, and it can take some time to start getting reasonably decent results.

This is my processing of the image above, which I captured after a year of imaging. If I processed it today I could probably get a much better result.

[attached image: processed Horsehead result]

Thanks! I know I'm expecting too much, but I simply can't stand knowing that I could do better yet not managing to, or not knowing how to, achieve it. Of course, that pushes me to learn harder.

So hopefully I'll manage to get much better in the future, thanks a lot for the help!

