
manually choosing a guide star seems to improve my guiding



And M27 is bright lol. On the first 5-second plate-solve image I said "wow" out loud.

The flaming skull looks OK for ~90 minutes of data; I'll try to bring out the skull more later.

Next session I'm resetting PHD2 and starting from scratch. I think something odd is happening with auto star selection: guiding goes great for a little while, then gets worse, possibly as clouds affect it. I could get the SNR up to the mid 50s, which seemed to help, except when it didn't. Also, 3-second guide exposures never seemed to improve things, but 0.5 s did. Except when it didn't :(

 

[attached image]


Posted (edited)

So I watched Cuiv's video linked above again the other day, and I'm pretty sure I already do everything he covered.

I'm also watching a video from the Auckland Astronomical Society and will make some changes to what I currently do:

1. Use the SharpCap FWHM focus tool on my guide scope (if it's available in the free version, the ASI app otherwise). It seems "slightly de-focused" stars are not better than tightly focused ones (see the FWHM sketch after this list).

2. East-heavy balance: actually, going back to 'perfect balance', as I don't think my RA axis moves freely enough with the clutch released to set an east-heavy bias terribly accurately.

3. Try to get my counterweights further up the shaft, closer to the RA axis, though I might not be able to move them much and still balance in RA, given my payload weight.

4. Reset PHD2 and start from scratch.

5. Maybe bin my guide scope dew heater, as it has one setting: very warm. Maybe swap it for my main scope's dew heater.

5a. Set my minimum SNR to 20, up from 6.

6. Sit back and witness incredibly consistent guiding < 0.5".

7. Drink a can of Foster's upside down in honor of the AAS. Auckland is legally close enough to Australia.
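For what it's worth, here is a minimal sketch of the metric an FWHM focus tool is minimising, assuming you already have a small numpy cut-out around a single guide star (star detection and frame capture are left out, and this is not SharpCap's or ZWO's actual code):

```python
import numpy as np

def star_fwhm(cutout: np.ndarray) -> float:
    """Rough FWHM in pixels of a single star in a small cut-out.

    Uses the second moments of the flux distribution and assumes a roughly
    Gaussian profile (FWHM ~ 2.355 * sigma). Real focus tools fit a proper
    PSF, but the idea is the same: refocus until this number bottoms out.
    """
    img = np.clip(cutout - np.median(cutout), 0, None)   # crude background removal
    total = img.sum()
    if total <= 0:
        return float("nan")
    y, x = np.indices(img.shape)
    cx = (x * img).sum() / total                         # flux-weighted centroid
    cy = (y * img).sum() / total
    variance = (((x - cx) ** 2 + (y - cy) ** 2) * img).sum() / (2 * total)
    return 2.355 * float(np.sqrt(variance))              # Gaussian sigma -> FWHM
```

In practice you would average this over several stars and step the focuser in and out until the value stops falling.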

Edited by TiffsAndAstro

The colour seems fine, typical of this target. If you want to be more precise, the blue areas tend to be OIII-rich, so a slightly more teal colour.

It also benefits from a two-stage process: process the core gently to retain the detail, stretch the nuts off the outer region, then blend. But you need lots and lots of integration time to get into the second, and possibly further, extended shells or "wings".
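As a very rough illustration of that blend step (just a numpy sketch with made-up names, not anyone's actual workflow), assuming the gentle and aggressive stretches are already aligned float arrays and you build a smooth 0-1 mask over the core:

```python
import numpy as np

def core_mask(luminance: np.ndarray, threshold: float, softness: float) -> np.ndarray:
    """Smooth 0..1 mask that rises around `threshold` (a sigmoid), so the
    transition between the two stretches has no hard edge."""
    return 1.0 / (1.0 + np.exp(-(luminance - threshold) / softness))

def blend_core_and_shell(gentle: np.ndarray, aggressive: np.ndarray,
                         mask: np.ndarray) -> np.ndarray:
    """Keep the gently stretched core where the mask is ~1, and the
    hard-stretched outer shell where it is ~0."""
    return mask * gentle + (1.0 - mask) * aggressive
```

PixInsight, Siril and the like each have their own way of doing this (masked stretches, layer blending), but the arithmetic underneath is essentially this weighted sum.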

Edited by Elp

Just now, Elp said:

The colour seems fine, typical of this target. If you want to be more precise, the blue areas tend to be OIII-rich, so a slightly more teal colour.

Trying another version at the moment with some drizzle shizzle, and not removing green noise so early in my processing. This is a very, very bright target I think :)


Posted (edited)
6 minutes ago, Elp said:

Lots of variations on Astrobin; this was my first set of captured data:

 

 

lol, yes, yours is somewhat better than mine :) I can see one of the red jets within the main disc of mine, but it disappears as soon as it gets to the edge :) Yours are so clear and extended they look like laser death rays. I also have a faint hint of the outlying nebulosity, but nothing like yours.

Also, your stars and their colours are really nice.

Edited by TiffsAndAstro

Give it around 6 hours and you'll have more. It's one of the benefits of filters: the emission gases just jump out from the background sky.

For the stars you have to really bump up the saturation to get the colour.

Edited by Elp

5 minutes ago, Elp said:

Give it around 6 hours and you'll have more. It's one of the benefits of filters: the emission gases just jump out from the background sky.

For the stars you have to really bump up the saturation to get the colour.

Yours is narrowband? Wow, I just assumed it was broadband because of the range of colours. I doubt I'll be able to process a dual-NB version with colours as nice as yours, let alone your extra detail and drama.

A dual-NB filter is on my list; it will be an interesting comparison with this broadband version of mine. Whether I'd get this up to 6 hours first, I'm not sure...


Mine's actually done in mono; the stars I did with a 533 in just over an hour with simply a luminance filter (UV/IR cut). I've seen people get similar or better detail in less time, even in OSC, but sky LP could have been a contributing factor.


13 minutes ago, Elp said:

Mine's actually done in mono; the stars I did with a 533 in just over an hour with simply a luminance filter (UV/IR cut). I've seen people get similar or better detail in less time, even in OSC, but sky LP could have been a contributing factor.

I was just a bit shocked at how "natural"-looking it is, but also incredibly detailed.

 


With respect to guiding, it's easy to get hung up on the PHD2 stats. Before that happens, examine the subs: if you have round stars that are perhaps bigger than they should be, these days they (and your target object) can be expertly sharpened in software, the AI-driven XTerminator suite of tools being one of the popular choices, though there are others.

Sure, you should aim to get your guiding pretty much as good as it can be, but IMHO it is not as important to producing a decent image as it once was.
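To put a number on "as good as it can be": a common rule of thumb (an aside, not part of the post above) is to keep total guiding RMS at or below roughly half the imaging scale, for example:

```python
def guiding_budget(pixel_size_um: float, focal_length_mm: float) -> tuple[float, float]:
    """Imaging scale in arcsec/pixel and a ~half-scale total guiding RMS target."""
    scale = 206.265 * pixel_size_um / focal_length_mm
    return scale, scale / 2.0

# Example numbers only (not anyone's actual rig in this thread):
scale, target = guiding_budget(pixel_size_um=3.76, focal_length_mm=650)
print(f'Imaging scale {scale:.2f}"/px, aim for total RMS below ~{target:.2f}"')
```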

With regard to M27, the outer shells do need time. Here is my effort: 345 minutes with an f/2 RASA 8 and a dual-band NBZ filter from my Bortle 5/6 location. I still needed to use masked layers to pull out the outer shells.

[attached image]


43 minutes ago, tomato said:

With regard to M27, the outer shells do need time. Here is my effort: 345 minutes with an f/2 RASA 8 and a dual-band NBZ filter from my Bortle 5/6 location. I still needed to use masked layers to pull out the outer shells.

RASA is cheating. That's about 6 days' worth of integration with a normal scope 🤣


49 minutes ago, tomato said:

With respect to guiding, it's easy to get hung up on the PHD2 stats. […]

What concerns me most is not so much the roundness of my stars, but the same effect on non-star data, if that makes sense?

Fantastic image by the way.


23 minutes ago, TiffsAndAstro said:

What concerns me most is not so much the roundness of my stars, but the same effect on non-star data, if that makes sense?

Fantastic image by the way.

Thanks. BlurXTerminator can sharpen stars and/or the target subject. It's not to everyone's taste, but like all processing tools it needs to be used with care and consideration. Personally I wouldn't be without it.


Posted (edited)
14 minutes ago, tomato said:

Thanks. BlurXTerminator can sharpen stars and/or the target subject. It's not to everyone's taste, but like all processing tools it needs to be used with care and consideration. Personally I wouldn't be without it.

Yeah, I'd need PI first :)

I've played with Star Resynthesis in Siril and didn't like the results. I'm sure BXT is much better.

Because of that, I've never used deconvolution in Siril either, which I think does something similar (though not as well as BXT).

Also, ideally, I'd like to at least try to address the problem rather than fix it in software, but I'm not a zealot in that regard.

It's very hard for me to visualise what difference going from, say, 1.4" RMS to 0.8" RMS would make, but my images seem a bit smeary, like a Vaseline-smeared lens, which I'm guessing is at least partly down to this. It's not purely imaging scope focus, which I think is decent now, though not perfect. I am looking at the £60 autofocuser from my best friend Ali.
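As a very rough way to picture that difference: assuming the guiding error and the seeing add approximately in quadrature (the 2" seeing value below is just an assumed example, and optics, sampling and focus are ignored):

```python
import math

def expected_fwhm(seeing_fwhm: float, guide_rms: float) -> float:
    """Crude star-size estimate: seeing FWHM combined in quadrature with the
    guiding blur (RMS converted to an equivalent FWHM via the Gaussian 2.355)."""
    return math.hypot(seeing_fwhm, 2.355 * guide_rms)

for rms in (1.4, 0.8):
    print(f'{rms}" RMS -> stars around {expected_fwhm(2.0, rms):.1f}" FWHM (with 2.0" seeing)')
```

On this toy model the 0.8" night gives noticeably tighter stars, but seeing still dominates; it's indicative only.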

Edited by TiffsAndAstro

46 minutes ago, TiffsAndAstro said:

Yeah, I'd need PI first :) […]

PixInsight is a "unique" piece of software and takes some time to learn, but it is worth the price many times over. :)


32 minutes ago, Pompey Monkey said:

PixInsight is a "unique" piece of software and takes some time to learn, but it is worth the price many times over. :)

I'm sold on it; I just need £400 for it and the plugins.


1 hour ago, TiffsAndAstro said:

smeary

One of the main reasons is that you're not spending enough time on target, despite me advising it over and over. Think about it like this: imagine an object whose form is made up of individual LED lights (like a grid making up the shape), some distance away from your camera. To resolve detail you need to capture lots of the LEDs' photons on your sensor; if you only capture a few, at best you'll resolve a noisy outline. Another way to see it visually: stand in a room with the lights off, with maybe only diffuse, dim street light coming through the blinds or curtains so you can just about see something. Take a photo, first a short exposure of, say, a corner or door that is being lit by the outside light, then gradually increase the per-image exposure until the pitch blackness starts to resolve detail. The photos that collect more light (longer exposures) will eventually resolve better detail, and you'll also see the noise level decrease, up to a point. It's essentially the same principle. You need more signal.

No amount of software will help with a lack of data. You can likely get a result, but if you want detail, it needs to be in the data.
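The same point in numbers, under the usual approximation that stacked SNR grows with the square root of total exposure time (sky-noise limited, everything else equal):

```python
import math

def relative_snr(total_minutes: float, reference_minutes: float = 60.0) -> float:
    """SNR relative to a 1-hour stack, assuming SNR ~ sqrt(total exposure)."""
    return math.sqrt(total_minutes / reference_minutes)

for hours in (1, 3, 6, 17):
    print(f"{hours:>2} h -> about {relative_snr(hours * 60):.1f}x the SNR of a 1 h stack")
```

Doubling the SNR on the faint stuff costs roughly four times the integration, which is why the jump from one session to many is so visible.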


15 minutes ago, Elp said:

One of the main reasons is that you're not spending enough time on target… You need more signal. […]

It's not that I don't believe you, and I can see reduced noise and a bit more detail in my Crescent Nebula as I've added hours. It's still not very nice lol.


Once you have the data, it takes patience and very fine, subtle adjustments to eke out detail and appealing colour. I often spend at least 3-6 hours processing a single image, and come back the next day before I'm done with it.


Just to reinforce what Elp is saying: I feel you have too little data and are trying to push it too hard, to show objects that are beyond what you have currently captured. You're also fighting LP in Bortle 5/6, so you need more data than someone in Bortle 4 to get the same result.

As another point to what Elp suggested: have you imaged M51? If so, try matching this image with 255 hours of data!

In the past 8 days I have had 6 clear nights, 4 of those in a row. I spent almost the whole time gathering data on M27 during nautical and astronomical dark, and back into nautical dark in the early hours, including through the waning Moon (starting at circa 60%, if I recall).

In all, I've captured 341 usable 180 s frames in Bortle 4, with just a UV/IR-cut filter on my 585MC, and this is the result, in both a starless and a starred image.

Those outer shells were visible after the first night of circa 3 hours of data, but there was no way I could bring them out because they were too noisy. Now, at 17 hours of data... well, I was expecting more, if I'm honest. Although it's a quick process and I've lost detail in the core, I've really pushed my data hard to get this result. I have a few more clear nights forecast this week, so I might shoot some Ha and OIII with a dual-band filter to add to it before doing a final, more careful process as HaRGB (or HaOIIIRGB if that's possible, not that I've done it before!).

Processing is by far the hardest part of imaging, and it makes or breaks your image. If you had concentrated on a single target for all of your sessions, the noise would reduce, the details and fainter stuff would come through, and you would have a better final image. Quality over quantity is the name of the game. I learnt the hard way too: I used to think 8 hours was a lot of data; now I'm usually at 12-20 hours every time.

[attached image]
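For reference, the arithmetic behind those figures (the frame count and sub length are the ones quoted above; the "~3 h first night" is assumed to be about 60 subs, and the square-root scaling is the usual stacking approximation):

```python
import math

frames, sub_length_s = 341, 180                     # figures quoted in the post
total_hours = frames * sub_length_s / 3600
print(f"Total integration: {total_hours:.2f} h")    # ~17.05 h

first_night_frames = 60                             # assumed: ~3 h of 180 s subs
improvement = math.sqrt(frames / first_night_frames)
print(f"Background noise roughly {improvement:.1f}x lower than after night one")
```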

4 hours ago, Clarkey said:

RASA is cheating. That's about 6 days' worth of integration with a normal scope 🤣

And the rest when shooting in RGB it seems!

4 hours ago, TiffsAndAstro said:

I've played with Star Resynthesis in Siril and didn't like the results. I'm sure BXT is much better.

Because of that, I've never used deconvolution in Siril either, which I think does something similar (though not as well as BXT).

Star Resynthesis really sucks; it never gave me a good result, and it actually removed stars from the image. I would avoid it entirely.

Deconvolution is good if you have a decent amount of data; otherwise it'll end up making your image very noisy. You can also try wavelets (usually used for planetary processing), but you'll get a similar result.
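For anyone curious what deconvolution is actually doing, here is a bare-bones Richardson-Lucy sketch (a toy illustration only; it is not how Siril or BlurXTerminator implement it):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred: np.ndarray, psf: np.ndarray, iterations: int = 30) -> np.ndarray:
    """Classic Richardson-Lucy deconvolution of a 2D image with a known PSF.
    Each iteration sharpens the estimate but also amplifies noise, which is
    why it needs deep, clean data to look good."""
    psf = psf / psf.sum()                                     # normalise the PSF
    estimate = np.clip(blurred.astype(float), 1e-12, None)    # start from the blurred image
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

The noise amplification in that loop is exactly why it "ends up making your image very noisy" without enough data behind it.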

3 hours ago, TiffsAndAstro said:

I'm sold on it; I just need £400 for it and the plugins.

Yeah, this put me off too. But you only need BlurXT and NoiseXT; instead of StarXT you can use StarNet2, which is free (I do this). All of the XT tools also come with a free trial, as does PixInsight.

PI seems to have a reputation for being complicated, but I haven't found that at all. In fact, I don't know what all the fuss is about! I only use Siril for stacking now.


Posted (edited)


8 hours ago, WolfieGlos said:

Just to reinforce what Elp is saying: I feel you have too little data and are trying to push it too hard… Quality over quantity is the name of the game. […]

This is an amazing post, thank you. I won't be getting 250 hours of data on anything, ever.

I know I need more data on a single target, but with clouds and trees it's tricky.

Also, trying a variety of targets that are all new to me helps me get a handle on how they might look if I did.

And I was hoping to use the short summer nights to fix issues like my guiding, tube slop (which is mostly gone but might have returned), and even polar alignment accuracy/rig stability. I'm not keen on getting, say, 20 hours on a target if my issues aren't fixed; it would feel like a waste.

Edited by TiffsAndAstro
