Everything posted by ONIKKINEN

  1. This is normal, actually expected from skies as dark as yours and that camera. Your camera has a huge dynamic range, which makes saturating the signal very difficult. The result is that all of the faint signal is usually in the black parts of the shot. I have had images that contain on average just 25 photons per pixel per sub and there is no issue. But like you said, stretching is no issue, so there is no problem here with the histogram. By the way, the Siril autostretch function is quite aggressive, especially for low-noise images like yours, so when actually applying the stretch to the histogram I recommend walking the midpoint slider back a bit from what the autostretch function wants.
  2. StarTools is definitely the easiest software out there, I think. The module structure forces you into at least a workable workflow and guarantees that there is an image in the end. Other dedicated astrophotography processing software might as well be voodoo for a beginner; none of it makes any sense at all when starting out. Below is an image of what the histogram should look like after stretching, from one of my own projects: a nice curve with no defined "end" to the data, it just slowly diffuses towards the black and white ends. The effect is probably small, or non-existent if the person doing the processing is not able to use all of the image, but you have worked hard on a long integration, so taking the small steps is only a good thing. I think I may have stretched the image just a bit further to get the wispy stuff out more, and the core might be a bit better. There are no negatives to using a 32-bit file for the linear part of the processing, other than a somewhat larger file size and maybe a bit more load on your PC. After stretching, the added precision is not needed and you can save the image as 16-bit if further processing software doesn't like 32-bit (like Photoshop).
  3. Quick process of the unstretched file in Siril and Photoshop: just photometric colour calibration, stretch, deconvolution, saturation, the usual stuff. In Photoshop some layer-masked saturation/noise control/sharpness things. Didn't go too far on this one, but I think with sharpening and maybe an HDR-type process with layer masks there would be more in the image. I can't advise on how to get results out of StarTools, because so far I have not had success with it. My tries with StarTools always end up looking like they came out of StarTools, and I don't think that's a great thing. I find real issue in how the colour module just doesn't seem to get real colour results out no matter what I do, and all the other modules slowly walk the image towards what you got out of it: a posterized and mushy-looking image. Some might like the look of squeezing every bit out of an image, but I think it detracts from the image. I see that some people get much better-looking results than I can get out of it, so much of this is user error on my end, but I find other software much easier to use for processing. One issue I found is that the image is 16-bit, which is problematic for the faintest detail. I believe that is possibly a part of the issue with the posterized-looking edges in StarTools. Below is an image of the histogram in Siril: there are gaps in the histogram, meaning that the image has been stretched so far as to no longer fill all the values of a 16-bit image. This can lead to abrupt-looking edges in faint detail instead of it fading evenly away into the background like it should. It's not super obvious in this example and this is something many people don't bother with, but there is a difference between 32-bit and 16-bit processing and I think that's reason enough. Why not try processing in Siril though? Photometric colour calibration and a histogram transformation, and you already have a real-colour stretched image that you could then fiddle with in Photoshop or Gimp until the end of time, until you get what you like. Data is great by the way. Really nice capture.
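The 16-bit gap effect described above can be demonstrated with a few lines of numpy on synthetic data (the signal levels and the gamma-style stretch curve below are made up for illustration, not taken from the image in question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated faint linear data: the signal occupies a narrow band of the range.
linear = rng.normal(500, 50, size=100_000).clip(0, 65535)

# 16-bit route: quantize first, then apply a strong stretch.
as_u16 = linear.astype(np.uint16)
stretched_16 = ((as_u16 / 65535.0) ** 0.2 * 65535).astype(np.uint16)

# 32-bit float route: stretch at full precision, quantize only at the end.
stretched_32 = ((linear / 65535.0) ** 0.2 * 65535).astype(np.uint16)

def used_levels(img, lo, hi):
    """Count how many distinct output levels appear inside a tonal band."""
    vals = np.unique(img)
    return int(((vals >= lo) & (vals <= hi)).sum())

# The 16-bit route skips output levels (histogram gaps -> posterization);
# the float route fills the same band far more densely.
print(used_levels(stretched_16, 24000, 26000), used_levels(stretched_32, 24000, 26000))
```

Stretching spreads the few hundred distinct integer input values across thousands of output levels, so only the float route can fill the gaps in between.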
  4. @BrendanC Most of these issues above are fixed by flocking the inside of the tube, but of course if there is significant local lighting going on you need to block that somehow. The focuser drawtube light leak issue is quite small and I doubt it ruins your lights or flats unless you shine a light directly at the focuser, but it will definitely ruin your darks. The stock paint inside the tube is hardly black and will reflect some light here and there and may ruin flats; especially the area just behind the secondary mirror will reflect unwanted light back into your camera. Uncovered shiny screwheads and such should also be blackened. You can buy flocking material from FLO to line the inside of the tube with. I think it was something like £5 per roll (you will probably need 2), so not one of the more expensive investments in the hobby. I used to have problems with flats, but after doing many modifications, flocking the tube being one of them, I really have no issues at all with flats now.
  5. Applies to imaging too, I have found. It's always worth it to travel to darker skies, even if the extra travel time means you get half the time to image in the end. I have started using a Bortle 4/SQM 21.3 location instead of my usual SQM 19.5 spot and have found that an hour from the better skies can be about the same in value as an entire night in the worse spot... Difficult to tell myself to do the extra travel (1h extra both ways), but the results are obvious, so at least moonless nights will always have me travelling further from now on. Getting just a couple of hours of decent subs from the darker location is more than the worse location could give me in an entire night, so I waste less clear sky too. Can't change the weather, but can choose to use the weather more efficiently!
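The SQM figures above can be put into numbers: each magnitude per square arcsecond is a factor of 10^0.4 ≈ 2.512 in sky brightness, so the "an hour equals a night" feeling has some maths behind it. A quick sketch:

```python
# Sky background flux ratio between two sites from their SQM readings
# (mag/arcsec^2). One magnitude = a factor of 10**0.4 ≈ 2.512 in flux.
def sky_flux_ratio(sqm_bright, sqm_dark):
    return 10 ** (0.4 * (sqm_dark - sqm_bright))

# The two sites mentioned above:
ratio = sky_flux_ratio(19.5, 21.3)
print(f"SQM 19.5 sky is about {ratio:.1f}x brighter than SQM 21.3")
```

Roughly a 5x brighter sky background at the worse site; since background-limited imaging needs integration time proportional to the background to hit the same SNR, one hour at the dark site being worth most of a night at the bright one is plausible.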
  6. Stellarium - the Windows downloadable version, that is - is what I mostly use to check framing on targets before planning further. You can put your scope and camera details in there and it will show you how the target fits your particular setup. Stellarium images are not very deep though, and outside the usual suspects of imaging, the bright Messiers and such, the rest are quite lacklustre; to check a deeper image of what I'm planning on imaging I use the Legacy Survey viewer: https://www.legacysurvey.org/viewer It has many wavelengths and different catalogues, and some are exposed very deep. You can easily see if your target has IFN around it, for example. NINA's sky atlas and framing tool is what I use to create the sequence itself, but I have used it for discovery too. You can input parameters to the sky atlas browser and constrain what you get as a search result. I put in a declination of at least 60 and galaxy or galaxy cluster with a size of at least 5 arcminutes. Turns out there were a number of galaxies I had not heard of at all that I could image - NGC 4236 as an example. A beautiful but very dim spiral galaxy, yet for some reason not often imaged, as images of it are hard to come by.
  7. Ideally yes, of course. Bank account not in agreement on that though 😆. Over summer break, which starts in 2 weeks, I'll do a number of upgrades, and a new corrector is at the top of the list for sure.
  8. Just looking at the values; I'm not using the plugin. Check the value reported from a few subs to get a rough estimate, then move the focuser slightly one way and check if it improves or not. Once it doesn't improve, it's set. With good seeing and another target this takes no more than a minute.
  9. I'm having difficulty reaching ideal focus with this target. I have some ideas on how to improve, but tips are welcome! Now why is it difficult: I focus manually looking at the HFR values reported by NINA, and normally it's very straightforward. 2s or 4s exposures, depending on seeing, guarantee that the measurement is more or less independent of high-frequency seeing wobbles while still being reasonably short so it doesn't take forever. But with the Coma cluster there are hundreds of galaxies, some of which are picked up as stars by NINA, and they obviously are not point sources, so they will impact HFR calculations drastically. They are also so faint that it varies quite a lot between subs, so HFR-based focusing is difficult. I also think that as focus gets better, more and more of these faint galaxies are picked up as stars, and HFR actually increases as focus improves. Judging from the reported HFR values my frames are junk (HFR of more than 4 pixels; on other targets this is go-back-home territory of bad), so I really can't use this (I should note that the frames can look OK or good with poor HFR though, so just the measurement is suspect). Having a Newtonian, I have handy diffraction spikes that can act as focus indicators quite easily, but here I run into another issue: my coma corrector, which produces uneven star images across my APS-C sized camera, so in order to have a middle ground I need to use a star not in the centre, but not at the edges either (not that I would want to use the edge ones, since they are not good). Solutions I have: Spend longer on focusing, with longer focus exposures, to get better star images to visually gauge diffraction patterns from brighter stars. I used 15s subs and was able to determine decent focus in a few minutes. But since my scope takes at least an hour to cool down, I have to do this a couple of times a night. These are painful minutes, as the night is not even 5 hours long at this time of the year, but I must have good focus or the subs are a waste. Or: focus somewhere else and then slew back to the target. Problem is: even though my Diamond Steeltrack is a very good focuser, I find it can slip after slewing when the weather is cold, and the critical focus zone for good diffraction spikes and star images across the screen is ridiculously small with my Maxfield 0.95x coma reducer. Once all of the scope has equalized to the ambient temperature, this problem goes away by tensioning the focus drawtube with the tensioning screw. Before that, it is an issue. Would having an autofocuser fix this? Can the algorithm figure out ideal focus when the target is inherently throwing a spanner in the works? So far I have not considered autofocus because I am always scopeside when imaging and the setup is not left unattended.
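For reference, a typical autofocus routine handles noisy per-sub HFR by sampling several focuser positions, fitting a curve, and solving for the minimum, which averages out some of the scatter. A toy sketch (all positions and HFR numbers are invented for illustration):

```python
import numpy as np

# Sample HFR at several focuser positions around the expected focus point.
# These values are made up; a real routine would measure them from subs.
positions = np.array([4900, 4950, 5000, 5050, 5100], dtype=float)
hfr = np.array([4.1, 3.2, 2.8, 3.1, 4.0])

# Least-squares parabola fit: hfr ≈ a*p**2 + b*p + c
a, b, c = np.polyfit(positions, hfr, 2)

# Vertex of the parabola = estimated best focus position.
best = -b / (2 * a)
print(f"estimated best focus: {best:.0f} steps")
```

Because the fit uses all sample points at once, a single sub with a bad HFR measurement (a faint galaxy counted as a star, say) shifts the estimate far less than it would shift a single-point reading.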
  10. Try Siril, with Lanczos-4 interpolation as the method for registration. It should produce noticeably better stars than DSS, which I believe uses bilinear interpolation for its registration. As a Newtonian imager myself, I will also say that your collimation might be different on the other side of the meridian, especially if you have the stock focuser, stock tube, or really stock anything in the mechanical parts of the scope, which are not really up to the task of imaging on stock scopes. Guiding is also worse after a flip for me. Not sure how to fix that issue though. Maybe recalibrate guiding after the flip? Still experimenting, so I won't comment more on that as I'm just guessing myself.
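To illustrate why Lanczos-4 keeps stars tighter than bilinear, here is a small 1D sketch comparing the two kernels when resampling a narrow star-like profile by half a pixel (the profile is synthetic, not real data):

```python
import numpy as np

# Lanczos-a kernel (a=4 for Lanczos-4): sinc(x)*sinc(x/a) inside |x| < a.
def lanczos(x, a=4):
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# Bilinear reduces to a triangle ("tent") kernel in 1D.
def bilinear(x):
    return np.clip(1 - np.abs(np.asarray(x, dtype=float)), 0, None)

def shift(profile, dx, kernel):
    """Resample a 1D profile at positions offset by dx pixels."""
    idx = np.arange(len(profile))
    return np.array([np.sum(profile * kernel(i + dx - idx)) for i in idx])

# Narrow Gaussian "star" profile, then shift it by half a pixel (worst case).
xs = np.arange(-20, 21)
star = np.exp(-0.5 * (xs / 1.2) ** 2)
peak_lanczos = shift(star, 0.5, lanczos).max()
peak_bilinear = shift(star, 0.5, bilinear).max()
print(peak_lanczos, peak_bilinear)  # Lanczos keeps more of the star's peak
```

Bilinear averages the two neighbouring pixels and blurs the peak noticeably; the Lanczos kernel approximates ideal sinc interpolation, so the shifted star stays much closer to its original height and width.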
  11. Was taking 60s subs of Abell 1656, the Coma cluster, and a bright trail caught my eye. Found it amongst the subs and saw that it left what looks like a trail of some sort that persists for at least 2 frames, but I think the third one still shows a mark of the event. There was a dither in between, so it was possibly visible for more than 3 minutes. Re-entry of a space rock, or one of Elon Musk's toys? Comacluster trail.mp4
  12. Cygnus, and with it the summer Milky Way, is rising again. Had a look with the naked eye and with my 7x50 Nikons, just cruising through the endless starfields. Nicely visible from a Bortle 4 location without the moon in the way, but the feeling is bittersweet, as Cygnus rising signals the end of the season is at hand. M13 and M3 were also quite nice from these skies with the binos, if a little small. I usually have trouble finding them, but not this time, from much darker skies than I am used to.
  13. Before the scope hits the tripod or some cable snags happen. If that's not happening, then it's maybe not necessary - the same as with southern meridian crossings.
  14. My 8'' dated 2020 has the tape and some blobs of silicone, but not nearly as much silicone as this one here; I think the mirror cell in question might be a bit of a Monday-morning product? The tape in mine seems a bit redundant, but at least it doesn't feel like it's in the way. It doesn't really hold the mirror tightly; it just sort of loosely sits there around the mirror and the clip holders, so I doubt it could pinch the mirror, or really hold it in place, so it's just useless. I can wiggle the mirror a bit on the cell if I try, but I do have to try, so I guess my cell is working more or less as intended.
  15. You could do what I wrote above: connect to the handset with your PC and control the mount through the handset. In this configuration you can issue go-to commands from both your PC and the handset, and both know where the mount is pointing.
  16. Like above, I think less than half of the imaging resolution, with similar errors in both axes, is good guiding. No obvious excursions in declination and a periodic error that is in check are key here. I use the predictive PEC algorithm in PHD2 and it seems to be doing a decent job of keeping my somewhat poorly figured RA worm gear in check. But looking at the guide graph can be misleading, especially with a Newtonian, and you may have good guiding without round stars. That is, I believe, due to mechanical issues with most off-the-shelf Newtonians, including: focuser slop, tube deformation, mirror cell stability issues, secondary holder issues and more. All of these will effectively take your scope out of collimation randomly during the session. Your guide scope (if using one) can still report decent guiding because it is not aware of the other issues, unless the guide scope attachment method is also subpar (also common with Newtonians). Experts here regularly recommend OAGs for Newtonians for these reasons, and I'm beginning to see why, since some issues still persist for me even though I don't think they are related to guiding exactly. I will probably make the jump from a guide scope to an OAG some time in the near future, because I see "good guiding" but still varying degrees of issues with the actual subs.
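The "less than half the imaging resolution" rule of thumb is easy to put numbers on. In the sketch below, the focal length and pixel size are placeholders, not from anyone's setup in this thread:

```python
# Image scale in arcsec/pixel from pixel size (microns) and focal length (mm),
# using the standard 206.265 arcsec-per-radian conversion factor.
def image_scale_arcsec(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

# Placeholder setup: 3.76 µm pixels at 1000 mm focal length.
scale = image_scale_arcsec(pixel_um=3.76, focal_mm=1000)
target_rms = scale / 2  # rule of thumb: total guiding RMS under half the scale
print(f'image scale {scale:.2f}"/px -> aim for total RMS under {target_rms:.2f}"')
```

With these numbers, guiding with a total RMS under roughly 0.4" would count as good; a longer focal length or smaller pixels tightens the requirement proportionally.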
  17. You can also use the USB port (assuming you have one?) on the mount or hand controller to connect to a PC. If you connect to the hand controller, you can use both the hand controller and some software on your PC to control the mount; in this configuration EQMOD does not work. If you connect to the mount's USB port, you can use EQMOD to control the mount, but now the hand controller cannot be used (or be plugged in). With the hand controller route you will need the Skywatcher ASCOM drivers to control the mount, and you need to download and install working Prolific USB-to-serial drivers. The one on Skywatcher's website did not work for my AZEQ6, manufactured in 2022, but does for my EQM35 manufactured in 2020, so I believe Skywatcher just hasn't updated their site for a while. The benefit of either of these is that you do not need to buy the extra cable. But the dedicated cable is probably a bit more reliable, and most use that to connect to a Skywatcher mount.
  18. Yours is very nice! I kind of like the "ethereal" look of the haze in your shot. It almost makes it look like a galactic halo sort of thing, a really nice touch 👍. You could have spun it as a processing choice rather than something you wanted to get rid of, and I would have believed you 100%. Seeing yours, I do agree that my reds are not as vibrant, but yours still doesn't look oversaturated at all to me. Maybe I'll have another go; I still have the data somewhere in the terabytes stored on one of my hard drives. For me this target is available year-round at 60 degrees north - one of the perks of northern astrophotography is that some targets are always available. Well, maybe a bit too low at its lowest, but visually no problem, and I check it every time I convince myself to put an eyepiece in rather than a camera.
  19. I sometimes image them as secondary targets, like I did with this one; it doesn't need a long integration and the worst thing that could happen is the loss of 30 minutes. Most don't look nearly as interesting as the Double Cluster though.
  20. There are several bright stars within a few degrees of NGC 4236, and some of these can be in a sweet spot where they just about hit the edge of your coma corrector and cause reflections off the lenses - at least that's what I think happened for me (I am also imaging the galaxy). See the rainbow reflection thingy in the bottom left? That happens when I have a bright star at just the right distance from my image centre. I have had the same happen on a few other targets, so it's not unique to this one, but I do find that the range of distances is quite narrow. Just a bit closer or further and it goes away, though it can actually be worse than in my shot if I get just a bit further from the bright stars. Now, it doesn't really look the same to me; yours looks more like a flats/gradient removal issue rather than a well-defined halo like mine, but something like this could play a part as well.
  21. If I understood correctly how it works, I think this kind of automated process on subs of various quality would be almost revolutionary for astrophotography processing? You are a seemingly endless well of knowledge and wisdom when it comes to this hobby; thanks for sharing some of that with us mere mortals.
  22. This workflow is more or less what I do in Siril with my data. It looks scary, but once you get used to it, it's just another step in the voodoo magic of astrophoto processing and not that big of a hassle in the end. Splitting has some very small benefits over debayering and then binning, but everything in AP processing is done for a small gain, so why not do it, I think. I don't blink my data, or really visually inspect the subs in any way for that matter. I just use statistics measured from the subs to weed out the bad ones: measured FWHM, star roundness, background level and SNR. In Siril I use the dynamic PSF function to create a plot of all the subs, and it's just a few clicks to determine what gets kept and what gets thrown out. I think in PI you could use the SubframeSelector tool to do the same, but with much better stats than Siril can provide. Blinking can be fun for spotting asteroids and other anomalies in the subs, but other than that I don't give it much value for deciding whether a sub is kept or not. Sometimes after a night I see that almost all the subs are less than ideal and they get thrown out, but with long integrations I care less about losing a few hours of data since it's nowhere near done anyway. I have 2 in-progress projects at 15+ hours so far, and I estimate I may need to double that to reach a result that I really like, so what's losing a few hours when I'm looking at 15+ hours more in the end? For the most demanding objects I see that I not only need a long integration, but will also need all of the data to be better than average to get a sharp and deep image in the end.
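The stats-based culling described above amounts to something like the following. The per-sub stats and the thresholds are invented stand-ins for what the dynamic PSF (or SubframeSelector) measurements would give:

```python
from statistics import median

# Stand-in per-sub statistics, as a star-measurement tool might report them.
subs = [
    {"name": "sub_001", "fwhm": 2.4, "roundness": 0.92, "background": 510},
    {"name": "sub_002", "fwhm": 2.6, "roundness": 0.90, "background": 530},
    {"name": "sub_003", "fwhm": 4.8, "roundness": 0.70, "background": 900},  # clouds?
    {"name": "sub_004", "fwhm": 2.5, "roundness": 0.93, "background": 505},
]

med_fwhm = median(s["fwhm"] for s in subs)
kept = [
    s["name"] for s in subs
    if s["fwhm"] <= 1.3 * med_fwhm   # not much blurrier than the night's typical sub
    and s["roundness"] >= 0.8        # reject trailed / guiding-failure frames
    and s["background"] <= 700       # reject bright or cloudy frames
]
print(kept)
```

Judging FWHM relative to the night's median, rather than against a fixed number, keeps the filter meaningful across nights of different seeing.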
  23. Actually, I have not used PixInsight either; the script I use is for Siril. The command in question for Siril is seqsplit_cfa "your sequencename". I would be surprised if PI doesn't have the same feature though.
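For the curious, the underlying operation is simple: a CFA split just slices the Bayer mosaic into its four colour planes without interpolating anything. A numpy sketch assuming an RGGB pattern (the actual pattern depends on the camera):

```python
import numpy as np

def split_cfa_rggb(mosaic):
    """Slice an RGGB Bayer mosaic into four half-resolution colour planes."""
    return {
        "R":  mosaic[0::2, 0::2],   # even rows, even columns
        "G1": mosaic[0::2, 1::2],   # even rows, odd columns
        "G2": mosaic[1::2, 0::2],   # odd rows, even columns
        "B":  mosaic[1::2, 1::2],   # odd rows, odd columns
    }

# Tiny 4x4 example mosaic; each plane comes out 2x2.
mosaic = np.arange(16, dtype=np.uint16).reshape(4, 4)
planes = split_cfa_rggb(mosaic)
print({name: plane.shape for name, plane in planes.items()})
```

Each plane is a clean monochrome image of one filter colour, which is why splitting avoids the resolution-smearing interpolation that debayering introduces.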
  24. Right, that altitude will have an effect, but it's probably not the thing you're after. I have 2 different T-rings for my 550D and both are loose-fitting; I can rattle the camera in them a bit. I fitted some aluminium tape around the "groove" which the camera flange attaches to, and they were snug afterwards. If you have the same kind of T-rings, it's an easy fix. Not sure how much that helps if there is tilt, but it won't hurt and will make subsequent troubleshooting easier with that out of the way.
  25. Rising Cam to the rescue again? It will be interesting to see if they undercut ZWO and QHY prices like they did with the IMX571 cameras.