Everything posted by ONIKKINEN

  1. Had to look the scope up, but it seems to be a repackaged and remarketed PowerSeeker 114, which is one of the worst-value scopes money can buy (for a lot of people). The PowerSeeker name is so radioactive it seems Celestron, in their deviousness, have come up with alternative naming schemes for it...
  2. The reason I think calibration has worked OK is that the extremes of illumination in your flat and light appear to be corrected without anything funny going on in the corners. Below is a quick example with the STF in histogram mode in Siril and a rainbow false-colour rendering: I would expect the extremes to have something wrong with them in the calibrated image if the flats were up to no good, but it looks fairly even to me, and this method of extreme stretching with a false-colour rendering reveals the tiniest possible issue in gradients and flats. These are JPEGs of course, but that works OK for this comparison since the image is stretched.

I suppose it could be just a coincidence that the 2 images were oriented the same way, with "up" (towards the zenith) pointing similarly in both cases? But light leaks could throw a wrench in the works too. I sometimes have a light leak from the back of the scope by choice, because I want ventilation on the mirror if it's still cooling, so I choose the lesser of 2 evils by allowing light in but keeping the mirror ventilated. Most of the time, with light leaks from the rear of the scope, the effect is similar to a slight overcorrection of flats where the corners appear brighter than they should (something missing from your M63, for example). If you don't have any kind of cover behind the mirror keeping light out, that might be a good idea to try next time.

Hard to say from these images; it could be an issue or a coincidence. If the gradient is removable with DBE or something else (it seems to work OK on the JPEG), then I would probably be inclined to ignore it. You could have a light leak in your dark too, by the way, but I am assuming you have checked that already?
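The extreme-stretch-plus-false-colour inspection described above can be approximated outside Siril too. Below is a minimal sketch with NumPy and matplotlib; the percentile bounds and the function name are my own choices for illustration, not Siril's STF algorithm:

```python
import numpy as np
import matplotlib.pyplot as plt

def flat_check_render(img, lo_pct=0.1, hi_pct=99.9):
    """Hard-stretch a linear image between two percentiles, then map it
    through a rainbow colormap so faint flat/gradient residue stands out."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    stretched = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return plt.cm.rainbow(stretched)  # RGBA array, ready for plt.imsave
```

Saving the result with plt.imsave for both the light and the flat and comparing them side by side makes even small corner issues obvious.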
  3. Are you sure it's not just a gradient? It looks like a fairly normal linear sky gradient to my eyes and not good old Newtonian flat headaches. Narrowband Ha would be largely unaffected because the sky is much darker at Ha wavelengths, so there is less gradient overall. The uncalibrated stacks are a poor comparison when autostretched, because in those cases the shadow reference used by the autostretch comes from the vignetted corners instead of the sky gradient. With the vignetting likely being significantly more severe than the gradient, the gradient becomes invisible in an uncalibrated autostretched image. Maybe try cropping the images so that only the fully illuminated centre is visible and see if they still look different?
  4. Hi, sorry to tell you that the Astromaster 130 is absolutely not a scope you want to do astrophotography (of any type) with. The mirrors in these are spherical, so they will provide soft, detail-poor lunar/planetary images. For deep sky you would need a tracking mount, which will cost 5x the scope at least, so I am assuming that is out of budget. The mount under the Astromaster 130 is a crime and will provide no stability for imaging or viewing. Overall you should expect a frustrating experience with it. I have no personal experience with the other option, but since it is a longer-focal-ratio scope I would be inclined to believe it provides better sharpness for lunar/planetary work even with a spherical primary (if it has one, no idea). The mount is still junk though, and you'll have to fight it to get anything done. I am not saying you should give up, but it's time for a reality check and to think this through. What kind of imaging are you after? If you have a tight budget you should be extremely cautious of wrong purchases, which I think these are.
  5. That estimate is way too low. Enthusiast setups are often over 10K here on Earth! 100K to a couple of million, maybe, but 10K I really don't see happening unless it's some small system like a Samyang 135 taped to a CubeSat that will deorbit in a week.
  6. Amazing detail and processing, top tier image right there!
  7. This probably isn't the answer you are looking for, but I use one of these: https://www.amazon.co.uk/GL-iNet-GL-MT300N-V2-Converter-Pre-installed-Performance/dp/B073TSK26W/ref=sr_1_3?crid=3KKIP06AYMDDQ&keywords=mini+router&qid=1684697239&sprefix=mini+router%2Caps%2C274&sr=8-3 I'm using a Win10 mini-PC and have never had any connectivity issues; it has worked without issue ever since I set it up the first time.
  8. You can open the sequence that the OSC_preprocessing script creates to do manual stacking and make the decisions yourself, along with getting to use the Plot tab with some extra work. For the subject of this thread it could be very beneficial to do so.

After running the script, ignore the stacked image it spits out for now and change your Siril home folder to the "Process" folder the script creates. Then, in the Sequence tab, search for sequences; you should see several, and 2 of these are what you want to do something about. The pp_lights sequence is your preprocessed (calibrated) lights; these are not registered yet, so there is nothing to see in the Plot tab. The r_pp_lights sequence is your registered lights; open this to gain access to the Plot tab. I recommend you start with the pp_lights sequence, though, so that you can reject the bad stuff before registering. I'll go over shortly how to do the registering/stacking process manually. The UI is a bit intimidating, but in the end it's quite simple.

Starting with the pp_lights sequence opened, go to the Registration tab and select the "Two-Pass Global Star Alignment (deep-sky)" method; leave everything else at default. This method is the best because it goes over all of your subs and then selects the best possible frame as the reference frame, unlike other methods that use the first sub of the sequence, which is often not ideal (the script does this too). Siril will compute registration data at this point but not apply it yet, so you can inspect the data before actually applying the star alignment.

At this point, check the Plot tab. You can select from the bottom drop-down menus what you want the graphs to represent. Go over the "background" graph at least, and deselect the bad stuff by dragging a selection box over the graph and right-clicking. You should also go over the FWHM and roundness graphs so you can deselect out-of-focus and trailed images. The wFWHM graph is inverted, where top entries are bad and bottom entries are good; all other graphs are "normal".

After going over the data you have to actually apply the registration before stacking. Go back to the Registration tab, choose the "Apply existing Registration" option, and make sure the image selection is set to "Register selected images only". Then click Go register, and Siril will star-align only the subs you left in the Plot tab and reject the rest. After this you can stack the registered subs in the Stacking tab. Below are parameters I would recommend, which I believe are default or at least close to it. The awfully named rejection method I have here is excellent for large datasets, but it can take a while to process; if your PC is struggling, you could choose Winsorized sigma clipping with 3.0 for both low and high sigma (usually OK). If you tick the "Create rejection maps" option, you will find a reject-high and a reject-low image in the stacking folder after the process completes; there you can see all the satellite trails and other anomalies the algorithm has chosen to get rid of. You don't have to select anything for image rejection here, because you already rejected the bad images manually from the Plot tab.
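The Winsorized sigma clipping fallback mentioned above can be sketched in a few lines of NumPy. This is my own simplified illustration of the winsorizing principle (clamping outliers toward robust per-pixel bounds before averaging), not Siril's exact implementation:

```python
import numpy as np

def winsorized_clip_stack(cube, sigma_low=3.0, sigma_high=3.0, iters=3):
    """Stack a (n_subs, H, W) cube of registered subs.

    Outlying pixel values (satellite trails, cosmic ray hits) are clamped
    to robust per-pixel bounds built from the median and a MAD-based
    sigma estimate, then the clamped cube is averaged."""
    cube = np.asarray(cube, dtype=float)
    for _ in range(iters):
        med = np.median(cube, axis=0)
        # 1.4826 * MAD is a robust estimate of the standard deviation
        sigma = 1.4826 * np.median(np.abs(cube - med), axis=0)
        cube = np.clip(cube, med - sigma_low * sigma, med + sigma_high * sigma)
    return cube.mean(axis=0)
```

A satellite trail that hits one sub gets clamped toward the median of the other subs instead of dragging the mean up, which is exactly what the reject-high map visualizes.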
  9. I would keep only the first image and others with comparable levels. Have a look at the different graphs Siril draws in the Plot tab after registration and deselect the images that are significantly worse than the best of the night. Maybe choose a cut-off point of twice the background signal compared to the darkest subs of the night and stack only those. You should still use a weighting method for stacking, such as weighted FWHM in Siril. What the graph will look like: this was my last image of the year in late April; you can see that I started imaging before darkness, it never truly settled to proper darkness, and then it started to rise again. Easy to select only the lights you want to work with here and continue with those. Then, in the Stacking tab, you should probably select either wFWHM or #stars as the weighting method. Number of stars is a simple but effective measure of image quality, because as the background brightens you lose the dimmest stars, and if guiding had an issue during a sub you also lose the dimmest stars, and so on. The weighted FWHM method takes into consideration both the number of stars and their size, so it gives the highest weights to images with lots of tight, round stars and the lowest weights to images with few, oblong stars.
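The cut-off and weighting idea above can be sketched numerically. The twice-the-darkest-background cut matches the suggestion in the post; the stars-divided-by-FWHM weight is my own stand-in for Siril's wFWHM, for illustration only:

```python
import numpy as np

def select_and_weight(backgrounds, fwhms, n_stars, bg_cut=2.0):
    """Keep subs whose background is below bg_cut times the darkest sub,
    and weight the survivors so rich star counts and tight stars win."""
    bg = np.asarray(backgrounds, dtype=float)
    keep = bg < bg_cut * bg.min()
    weights = np.asarray(n_stars, dtype=float) / np.asarray(fwhms, dtype=float)
    weights = np.where(keep, weights, 0.0)
    return keep, weights / weights.sum()
```

Here a sub taken in twilight (high background) is dropped entirely, while the remaining subs share the stacking weight in proportion to their quality.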
  10. @Rodd what are you willing to change in your workflow or setup to make the perfect image happen? Many different imagers have offered their thoughts and advice, and so far none of them have come to the conclusion that your data is bad, so something has to change between the chair and the keyboard, so to speak. I think your data is good; it's not perfect, but I have not yet seen perfect data and I'm not sure it even exists. You mentioned that you don't want to use noise control measures and only stretch an image to where noise does not become visible. To me this is the worst option with modern tools (especially NoiseXTerminator), and you are deliberately choosing not to get the best result. If you want to keep doing this, then maybe aim for 300-hour integrations instead of 30? Or just apply some NR where applicable, and you triple the amount of dynamic range you have to play with to get the perfect result. This is a skill that has to be learned too; it's not like you just run the tool and the image improves. You need to know where to apply it, by how much, and at which point of the process. You keep moaning that other images look a way that your images do not, so what have YOU tried to do about it with your data? If you keep processing your images the same way and you are still not happy, why the surprise?

Then there is the problem of your kit, if you consider it a problem. You know your skies give somewhere in the range of 2.5'' to 3'' FWHM data in the average to below-average range (based on your lum stack here). So does the C11 have to stay? You could get rid of it and get a wider scope that is capable of showing all the same detail. Here you have more options than days in a week: an 8'' or 10'' Newtonian at either f/4 or f/5 with a quality corrector, an 8'' SCT, an 8'' RC, an 11'' RASA, a 150mm triplet refractor? All with significantly less focal length but still much better suited to your seeing conditions.

Apologies, this comment is quite blunt. It just seems to me you're looking to complain but not doing anything to improve, while there are several avenues you could take to get closer to the image you have in mind.
  11. I captured data on M101 on April 10th, so well before. One of those blobs, maybe?
  12. Not an SCT owner myself, so I wouldn't know. Although I have seen others with the issue install an extra focuser on theirs and permanently lock the mirror, so that might be an option.* *I see you already have one; I didn't read the comment properly. But before spending who knows how much on who knows what, you should test the stability of yours with flats in different scope orientations first.
  13. You could post a flat, a dark flat, a dark, and a light frame in raw .fits format to be analyzed by others. I have seen at least half a dozen cases where light leaks were to blame, but that's not always the issue. With Newtonians it's the clear usual culprit, but an SCT is more closed, so maybe not this time. You could have mirror flop or focuser sag, or really anything at all in the light path moving between lights and flats, and the result is bad calibration. This is testable though. Slew the scope to extremes on each side and take a flat: one at the zenith, one at maximum east and north, one at south, one at west, etc. The point is to get the scope into as many orientations as possible. Then divide one of these flats by another, simply with PixelMath in PI (Flat1/Flat2), and see what you get. The resulting image from dividing any flat by any other flat in this test should be a featureless grey mess with no brightness difference to be seen, even with stretching. If you get a gradient, you have mechanical issues, and flats will simply not work until they are solved.
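The PixelMath division test above can also be done numerically. A sketch with NumPy (real flats would be loaded with astropy.io.fits; the 1st-to-99th percentile spread metric is my own choice of uniformity measure):

```python
import numpy as np

def flat_ratio_spread(flat_a, flat_b):
    """Divide two flats taken at different scope orientations, normalize
    out the exposure difference, and return the 1st-to-99th percentile
    spread of the ratio. Near zero means the illumination pattern did not
    move between orientations; a few percent or more points at mechanical
    issues (mirror flop, focuser sag, light leaks)."""
    ratio = np.asarray(flat_a, dtype=float) / np.asarray(flat_b, dtype=float)
    ratio /= np.median(ratio)
    return float(np.percentile(ratio, 99) - np.percentile(ratio, 1))
```

Anything much above about one percent spread after normalization is the "gradient in the divided flat" the post warns about.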
  14. IR pass filters work great on the Moon. Longer wavelengths are less prone to poor seeing, so you can often find better detail in IR, even if shorter wavelengths have a chance of being sharper due to diffraction limits. So not completely useless. If your camera has an uninterpolated video shooting mode, like the movie crop mode in a 550D, then you can do some serious lunar work with it.
  15. Just for fun, here is my worst reject-high image from about 35 hours on M81/82, imaged between December 2022 and March 2023.
  16. Almost perfect tools already exist for the job. The generalized extreme studentized deviate (ESD) test (whoever came up with the name needs to never name anything again) is excellent at outlier removal from large stacks, and it rejects very little actual signal. And these tools will only get better! So I fully agree; this is not the hill to die on in terms of imaging issues.
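For the curious, the test follows the NIST formulation: repeatedly remove the most extreme point, compute the studentized deviate R_i and its critical value lambda_i, and declare the largest i with R_i > lambda_i as the outlier count. A sketch in Python with SciPy, applied here to a toy list of per-sub values rather than a real pixel stack:

```python
import numpy as np
from scipy import stats

def generalized_esd(values, max_outliers, alpha=0.05):
    """Generalized ESD test (NIST formulation).

    Returns the indices of the detected outliers in `values`."""
    data = np.asarray(values, dtype=float)
    n = len(data)
    live = np.arange(n)      # indices into the original array
    removal_order = []       # candidate outliers, in removal order
    exceeded = []            # did R_i exceed its critical value?
    for i in range(1, max_outliers + 1):
        dev = np.abs(data - data.mean())
        j = int(dev.argmax())
        r_stat = dev[j] / data.std(ddof=1)
        nu = n - i - 1       # degrees of freedom for this step
        p = 1.0 - alpha / (2.0 * (n - i + 1))
        t_crit = stats.t.ppf(p, nu)
        lam = (n - i) * t_crit / np.sqrt((nu + t_crit**2) * (n - i + 1))
        removal_order.append(int(live[j]))
        exceeded.append(r_stat > lam)
        data = np.delete(data, j)
        live = np.delete(live, j)
    # outlier count = largest i whose statistic exceeded its critical value
    n_out = 0
    for i, flag in enumerate(exceeded, start=1):
        if flag:
            n_out = i
    return removal_order[:n_out]
```

Note the "largest i" rule at the end: it is what lets the test sidestep masking, where one gross outlier inflates the standard deviation enough to hide another.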
  17. Deep Sky Stacker, Siril, and Sequator are all tools made for the job, with the difficulty of use going from easiest to hardest as Sequator -> Deep Sky Stacker -> Siril, but with Siril being the most powerful. They are also free.
  18. Forgot I had an image of the Leo Triplet too! I will have to stack that again, as I only have a pretty-picture-type process of it from long ago that I'm sure does not have the best possible background for faint-object detection. The Milliquas catalog finds a silly 180 quasars in the field of view, apparently 🤪.
  19. I think the different-looking "scales" in our images come from how the dynamic range is presented. I have crushed the shadows and midpoints pretty much together in order to show the faint parts better. That creates a very flat-looking image where all the detail appears at close to the same brightness, as if it were a distant background galaxy of even brightness, which is not always what you want (and a matter of preference anyway). I think this image of yours is really nice, an improvement over the initial one for sure. You have very nice colours where it matters, and the faint parts are much better seen. How strongly you want to show them is a matter of taste, so I don't think there is a right answer. If you want to compare our images in a more raw state, I am attaching the LRGB stack I used below. If there is a difference in the background of our linear images, it should be easier to see without fancy processing in the way. The background extraction tool in Siril is destructive to actual data if applied incorrectly, and even slightly when applied correctly, but it is fantastic for brute-force gradient removal and in my opinion much better than DBE, which I would describe as more careful with the data. RoddM101-lrgb-blurxt.fit Binned x3, then background extracted in Siril for the individual mono files, LRGB combined in PI, SPCC and BlurXT applied.
  20. Wouldn't call myself an expert, but I processed the dataset anyway; can't capture anything at the moment, so might as well practice. Some questions about the set: how bad is your light pollution? My initial guess from the data is that there is a fair bit of it, but it's not the end of the world, since the stacks still contain all the good stuff. But for almost 30 hours it does not look that deep, so I am guessing light pollution is a major issue. Apologies, I have to be the bearer of bad news, but it looks like calibration has not worked perfectly, as some dust motes remain, so at least the flats have something to improve. These are not at all apparent though and only appear after hard stretching. Not the end of the world, I think, but something to improve if you want to. I processed an L-RGB image out of the stacks. I used the file named l400 for luminance, since the FWHM difference is not too big compared to the sharper one, so I think it's not worth losing data for that. I did not include the Ha file, as I'm still learning how to do that properly and can't land on a "right"-looking result yet, so I don't want to muddy the waters with my poor attempts. Below is my attempt for now. Usually I process an image, delete the intermediate files, and sit on it for at least a day while occasionally looking at it. I can't recall if I ever used the first-day attempt as the final image; probably not, since the first result is often more wrong than right. Often it is attempt number 5 or more that gets posted. No such thing as too much reprocessing! The focus of this image was to show all the faint arms if possible, and it certainly does. The dust motes I found are below. Generally the image edges also took a dark turn while stretching, which may indicate imperfect flats, or imperfect background extraction (which often happens because of imperfect flats). Personally I would try to solve this issue; up to you to decide whether it's worth taking action on. Then I'll briefly go over what I did to the image.
First, I gave all the files a crop, then binned x3 based on the FWHM values and my preferences (probably should have binned x2, more on that later). Then I gave them a background extraction in Siril with the excellent tool there. The background in these stacks is very challenging, and I think you might be losing the faint stuff already at this stage if the background extraction goes wrong. The situation is tricky because the image has almost no background to speak of; all except 2 corners of the image contain faint spiral arms of M101, and if background samplers are placed there the result is sure to be ruined. I did the L-RGB combination in PixInsight after linear-fitting the RGB files to the luminance. I ran SPCC for colour calibration and BlurXT with automatic PSF and nonstellar set to 0.8; it seemed to work OK without any obvious BlurXT-caused issues. Then I saved the file and opened it in Siril again, because I like the user interface and stretching tools better than PI's. I gave the file an asinh stretch with a power of 1000 and some blackpoint adjustment to bring down the levels, and finished this stage with a histogram transformation for an "almost finished" image that I exported to Photoshop. At this point the file looked like this, an almost finished image and a sort of template for the dozen attempts that follow: The Photoshop phase sees almost all of the work: fiddling with various tools and sliders to no end until the image either fails in a death-by-a-thousand-cuts fashion or becomes what I initially had in mind, one brick at a time. I used a very simple background lift with the Shadows/Highlights tool to uncover some of the fainter spiral arms while still controlling the core. Love that tool; it couldn't be simpler to use, it works great for things like this, and I don't really hear anyone mention it online.
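The asinh-stretch-with-power-1000 step can be sketched as below. This mimics the spirit of an asinh transformation on a normalized linear image; the exact formula and the blackpoint handling are my simplification, not Siril's code:

```python
import numpy as np

def asinh_stretch(img, power=1000.0, blackpoint=0.0):
    """Nonlinear stretch: strongly lifts faint signal while compressing
    highlights. img is assumed normalized to [0, 1]."""
    x = np.clip(np.asarray(img, dtype=float) - blackpoint, 0.0, None)
    return np.arcsinh(power * x) / np.arcsinh(power)
```

With power=1000, a pixel at 1% of full scale comes out around 40% after the stretch, which is exactly why faint spiral arms become visible while the core stays under control.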
Sharpening was done with Smart Sharpen, applied only to high-SNR regions with either the Color Range tool, or lasso + feather + copy-paste as a new layer, adjusted to taste. I took many wrong turns on the way and think I went too far with the sharpening, as is almost always the case. This is usually something the finalversion_copy_adjust_mk2.4_usethis_jpeg file fixes a dozen attempts down the line, if I figure out how not to make the same mistakes (doesn't always happen). On the binning: I found myself looking at the image almost exclusively past 100% zoom while processing and applying sharpening. Usually this means the binning went too far and I could have used the image at a higher resolution, and I think this is where I made my first mistake. Some comments on your process: it is left very dark in the background, which makes it difficult or impossible to see the faintest spiral parts, because they are just a few levels above the background. Maybe you could ease up on the levels and leave them a bit higher? I recall you mentioning the background being difficult on many images, and I agree this one was challenging. So here I will recommend using the background extractor in Siril (or GraXpert, basically a standalone app of the same tool) with manually placed samplers. You have to be careful not to place a single sampler on a star or a faint spiral arm; it will likely take a few tries, as you can't see the arms before the gradients are removed. PS: as I write this comment I am already disliking the processed image I attached here. So I understand your frustration! Processing is an uphill battle for sure.
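On the binning point above: software bin x3 is just block-averaging 3x3 pixel groups, trading resolution for per-pixel SNR. A minimal sketch (the edge trimming and mean-combining here are my own choices; real tools may sum instead of average or handle edges differently):

```python
import numpy as np

def software_bin(img, n=3):
    """Bin a 2D image by averaging n x n blocks.

    Trims edge rows/columns that do not divide evenly by n."""
    h, w = img.shape
    img = img[: h - h % n, : w - w % n]
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))
```

Averaging n*n pixels reduces random noise by roughly a factor of n, which is why binning helps seeing-limited data; the cost is the resolution loss that had me zooming past 100%.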
  21. Very nice Plato; the rough terrain to the right of the crater is always visually striking. On the crater and edge artifacts: there is a perhaps slightly unethical but effective way to hide them in Photoshop using the Blur tool. Just paint over the bright crater edges, or other places where sharpening has uncovered artifacts, at a very high zoom level with the tool width at some small value like 5px. Viewing the image back at 100%, there is little evidence that this took place if you did not overdo it. I am certainly not above using the tool, and have found it useful when a handful of craters in an image have suffered.
  22. Unticking automatic PSF in the nonstellar adjustments portion and manually setting the value significantly smaller than the actual measured PSF diameter gets rid of the worms while still sharpening the image. Testing with my own M33, I find that setting the PSF diameter to somewhere between 40-60% of the actual measured value gives a result that is tastefully sharpened (to my eyes anyway). The issue could be in how BlurXT determines which type of deconvolution it applies, since this should be mostly a stellar adjustment, yet the worms come from the nonstellar portion of the tool.
  23. I discovered the great fun that is PixInsight annotation with the Milliquas (Million Quasars) catalogue and set out to find the most distant object I have happened to capture. To my great surprise, some are over 12 billion light years distant. Quasars around M106 with a redshift greater than 3: Closer look below, because these are just a handful of pixels each: 2 quasars with a redshift greater than 3 were also found around NGC 4725: Closer look again: The 2 z=3.7 quasars around M106 are the most distant objects I have seen in my images so far. Honestly, never in a million years would I have believed it's possible to capture something like this with relatively modest equipment and not that long integration times. 12 billion years. What does that even mean? The brain goes blank thinking about these numbers; it's more than twice the age of the solar system, and 2 billion years more than the Sun's total lifespan will be. Even putting it into words like this, it still doesn't fully make sense that light from so far away just happened to hit a 20cm mirror and reflect onto a camera chip that just happened to be exposing at that time. Now I wonder what the most distant object is that I could reasonably capture. Most of these are magnitude 20-something or 21-something, so I think I should be able to go a little deeper, with a realistic limiting magnitude probably around 23-24 based on my recent 35h image of M81/82, which does show some pixels suggesting targets at those magnitudes could be picked up. There is a z=4.7 quasar in the field of view of a Coma Cluster image I took last year, but it is magnitude 24-25 and not a single pixel is seen in the area, so I think it's a little too dim for the system. Thanks for looking; I encourage others to go on the quasar hunt, especially now that nights are either short or gone completely for most imagers here. -Oskari
  24. If daytime focusing is still difficult, maybe check that the prism is the right way round in the OAG? When I first installed my OAG I had my prism like the left example above, which made more sense to me, but the prism actually needs to be oriented like the right example.
  25. Yep, this one is in my opinion an improvement on the previous ones. Perhaps a little cold, but that is probably just a matter of taste. Though here we can see what Olly already mentioned about the Xterminator not being so plug-and-play for M33. I think these are BlurXT artifacts, where the individual stars in the spiral arms are connected into a squiggly, wormy, lattice-type thing, looking like a single mass joined by bridges: once you notice it, it gets hard to unsee. I don't think I ended up using BlurXT for my previous M33 last autumn, even when I reprocessed it a few months later (when BlurXT released), although processing is an ever-continuing uphill battle, so maybe today I would do it differently if I have learned anything since then. Looks like reprocessing my last M33 is on the menu next!