Everything posted by ONIKKINEN

  1. I think the different-looking "scales" in our images come from how the dynamic range is presented. I have crushed the shadows and midpoints pretty much together in order to show the faint parts better. That creates a very flat-looking image where all the detail appears close to the same brightness, as if it were a distant background galaxy of even brightness, which is not always what you want (and a matter of preference anyway). I think this image of yours is really nice, an improvement over the initial one for sure. You have very nice colours where it matters, and the faint parts are much better seen. How strongly you want to show them is a matter of taste, so I don't think there is a right answer. If you want to compare our images in a more raw state, I am attaching the LRGB stack I used below. If there is a difference in the background of our linear images, it should be easier to see without fancy processing in the way. The tool in Siril is destructive to actual data if applied incorrectly, and even slightly so when applied correctly, but it is fantastic for brute-force gradient removal and in my opinion much better than DBE, which I would describe as more careful with the data. RoddM101-lrgb-blurxt.fit Binned x3, then background extracted in Siril for the individual mono files, LRGB combined in PI, SPCC and BlurXT applied.
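The bin x3 step mentioned above can be sketched in plain Python. This is a minimal sum-binning illustration only, not Siril's actual implementation (real tools also offer average binning and handle image edges more carefully):

```python
def bin_image(pixels, factor=3):
    """Software-bin a 2D image by summing factor x factor blocks.

    A minimal sketch of sum binning; rows/columns that do not divide
    evenly by the factor are simply discarded here.
    """
    h = len(pixels) // factor * factor
    w = len(pixels[0]) // factor * factor
    binned = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            # Each output pixel is the sum of a factor x factor block,
            # trading resolution for per-pixel signal-to-noise.
            row.append(sum(pixels[y + dy][x + dx]
                           for dy in range(factor)
                           for dx in range(factor)))
        binned.append(row)
    return binned

# A 6x6 image of ones bins down to 2x2; each output pixel sums 9 inputs.
print(bin_image([[1] * 6 for _ in range(6)]))  # [[9, 9], [9, 9]]
```

The payoff is that each binned pixel collects the signal of 9 original pixels, which is why binning is attractive when seeing, rather than sampling, limits the resolution.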
  2. Wouldn't call myself an expert, but I processed the dataset anyway. Can't capture anything at the moment, so might as well practice. Some questions about the set: how bad is your light pollution? My initial guess from the data is that there is a fair bit of it, but it's not the end of the world, since the stacks still contain all the good stuff. For almost 30 hours, though, it does not look that deep, so I am guessing light pollution is a major issue.

Apologies, I have to be the bearer of bad news, but it looks like calibration has not worked perfectly, as some dust motes remain, so at least the flats have something to improve. These are not at all apparent though and only appear after hard stretching. Not the end of the world, I think, but something to improve if you want to.

I processed an L-RGB image out of the stacks. I used the file named l400 for luminance, since its FWHM difference compared to the sharper one is not too big, so I think it's not worth losing data over. I did not include the Ha file, as I'm still learning how to do that properly and can't land on a "right"-looking result yet, so I don't want to muddy the waters with my poor attempts. Below is my attempt for now. Usually I process an image, delete the intermediate files and sit on it for at least a day while occasionally looking at it. I can't recall if I ever used the first-day attempt as the final image; probably not, since the first result is often more wrong than right. Often it is attempt number 5 or more that gets posted. No such thing as too much reprocessing! The focus of this image was to show all the faint arms if possible, and it certainly does.

And the dust motes I found are below. The image edges also took a dark turn while stretching, which may indicate imperfect flats, or imperfect background extraction (which often happens because of imperfect flats). Personally I would try to solve this issue; it's up to you to decide whether it's worth taking action on. Then I'll briefly go over what I did to the image.
First, I gave all the files a crop and binned x3 based on the FWHM values and my preferences (probably should have binned x2, more on that later). Then I gave them a background extraction in Siril with its excellent tool. The background in these stacks is very challenging, and I think you might already be losing the faint stuff at this stage if the background extraction goes wrong. The situation is tricky because the image has almost no background to speak of; all but 2 corners of the image contain faint spiral arms of M101, and if background samplers are placed there the result is sure to be ruined.

I did the L-RGB combination in PixInsight after linear-fitting the RGB files to the luminance one. I ran SPCC for colour calibration and BlurXT with automatic PSF and nonstellar set to 0.8. It seemed to work OK without any obvious BlurXT-caused issues. Then I saved the file and opened it in Siril again, because I like its user interface and stretching tools better than PI's. I gave the file an Asinh stretch with a power of 1000 and some blackpoint adjustment to bring down the levels. I finished this stage with a histogram transformation for an "almost finished" image that I exported to Photoshop. At this point the file looked like this, an almost finished image and a sort of template for the dozen attempts that follow:

The Photoshop phase sees almost all of the work: fiddling with various tools and sliders to no end until the image either fails in a death-by-a-thousand-cuts fashion or becomes what I initially had in mind, one brick at a time. I used a very simple background lift with the Shadows/Highlights tool to uncover some of the fainter spiral arms while still controlling the core. Love that tool; it couldn't be simpler to use, works great for things like this, and I don't really hear anyone mention it online.
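The asinh stretch step can be sketched like this. This is one common form of the transform, not necessarily Siril's exact implementation; the `power` and `blackpoint` names mirror the dialog's controls as an assumption:

```python
import math

def asinh_stretch(x, power=1000.0, blackpoint=0.0):
    """Asinh-stretch a normalized [0, 1] pixel value.

    Subtract the blackpoint, then compress with asinh so faint signal
    is lifted strongly while bright highlights are barely touched.
    A sketch of the general technique, not Siril's exact formula.
    """
    x = max(x - blackpoint, 0.0)
    return math.asinh(power * x) / math.asinh(power)

# Faint signal gets a huge boost, bright signal much less:
print(round(asinh_stretch(0.001), 3))  # 0.116 — a 100x lift for faint arms
print(round(asinh_stretch(0.9), 3))    # 0.986 — the core barely moves
```

That asymmetry is exactly why asinh works so well here: the faint outer arms climb out of the shadows while the already-bright galaxy core is left nearly untouched.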
Sharpening with Smart Sharpen was applied only on high-SNR regions, selected with either the Color Range tool, or lasso + feather + copy-paste as a new layer, adjusted to taste. I took many wrong turns on the way, and I think I went too far with the sharpening, as is almost always the case. This is usually something the finalversion_copy_adjust_mk2.4_usethis_jpeg file fixes a dozen attempts down the line, if I figure out how to not make the same mistakes (doesn't always happen). On the binning: I found myself looking at the image almost exclusively past 100% zoom while processing and applying sharpening. Usually this means binning went too far and I could have used the image at a higher resolution, and I think this is where I made my first mistake.

Some comments on your process: the background is left very dark, which makes it difficult or impossible to observe the faintest spiral parts, because they are just a few levels above the background. Maybe you could ease off on the levels and leave them a bit higher? I recall you mentioning the background being difficult on many images, and I agree this one was challenging. So I will recommend using the background extractor in Siril (or GraXpert, basically a standalone app of the same tool) with manually placed samplers. You have to be careful not to place a single sampler on a star or on a faint spiral arm. It will likely take a few tries, as you can't see the arms before the gradients are removed.

PS: as I write this comment I am already disliking the processed image I attached here. So I understand your frustration! Processing is an uphill battle for sure.
  3. Very nice Plato; the rough terrain to the right of the crater is always visually striking. On the crater and edge artifacts, there is a perhaps slightly unethical but effective way to hide them in Photoshop using the Blur tool. Just paint over the bright crater edges, or other places where sharpening has uncovered artifacts, at a very high zoom level with the tool width set to some small value like 5 px. Viewing the image back at 100%, there is little evidence this took place if you did not overdo it. I am certainly not above using the tool, and have found it useful when a handful of craters in an image have suffered.
  4. Unticking automatic PSF in the nonstellar adjustments portion and manually setting the value significantly smaller than the actual measured PSF diameter gets rid of the worms while still sharpening the image. Testing with my own M33, I find that setting the PSF diameter to somewhere between 40-60% of the actual measured value gives a result that is tastefully sharpened (to my eyes anyway). The issue could be in how BlurXT decides which type of deconvolution it applies, since this should be a mostly stellar adjustment, yet the worms come from the nonstellar portion of the tool.
  5. I discovered the great fun that is PixInsight annotation with the Milliquas (Million Quasars) catalogue and set out to find the most distant object I have happened to capture. To my great surprise, some are over 12 billion light years distant. Quasars around M106 with a redshift greater than 3: Closer look below, because these are just a handful of pixels each: 2 quasars with a redshift greater than 3 were also found around NGC4725: Closer look again:

The 2 z=3.7 quasars around M106 are the most distant objects I have seen in my images so far. Honestly, never in a million years would I have believed it's possible to capture something like this with relatively modest equipment and not-that-long integration times. 12 billion years. What does that even mean? The brain department goes blank thinking about these numbers; it's more than twice the age of the solar system, and 2 billion years more than the Sun's total lifespan will be. Even putting it into words like this, it still doesn't fully make sense that light from so far away just happened to hit a 20 cm mirror and reflect onto a camera chip that just happened to be exposing at that time.

Now I wonder what the most distant object is that I could reasonably capture. Most of these are magnitude 20-something or 21-something, so I think I should be able to go a little deeper, with a realistic limiting magnitude probably around the 23-24 mark based on my recent 35 h image of M81/82, which does show some pixels suggesting targets at those magnitudes could be picked up. There is a z=4.7 quasar in the field of view of a Coma Cluster image I took last year, but that one is magnitude 24-25 and not a single pixel is seen in the area, so I think it's a little too dim for the system. Thanks for looking; I encourage others to go on the quasar hunt, especially now that nights are either short or gone completely for most imagers here. -Oskari
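Where the "over 12 billion light years" figure for a z=3.7 quasar comes from can be sketched with a lookback-time integral. This is a rough flat-ΛCDM estimate; the parameter values below are my assumptions (close to the Planck 2018 fit), not whatever the catalogue itself uses:

```python
import math

H0 = 67.7              # Hubble constant, km/s/Mpc (assumed)
OMEGA_M = 0.31         # matter density (assumed)
OMEGA_L = 0.69         # dark energy density (assumed, flat universe)
HUBBLE_TIME_GYR = (3.0857e19 / H0) / 3.156e16  # 1/H0 converted to Gyr

def lookback_time_gyr(z, steps=100_000):
    """Midpoint-rule integration of t_L = (1/H0) * int_0^z dz' / ((1+z') E(z'))
    with E(z) = sqrt(Omega_m (1+z)^3 + Omega_L)."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz
        e = math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
        total += dz / ((1 + zp) * e)
    return HUBBLE_TIME_GYR * total

print(round(lookback_time_gyr(3.7), 1))  # roughly 12 Gyr of light travel time
```

So the light left those quasars roughly 12 billion years ago, which is where the "more than twice the age of the solar system" comparison comes from.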
  6. If daytime focusing is still difficult, maybe check that the prism is the right way round in the OAG? When I first installed my OAG I had the prism like in the left example above, which made more sense to me, but the prism actually needs to be oriented like in the right example.
  7. Yep, this one is in my opinion an improvement on the previous ones. Perhaps a little cold, but that is probably just a matter of taste. Though here in this image we can see what Olly already mentioned before, about the Xterminator being not so plug-and-play for M33. I think these are BlurXT artifacts, where the individual stars in the spiral arms are connected into a squiggly, wormy, lattice-type thing, so they look like a single mass joined by some bridge: Once you notice it, I think it gets hard to unsee. I don't think I ended up using BlurXT for my previous M33 last autumn, even when I reprocessed it a few months later (when BlurXT released), although processing is an ever-continuing uphill battle, so maybe today I would do it differently if I have learned anything since then. Looks like reprocessing my last M33 is on the menu next!
  8. In the Post-Calibration tab of WBPP, go to the top-right selection box "Exposure tolerance" and set that number higher than the difference you have in your subs. For example, if you set it to 30s, images within 30s of each other will be stacked into a single master light, so in your case the 30s and 60s subs would be stacked together. And yes, you can keep adding subs to year-old projects. As long as you calibrate everything with its own calibration frames, it really doesn't matter in which year the data was taken.
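The grouping idea behind that setting can be sketched in a few lines. This is only an illustration of the concept, not WBPP's actual algorithm:

```python
def group_by_exposure(exposures, tolerance):
    """Group exposure times so that frames within `tolerance` seconds of
    a group's first member stack together.

    A sketch of the exposure-tolerance idea only; WBPP's real grouping
    logic is more involved (filters, sessions, keywords, etc.).
    """
    groups = []
    for exp in sorted(exposures):
        if groups and exp - groups[-1][0] <= tolerance:
            groups[-1].append(exp)  # close enough: same master light
        else:
            groups.append([exp])    # too far apart: start a new group
    return groups

# With a small tolerance, 30s and 60s subs stack separately;
# raising the tolerance to 30 merges them into one master light.
print(group_by_exposure([30, 30, 60, 60], tolerance=10))  # [[30, 30], [60, 60]]
print(group_by_exposure([30, 30, 60, 60], tolerance=30))  # [[30, 30, 60, 60]]
```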
  9. Reading your situation, it still sounds to me like the overwhelmingly easiest and cheapest option would be to just set up guiding. Any 30mm guide scope and a mini-PC zip-tied/velcroed somewhere on the setup (it need not be fancy) and you are set. Any guiding will be better than no guiding at all, no matter what you might think about the backlash of your AVX.

So without guiding you are rejecting 25% of all subs, and that's still acceptable? Sounds way too much for normal operation to me; scrapping 25% each night would be a disaster in my books. Like you said, you have 5'' error in every sub and you find that acceptable, but guiding results in at most 1.2'' error, so to me it seems obvious that you should just keep guiding, as it's roughly 4x better. Who cares about backlash if the end result is 1.2'' RMS at a resolution of 4.7''/px? That is excellent guiding for the setup.

On the last part, whether there are mounts that can do better than yours (yours sounds like a really good copy of a mount): there are mounts with encoders, but these are very pricey: https://www.teleskop-express.de/shop/product_info.php/info/p12834_iOptron-GEM28EC-with-iPolar--Case-and-LiteRoc-Tripod.html Again, by far the simplest and most cost-effective option is to just guide.
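Why 1.2'' RMS counts as excellent at that image scale boils down to one division. The numbers are from the post; the half-a-pixel rule of thumb is my own assumption, not a hard rule:

```python
# Guiding error expressed in pixels. A common rule of thumb is that
# a total RMS below roughly half the image scale will not visibly
# bloat stars (the 0.5 px threshold is an assumption, not a law).
rms_arcsec = 1.2    # guided RMS from the post, arcseconds
image_scale = 4.7   # image scale from the post, ''/px
rms_pixels = rms_arcsec / image_scale
print(round(rms_pixels, 2))  # 0.26 px — comfortably below half a pixel
```

At roughly a quarter of a pixel, the guiding error is effectively invisible in the final stack, backlash or not.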
  10. See if yours is silicone-glued to the mirror cell? Mine was, from 3 points on the side connecting to the clips, and on the bottom where the little nylon pegs hold the mirror in the cell. You'll need to get rid of the silicone somehow if that's the case, with a box cutter or similar. Careful not to scratch the mirror, of course! Also, if that seems like too much trouble, you don't have to remove the mirror from the cell to clean it.
  11. Honestly, Siril goes really far on its own. My 2 cents would be to stick with it + GIMP for now, or maybe consider subscribing to Photoshop. At 12€/month it's an absolute steal of a bargain, and it would take more than 2 years for it to become "more expensive" than PixInsight, which I think you might not be able to use efficiently at the moment as it's quite challenging to learn, although here I inject my own bias into the mix as I learned other tools before PI. Photoshop has many included plugins that are easy to use and work well with astrophotos (like Camera Raw, Smart Sharpen, Unsharp Mask, noise reduction, Lightroom etc.).

Siril has an amazing background extraction tool, a simple and effective photometric colour calibration tool, and easy-to-use stretching tools with a simple interface where you actually see what you are doing while you are doing it (as opposed to PixInsight). The newest version can also do non-blind deconvolution, which will sharpen your images if there is anything to sharpen while attempting not to boost noise. You can also link StarNet++ V2 to it and run it on linear images to create a starless and a stars-only image to play with further in GIMP or Photoshop, should you choose to do that.

I would say get to the bottom of the basic workflow first: well-calibrated data, well-stacked data with obvious outliers removed using the "Plot" tab in Siril, background extraction, colour calibration and stretch. All are very easy to do in Siril and result in an "almost done" image!
  12. More reprocessing, this time only M82: Slightly different looking maybe? Stared at the image far too long to tell the difference any more.
  13. Looks professional; I could imagine an astro gear manufacturer's stamp on the side and a 50€+ price tag on it. But the Finnish winter eats plastic parts, especially those under any tension, so I would probably not dare use it myself. I am guessing you don't have to worry about something like that, so it looks good to go.
  14. You definitely have some IFN! The lower part will look like it connects to M81 as an extra spiral arm if given more time. But that might require a really unhealthy dedication to one image from Bortle 8. But still, IFN is IFN and you absolutely do have it here which is an accomplishment on its own from inner city conditions.
  15. Truly mind-bending numbers at play here, love to see it. Now I have fallen down the quasar-search rabbit hole to see just how long some of those photons spent on the way to my camera sensor.
  16. Using NINA myself. Everything works as intended (user error excluded) and I have been perfectly satisfied with it for all my needs. I use an old version from 2021, but since everything works just fine I am not going to update it.
  17. Really sharp! Not sure I would call this oversampled, no matter what the theory says. My eyes seem to like it very much!
  18. You are probably right about the IFN being too prominent in the background. I am desperately trying to make it observable, but probably some of it should not be shown quite so strongly.
  19. Reprocessed the image I posted in March from scratch: I much prefer this one to the last one posted here: Let me know what you think! -Oskari
  20. I have been feeling the need for speed lately, which has manifested itself as a desire for a 12'' f/3 carbon newtonian. Would cost an arm, but maybe not all of a leg, to make happen. This would also be just about the largest and fastest scope still viable for mobile use without an observatory. Not that I have an arm or half a leg to spare for the costs at the moment, but hey, we are dreaming here 😉. Or a RASA 36, or an 8'' refractor, or a Takahashi CCA-250? I'll stop here or I will list every large-aperture imaging scope ever made.
  21. The second image (the one taken on 23/4?) looks quite a bit worse than the first one. It's not obvious from the stats: since the offset is 256, we have around 1000 ADU in the first night, which gives us 250 electrons at gain 100, which has an e-/ADU rate of approx 0.25. The second night is then somewhere around 1200 ADU / 300e-, so not an earth-shaking difference, but both are pretty high already.

I'm just looking at these in the Siril autostretch, and there is a noticeable difference when I blink between them: the first night has an obviously brighter M51 and stronger stars than the second, and indeed, measuring the number of detected stars, I see a drop off a cliff from 640 stars to 400 in the second-night sub. Fewer stars, a darker-looking target, a little more signal in the same sub length and filter: sounds like terrible transparency on the second night. It really can be that bad if there is thin high cloud, excessive humidity, or aerosols like smoke or pollen (we have a pollen apocalypse here at the moment, for example). All of that will reduce the actual signal you want, but still make it seem like you are getting usable signal, since the frames have a familiar-looking ADU count on them just from local lighting conditions.

On flats: you can drop the dark flats if you want to and use bias frames, or even the dark master, as a dark flat. You can also subtract the offset by some other method; I know APP does some kind of pedestal thing for flats, and if I recall correctly WBPP in PI also has an option like this. It's important that the offset gets removed, just not very important how, given how little dark signal there will be in flats. In principle it's the same thing for your lights: you could drop darks and just subtract the offset. With 180s lights you are getting less than a tenth of an electron of dark signal per pixel on average if you cool down to -10, so it's up to you to decide whether that's worth taking darks for.
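The rough arithmetic above can be written out explicitly. The 0.25 e-/ADU figure at gain 100 is from the post; the dark current rate used below is an assumed ballpark for a cooled CMOS sensor at -10 C, not a measured value for this particular camera:

```python
E_PER_ADU = 0.25  # conversion gain at gain 100, from the post

def adu_to_electrons(adu):
    """Convert a mean ADU count to electrons, following the post's
    rough arithmetic (offset handling glossed over)."""
    return adu * E_PER_ADU

print(adu_to_electrons(1000))  # 250.0 e- — first night's sky background
print(adu_to_electrons(1200))  # 300.0 e- — second night

# Dark signal accumulated in one 180 s sub at an assumed ballpark
# rate of ~0.0005 e-/px/s: well under a tenth of an electron,
# which is why darks add so little for short cooled exposures.
dark_rate = 0.0005  # e-/px/s, hypothetical value for illustration
print(dark_rate * 180)  # 0.09 e- per pixel
```

With hundreds of electrons of sky background per pixel against a dark signal of well under one electron, the sky, not the sensor, dominates the noise budget, which is the point being made about dropping darks.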
  22. Thank you! The usual; 8'' f/4.4 newtonian + TeleVue Paracorr so effectively f/5
  23. I did mention in my past M106 post that it would be the last image, unless I had forgotten something from the past season. Which I did, this one: 132x 120s, presented here at around 1.1''/px. I think it's worth a full-screen click for the galaxy core alone. Captured in early April in decent seeing; at the time I didn't like how I initially processed it, so I just left it. I still have an extra 4 hours on this with the Antlia Triband filter, which might end up as an Ha layer for this data one day. Still learning how to do that properly, so it did not make this cut. This process is sitting much better with me, but I may have overdone the sharpening on the core. Comments and so on very much welcome, as always. -Oskari