Everything posted by ONIKKINEN

  1. @vlaiv I have had this question for a while and this thread seems to be the place for it. How does the FWHM/1.6 rule take into account different monitor resolutions and observer preferences? Surely there is a subjective component to this as well, since there is no such thing as a typical monitor or a typical observer, and both of those things greatly affect how the image is seen and appreciated. I ask because there is a huge difference between looking at my images on my main monitor, a 27'' 1440p display at roughly 1.5 arm lengths' distance, and on my mobile devices (phone and tablet, both with very high PPI displays). Personally I find it very difficult to process an image at the /1.6 scale, and furthermore find it difficult to actually see the detail in an image processed this way, especially on the mobile devices. I find something between FWHM/1.8 and FWHM/2.2, or even up to FWHM/2.4, acceptable. Example images of what I'm talking about follow. The image below is presented at roughly FWHM/2.4 if I recall correctly. The data was really not good, but it still made an appreciable image. Personally I have trouble seeing all the detail if the image is made any smaller than this, regardless of whether it could be downsampled a bit and then brought back up to full resolution with no loss in detail. Here is a more apples-to-apples comparison, an image of M51 where I did fresh processes of the same dataset at different resolutions. The stars have something weird going on, not at all sure what. The night was a dew nightmare, so I am guessing some purely cosmetic issue arose from that (the data was between 2.0-2.4'' FWHM despite the hideous stars). First try, close to the 1.6 rule: Another version at a higher resolution, probably closer to FWHM/2: To my eyes the second one looks much sharper. For whatever reason I had an easier time working on the higher-resolution image and could sharpen it much further; why do you think that is? Is it that with properly sampled data one has very little leeway in how exactly the sharpening is applied, and so can easily do it wrong?
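A quick worked example of the sampling rates discussed above, as a minimal Python sketch. The FWHM value here is just an assumed number for illustration, not from any of the images in this thread:

```python
# Rough sketch: target presentation scale for a given seeing FWHM,
# using divisors from the ~1.6 rule up to the ~2.4 preferred above.
fwhm_arcsec = 2.2  # assumed measured star FWHM in the stack, in arcseconds
for divisor in (1.6, 2.0, 2.4):
    scale = fwhm_arcsec / divisor  # arcseconds per pixel to present the image at
    print(f"FWHM/{divisor}: present at about {scale:.2f} arcsec/pixel")
```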
  2. The FITS header of the file you linked does not match the header you pasted. The attached file is a stacked image made up of 5 exposures, yet the header in the text you pasted says 11. Do you have many files like this or just the one? If many, you can still stack them even though they are already stacked; it's just not ideal (ideally you would have the individual exposures, but I'm not sure you can get them out of the Seestar).
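If you want to check what each file actually contains, here is a minimal sketch using astropy. The filename is hypothetical, and the keyword that records the stacked-frame count varies by capture software, so treat "STACKCNT" as an assumption and just read whatever count keyword your headers actually use:

```python
from astropy.io import fits

path = "stacked_example.fit"  # hypothetical filename, replace with your own
header = fits.getheader(path)

print(repr(header))                   # dump the full header and inspect it yourself
print(header.get("EXPTIME"))          # exposure time keyword, if present
print(header.get("STACKCNT", "n/a"))  # assumed name for a stacked-frame count keyword
```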
  3. Not too familiar with the Seestar (I have just been occasionally following the discussion), but am I right in assuming that these are all light frames? As in, there are no calibration frames such as darks or flats. If so, you can't actually use most of the scripts in Siril because they assume you have calibration frames, but not to worry: it is actually very simple to stack without them, since we don't have to worry about calibration. I will briefly go over the simplest way to stack them below.
First set your home folder for Siril to be somewhere you want the temporary files and end result to go, using the button in the image below. So maybe create a folder named "siril processing" on a drive with plenty of empty disk space and choose that.
Then import the .FITS files for Siril to use in the "Conversion" tab shown below. Either click the + button and search for the files, or select all of them and drag and drop them into the empty field. You must also give this sequence a name and tick the Debayer button if we are working with colour images (assuming so).
At this point we have to register the images (star alignment). Go to the "Registration" tab pictured below and use the settings shown in the screenshot; the most important parts are highlighted. There is another, more robust way to do registration using two-pass alignment, but maybe don't worry about that for now. To do that you would select the two-pass method and, after it completes, select "apply existing registration" to run it again (it needs to be done in that order).
This next step is optional, but you might as well have a look at this point. The "Plot" tab contains an analysis of the input frames. Here you can de-select the worst images if you want to. If there are massive vertical spikes sticking out like a sore thumb, get rid of them here.
Then finally, stacking itself. I recommend the settings I have below. After all this you have your stacked image. DeepSkyStacker is much simpler, but it doesn't work with every dataset because it's quite picky about quality.
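For anyone who prefers to skip the clicking, the same lights-only workflow can also be expressed as a Siril script. This is only a sketch from memory of Siril's scripting commands (convert/register/stack); option names can differ between Siril versions, so check them against the scripting documentation before relying on it:

```python
# Writes a minimal lights-only Siril script (no calibration frames) to lights_only.ssf.
# Run it from Siril's Scripts menu with the light frames in a "light" folder under the
# working directory. Command syntax is assumed from Siril's scripting docs.
script = """requires 1.2.0
convert light -debayer -out=process
cd process
register light
stack r_light rej 3 3 -norm=addscale -out=../result
"""

with open("lights_only.ssf", "w") as f:
    f.write(script)
```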
  4. And I'd be inclined to agree, although not exactly in the way you described doing it. It will be fairly obvious whether the image is nice or not just by looking at the stacked image in autostretch screen transfer mode in Siril. Which is actually pretty close to what you meant with the blackpoint/gamma thing, since it does just a simple stretch and no other behind-the-scenes trickery. Sharpening and all the AI voodoo is just icing on the cake, but if the cake is made of clay, obviously the AI tricks won't turn it into Sacher cake.
  5. The Skywatcher EQM35-PRO, bought for imaging use with a VX8, which from the golden throne of hindsight was obviously never going to work (and it didn't!). Another notable mention would be the Celestron 84 Wh LiFePO4 battery pack, which had some of its plastic bits disintegrate after the first night and generally would not provide reliable power to anything. It advertises 3 A output, but it shut itself off frequently when powering the aforementioned EQM35. The worst part is that it has a not-so-smart circuit that automatically decides whether power is output or not, and it has a current limit on the low end as well. As a result it was not useful even for powering an LED flat panel, because it decided to cut power after 5 s due to the low load. I didn't use it for several months, and in the end it simply broke itself. Now I can no longer recharge it because, presumably, the same automatic circuit has soiled the bed, this time for good, and it no longer accepts a charge.
  6. Do I see doubled stars on the left side or am I imagining it? Nice picture either way.
  7. Espoo is basically the Helsinki suburbs, on the southern coast. Anyway, for the ADC you may want to read this: https://skyinspector.co.uk/atm-dispersion-corrector-adc/ And this graph from that page in particular is a nice TLDR: We can see here that Saturn will be bad at any altitude for this year and the next few years. Jupiter is OK, but could be better. Using an ADC sharpens the individual colour channels themselves, which does more than just align the colour channels, so it would certainly sharpen the results considerably. Most of the planetary imaging heavy hitters use one for sure. Focus has to be more than perfect, can't stress that enough. This is probably where you can improve the most, if you figure out a way to increase the turning radius of your focuser knob to create a DIY fine-focus reducer.
  8. First of all, your Jupiters here are above average for an 8'' scope; in fact I think they are pretty good. I would be content with those from my 8-inch Newtonian. The Saturn images suffer from Saturn's current low elevation for us here in the north, so the issue is mostly out of your hands. We can also see clear colour separation in them, which is due to the low elevation. You could get an ADC to try to fight this, but the low elevation will still be a major issue.
I see from your location that you are fairly far north if the Scottish border is where you took these images, and I also see that the video you linked was taken from somewhere in the US (not sure where, the channel just says US as the location, but certainly further south than Scotland). Saturn is currently very low in the sky for us northern imagers, and most images will come out looking like yours here (but I still think yours are good!). Jupiter is a bit better positioned, but it is still not ideal with how low in the sky it is. The video example was most likely taken when Jupiter was a good 10-20 or more degrees higher in the sky, which is a huge deal. Seeing plays a bigger role the further north one is, because there is simply more atmosphere in the way to ruin the sharpness, so you need to get very lucky with seeing to get the same results as someone imaging from a southern location. Realistically that means very good sharpness is quite rare and not something you should expect more than a few times a year (if that, depending on location). I have certainly never seen quality like in the video from my location at 60N with my own scope, even when seeing appeared to be very good.
There are other things you can do besides trying to get lucky with seeing, though. First, you should not use the 3x Barlow since that goes way too far into oversampled territory, and in fact so does your 2x Barlow. Ideal sampling for planetary imaging can be approximated as roughly: ideal f-ratio = pixel size in microns x 4, which for you would be f/11.6 (there is a quick worked example of this after this post). Not everyone follows or cares about this guideline (probably most don't), but it gives you an idea of where to head in terms of the Barlow needed. If I were you I would use a weak Barlow placed very close to the sensor to get a lower magnification (the shorter the distance between the Barlow and the sensor, the lower the magnification, except with telecentric Barlows).
By the way, using AutoStakkert!3 for stacking is very important here, since it uses a special type of debayering method which is able to recover the detail lost to the colour filter array and so does not suffer any sharpness loss compared to mono (at the cost of reduced SNR, but worth it). Older planetary stackers do not use this, so the pixel size x 4 rule for colour cameras is not quite right with them. Anyway, AS!3 is the best there is, so it should be used either way.
For the focus thing you might want to get a reduced focuser, like a Crayford with a 10:1 reduction gear or similar. I have seen people do a MacGyver version of this, like in this Dylan O'Donnell video: https://www.youtube.com/watch?app=desktop&v=MNShJXKiRDE Personally I focus at the same gain as the intended imaging gain. I just rack focus in and out repeatedly to see where the sharpest spot is, and try to leave it there. It's not too difficult with a 10:1 reduced Crayford, but it still takes some time.
If seeing is bad then it takes quite long, because the focus point shifts all over the place (but in those cases the end result is typically not very good anyway). At the best possible focus I typically see one or more of Jupiter's moons blink in and out of existence with the seeing, and outside that focus they are seen less often or not at all. Oh, and I usually can't tell from the quality of the recording whether the end image will turn out good or not. I have had only one night where the recording looked good too, and that turned out to be the best image I have gotten with my scope, but those nights are probably less than 5% of the nights I'm out with the scope, so not something I am holding my breath for.
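A minimal sketch of the sampling rule mentioned in the post above. The pixel size and native f-ratio here are assumed values for illustration, not anyone's actual setup:

```python
# Ideal f-ratio ~ 4 x pixel size (in microns) for a colour camera stacked in AS!3.
pixel_size_um = 2.9   # assumed camera pixel size in microns
native_f_ratio = 5.0  # assumed telescope f-ratio

ideal_f_ratio = 4 * pixel_size_um
barlow_needed = ideal_f_ratio / native_f_ratio
print(f"Aim for roughly f/{ideal_f_ratio:.1f}, i.e. about a {barlow_needed:.1f}x Barlow.")
```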
  9. These could be easy to remove by simply lassoing them out with content-aware fill in Photoshop. I say could, because it depends on whether StarXTerminator (or Starnet) dumps the spike into the starless layer or the star layer. If it lands in the starless layer it should be easy, because it will be trivial to lasso out. If it is in the star layer, then it's a lot more work to dodge the stars.
  10. Those rings are faint enough that I wonder if they are even easily visible once you add the stars back in? If the goal is to have stars in the final result, that is. Some clever use of the clone stamp tool just to break the shape of the circle (not to completely remove it) would probably also hide it well enough that it is less recognizable as an artifact. *Another fix is to paint them out with the lasso tool and content-aware fill in Photoshop. That seems to have worked well for the JPEG above.
  11. Was not aware that ASTAP had this option. Have to try it now for sure!
  12. Probably not the answer you were hoping for, but paid software makes short work of mosaics (and is the least headache-inducing). Astro Pixel Processor works well for most simple mosaics, and the Photometric Mosaic script in PixInsight will surely stitch any panels together if they can be stitched at all. Astro Pixel Processor can be rented for, I believe, around 60€/year last I checked, and it has a free trial too. PixInsight costs 250€ plus tax, but they also offer a free trial on request. The tool in PixInsight is more complicated to run and requires you to read the instructions thoroughly. Mosaicing in Astro Pixel Processor is a lot easier in that you first just stack the panels individually and then re-run integration with the panels in mosaic mode.
  13. Thank you! I wasn't sure how to feel about the close-ups, so I am glad you liked them. I think I suffer a bit too much from pixel-peeper syndrome.
  14. Taken with an 8'' Newtonian + APM coma-correcting Barlow and a ZWO ASI678MC. Average seeing, but some possible imaging train issues with the setup that could have thrown a wrench in the works (not enough focus travel). Still not the worst Jupiter I have taken. -Oskari
  15. All taken with my 8'' carbon Newtonian and an ASI678MC. The full-disk image was taken at prime focus through a Paracorr at 1018 mm focal length, the Barlowed ones through an APM 2.7x coma-correcting Barlow at a reduced, roughly 2.5x Barlow factor. 3-panel mosaic, 3000 frames each with the best 20% stacked: Archimedes, Aristillus, Autolycus and surroundings: Ptolemaeus, Alphonsus, Arzachel and surroundings: The Barlowed images have a slightly comatic look to them, especially towards the top left. I had rearranged some focuser accessories and could no longer reach focus with the imaging train fully inserted, so the Barlow train was hanging slightly out of the focuser, which is the likely culprit. Something to fix for the next outing. The mosaic came out well, but this imaging scale is very forgiving, so it often does. Really enjoying the 678MC + 1018 mm focal length combo for lunar work, with its relative ease of producing a full-disk image. Still 80 gigabytes though... Seeing was average at best, with very thin high cloud above which messed with the levels occasionally. It still could have been a lot worse, so no major complaints. -Oskari
  16. The rest of your images are probably not stackable in this case, at least from DSS's point of view. Check the list of images after registration and see if there are ones with a very low number of detected stars compared to the rest. If there are, then these will probably not stack no matter what. DSS is quite picky with images and won't stack bad subs, but you could try some other stacking software and see if they can be salvaged. Siril is a bit more complicated to use, but you can force it to stack pretty bad subs too if you change the star detection settings. But I should point out that the subs probably shouldn't be stacked if DSS rejects them, because they will end up hurting the end result if mixed in with good subs.
  17. Detail is better in the rework, but I have to say I prefer the original overall, with its pleasing colour palette. The rework looks a bit too blue, and too saturated in general, with the Ha regions in particular contrasting harshly against their surroundings. Still a damn good M31, I should say.
  18. Time to update the IT security level of the observatory then. The attacker only has to get lucky once, but the IT security system needs to get lucky every time in order to stay secure. Why someone would hack an observatory is beyond me, though, unless it was done just because they could. I suppose it could be the typical extortion racket where the hacker hopes the victim just wants to pay to make the problem go away instead of potentially losing data, which could be valuable since you can't always just go and re-take the data in a timely fashion.
  19. Here is how I do it: first I stretch the image in Siril to a point where I think the stars are close to where I want them, but you could do it in any software of course (I do it in Siril because I do preprocessing there anyway).
Open the image in Photoshop and duplicate it twice to create 2 extra layers.
Run StarXTerminator on the top layer.
Duplicate the starless layer once the process has completed, then hide the top starless layer.
Put the other, unhidden, starless layer into blend mode "Subtract" and merge down.
Now I have a starless layer, a stars-only layer, and the original image as background, mostly for comparison purposes. Once I have processed both layers the way I like, I combine the starless and stars-only layers using the blend mode "Linear Dodge (Add)" and merge them. In my opinion, simply adding the star layer back instead of screening the starless layer in front of it yields a more natural-looking end result. With the blend mode set to Screen you can end up with transparent-looking stars, or stars that are hidden behind nebulosity. Entirely a matter of taste of course, and I think I may hold the minority opinion here in preferring Linear Dodge over Screen. I have created a GIF of the difference between screening nebulosity in front of stars and simply adding them: The difference is barely noticeable outside nebulosity, but very easy to see for stars that sit in front of nebulosity and appear muted with the blend mode set to Screen.
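For anyone curious about the maths behind those blend modes, here is a minimal numpy sketch. The arrays are assumed to be float images normalised to 0-1, and the .npy filenames are hypothetical stand-ins for however you load your layers:

```python
import numpy as np

# Hypothetical inputs: the original stretched image and the StarXTerminator output,
# both as float arrays in the 0..1 range.
original = np.load("original.npy")
starless = np.load("starless.npy")

stars = np.clip(original - starless, 0, 1)       # "Subtract": the stars-only layer

linear_dodge = np.clip(starless + stars, 0, 1)   # "Linear Dodge (Add)" recombination
screen = 1 - (1 - starless) * (1 - stars)        # "Screen" recombination for comparison
```

The two results differ most where both layers are bright, which is why stars sitting on nebulosity look muted with Screen but not with Linear Dodge.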
  20. Took the words right out of my mouth. Well, I am biased since I have an 8'' Newtonian, but I think this is as close to a jack-of-all-trades scope as you can get. Planetary and lunar are easy with the right corrector/camera combo, as is high-ish resolution, seeing-limited DSO imaging with a good quality coma corrector. A wider field is plausible with something like the Starizona 0.75x corrector, although I don't have one myself so maybe I shouldn't advocate for it. Mosaics at f/5 are still reasonably fast, although not competing against the RASA. I think a Newtonian still has the best chance of being a scope that can at least somewhat cover all imaging purposes.
  21. Siril will happily stack images with any (reasonable) difference in pixel scale into a single stacked image. I have stacked 0.76''/pixel, 0.91''/pixel and 1''/pixel images into a single file and it all worked as it should. It's not always a good idea, because there can be signal-to-noise ratio issues if the datasets are too far apart in quality, but your 2 cameras are reasonably similar, so I'd say it will work out fine. There is one thing to keep in mind, and that is the end resolution you want to go for. Whatever image you import as the first file in the input sequence to Siril will be the reference image, and all images will be warped and transformed to match it. So if you want the stack to have the pixel scale of the 533, input a 533 sub first, or vice versa for the 585 data.
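A trivial sketch of that ordering idea, assuming the filenames contain the camera name; the "533" pattern, the "lights" folder and the .fit extension are all just assumptions about how the files happen to be named:

```python
from pathlib import Path

# Put one 533 sub first so Siril uses its pixel scale as the reference frame,
# then list everything else after it.
subs = sorted(Path("lights").glob("*.fit"))
ordered = sorted(subs, key=lambda p: 0 if "533" in p.name else 1)
for p in ordered:
    print(p)  # add the files to Siril's Conversion tab in this order
```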
  22. The Backyard Universe spider assembly is that CNC-machined solid piece, right? I don't see how that could do this, since it pretty much guarantees a secondary that is central to the tube. How high is the target? I can't figure out from the example image what it is. Anywhere within, I would say, 10-15 degrees of Polaris will be tricky if there is cone error, and personally I think it's too much trouble to shoot there. My setup won't even really plate solve and go-to properly too close to the pole. Wish I could offer better advice for the last part, but I have had repeated weird issues from night #1 onwards, more than 3 years ago 😐. I have accepted that this is how it goes, for better or for worse.
  23. If you have a lot of cone error, you get this. Also, the closer you point to the celestial pole, the stronger the effect becomes. My setup has enough cone error to cause a roughly 2-degree rotation change pre/post meridian when imaging M81, for example, but the difference in diffraction spikes is not really noticeable at all in that case, so you probably have a serious amount. I think severe polar alignment issues will also cause "effective cone error" like this, but it would have to be pretty bad and you would notice it in guiding performance. Since it's a Newtonian, you could be out of collimation, which will introduce cone error since the light path is then not exactly central through the mirrors. Has this ever occurred before? If not, my bet is that something was knocked out of alignment. Or it could be something too simple to think of, like the scope not sitting well in the mount clamp, or something like that.
  24. There was an update a few days ago and something may have been broken by it. A number of people have reported issues with tools related to mosaics, for example. But it will be slow anyway; at least it is way too slow on my PC, so I don't bother. I keep the Astrometric Solution box unticked in the Lights tab in WBPP when I stack with it, so it doesn't stall on that process forever.