Everything posted by The Lazy Astronomer

  1. Be aware that the flattener is an absolute necessity for all but the smallest of sensors (or the very centre of a larger sensor), due to strong field curvature with this scope.
  2. You could add a Barlow to a really small scope, but it dims the "image" (for lack of a better word) at the eyepiece. So higher f ratio = dimmer image. Extreme hypothetical situation: use Barlows to make a 50mm f40 scope (2000mm focal length) vs a 400mm f5 scope (also 2000mm focal length) - I know which one I'd rather look through!
  3. Mexico! I'll be following all your content with interest*, as I keep thinking I want a mid-ish sized Newtonian for a bit more reach. *and seriously, how do you get out so often? I haven't even seen the sky for the last 2 weeks, let alone the stars!
  4. Perhaps not very helpful, but I use NINA and have not experienced a single issue.
  5. Damn those are some well controlled stars! Looking forward to seeing the finished product.
  6. I played around with it a bit at the weekend. It seems to work very nicely, although I think I'll have to go back and re-read the how to documentation a few more times, because a lot of it went straight over my head! My initial thoughts are it seems to give similar results to the way Startools' autodev module stretches data, but perhaps with a little more fine control over how the stretch is executed. Little more fiddling around and learning required, but pleased so far - thanks to both of you for your efforts in developing the script.
  7. For capture, really almost anything will do. USB3 and an SSD are a bonus (particularly for high frame rate planetary), but not essential. If you have one lying around, any old laptop from the last 5 - 10 years will likely suffice - I use an 8 year old laptop, and the only upgrade I made was to swap the slow HDD for an old 256GB SSD that I was no longer using. My planetary captures tend to be in the range of 20 - 50GB, and my deep-sky camera creates files approx 23MB in size, so that's more than enough space for several nights of imaging (files are deleted from the capture PC once transferred to the processing PC). Processing is a bit of a different story: here, ideally you'd want the most powerful PC you can afford (or plenty of patience). I'm generally not a fan of working on laptops, so I won't make any specific recommendations, but I think you're in the right ballpark with what you're looking at already. For longer term storage of raw data, large external HDDs can be had quite cheaply, so don't worry too much about the size of the storage in the laptop itself. On AMD: the Ryzen CPUs tend to top the Pixinsight benchmark tests, so certainly no problems with them (and I say this as a lifelong Intel fan!).
  8. Same here, HOO is my preference too (both very nice though!)
  9. Side note @vlaiv: where does the 1.6" figure come from? I tend to follow it when deciding how much to bin my images, but I don't really know "why" in terms of the theory behind it. You've probably explained that about 1000 times already before too 😁 (in fact, you may have already explained it specifically to me before! Apologies if so, my memory can be very selective)
  10. Nice detail from a small scope - I tried this with a very similar focal length a little while ago and got nowhere near the amount of dust you've got here. If I was to make one criticism (and it may just be my phone's screen), I'd say maybe the background has a greenish tinge to it. If you're in the market for a triplet with ~500mm focal length, have you considered the Esprit 100? I think it's probably one of the best triplets in terms of quality vs price. Saying that, the Skywatcher price rises earlier in the year may have changed that somewhat, but I'd still consider it.
  11. You don't have to guide, but it will help when doing long exposures. When properly aligned, the mount will track the object through the sky, but it will not be capable of doing so with the precision required for long exposure astrophotography. Depending on the setup, you'll probably find that without guiding, you'll only get somewhere around 60 seconds before you get star trailing in the image. With guiding (assuming it's all set up correctly), you should be able to expose for many minutes without a problem. Guiding also allows you to add in a technique called dithering, where the telescope+camera are moved by a few pixels every so often - this helps to remove anomalous pixels from the final image when the data is later stacked (with an appropriate clipping algorithm).
  12. Yeah, tell me about it! Every time I got to the end of the process, I'd think "this is the one", then look again the following day and think I could do better! I am officially done with this one now though - almost sick of having to look at it 😅
  13. Thanks! For the colour masks, I used the ColorMask script and also the GAME script, which essentially allows you to draw custom masks of various shapes and sizes.
  14. Thanks, glad you enjoyed it ☺ You're right about number 5, I can't really see the artifacts now - I think maybe a combination of resampling and jpeg compression may be helping me there!
  15. Hello all,

A few months back, I captured a not insignificant amount of narrowband data on Sh2-101 (my first ever full narrowband image, too!), but upon processing, I wasn't getting the sort of results I was hoping for from my usual post-processing software (Startools), so I decided it was finally time to give in and attempt to learn Pixinsight (something to do, at least, over the long, cloudy November I've had...).

I've spent the entirety of my free trial period so far endlessly reprocessing this same image, which is probably a good thing, as it has allowed me to directly compare my various efforts and techniques (more on that below), and I have to say, Pixinsight is not nearly as scary and opaque as I think it's made out to be - after following a couple of Youtube videos (which netted me some limited success), I found the various tutorials and explanations from Light Vortex Astronomy, Jon Rista and Adam Block extremely helpful; they got me on my way with the basics I needed, coupled with the use of some of the EZ processing suite.

Generally, I think I got on quite well with Pixinsight, but its huge array of tools does leave me wondering whether I'm leaving something on the table when it comes to getting the most out of the data; however, I suppose this does leave plenty of room to grow and learn new techniques. I'm pretty much sold on it, so I will shortly be purchasing a full licence, and I think a copy of Mastering Pixinsight might just be on the Christmas list this year.

So with all that said, I thought I'd post up each of my processing attempts (6 in total) to show what I hope you'll agree is some good progress in the right direction. More than happy to take any further suggestions if anyone has tips on how to improve further (+ let me know your favourite). The total integration time was 15h40m, made up of 4h05m SII, 5h10m Ha and 6h25m OIII (all 5min subs) with 6nm Astronomik filters and a 294MM.
All images have been resampled to 50% original size.

ATTEMPT 1
DynamicCrop, DBE (badly done), EZ Denoise on Ha (badly done), straight HSO combine in PixelMath, HT, SCNR, various curves for colours, hue, saturation, etc., LRGBCombination (Ha as L), EZ star reduction, and some other minor tweaks. I thought it was ok at the time (and roughly on a par with what I was getting from Startools), but looking now, it's clearly not great...

ATTEMPT 2
Largely the same process as before, with some different curves adjustments. Some elements of this I preferred to the first one, although I ended up with these horrible yellowy-orange stars, and the colour in the Tulip itself looks worse to my eyes.

ATTEMPT 3
After a bit more reading up on some techniques, and playing around with different ways of stretching, I ended up with this, which I felt was a significant improvement on the first 2 attempts, and revealed a lot more of the faint stuff I was looking for, but it ended up far too yellow/orange/brown overall, and I still wasn't really happy with the results of EZ Denoise.

ATTEMPT 4
After reading some Light Vortex tutorials on noise reduction, I had a stab at doing it manually instead of via EZ Denoise, as well as using colour masks to selectively alter colours. I also resisted the temptation to use SCNR, to try and prevent essentially turning it into a bicolour image (plus I quite liked the greenish hues); however, I subsequently decided the overall colour was far too gaudy, and I still wasn't quite happy with the noise reduction in parts. Plus, I found the stars somewhat overwhelming.

ATTEMPT 5
Completely back to the drawing board on this one: read up on some noise reduction techniques from Jon Rista (TGV & 2 passes of MMT, followed by iterative stretching and ACDNR), and played around with different PixelMath combinations to try and get closer to the colours I wanted without having to do as much subjective adjustment later on. Also made use of Starnet to remove stars and add them back in from a less stretched version later on. This was going to be my final version, but when I started looking closer, I think I went a bit too far with the noise reduction, and although I like the stars, I don't think they sit well in the image (some dark ringing visible). Plus, I decided I wanted to try some deconvolution on the Ha layer as well.

ATTEMPT 6
Sixth and final attempt (well done for making it to the end, if indeed you are still reading!): backed off on the noise reduction, used EZ Decon, used the previous attempt as the base for the colour layer with some minor tweaks, and stretched the star image a bit further, which seems to have allowed the stars to sit a bit more naturally within the image. In general, for these last two attempts, I tried to resist the temptation to turn the colour up to 11, and went for something a bit more neutral (for lack of a better word).

Sorry for the extremely long post, but thanks for reading, and let me know your thoughts/favourite! Cheers!
  16. Just a guess here, but it could be a stacking artifact caused by the fact that the edges of the individual frames do not perfectly align. Was this done with a tracking mount or on a static tripod?
  17. The speed of the storage space shouldn't matter - it should still keep going until it has saved the desired number of frames. (A while ago I was doing some planetary with an old, slowwww HDD - the software dropped more frames than it saved, but it never stopped capturing until the target number of frames had been saved.)
  18. Yep, I'll be hearing the Jaws title theme in my head for hours now!
  19. From the look of it, I'm taking a punt at Startools having been the software used. It tends to give darker backgrounds than other software, but the wipe module usually works really well, so I'd be surprised if it wasn't effective at removing any gradients in the background. @Grant93: assuming you did use Startools, did you make use of the superstructure module at all? I found that it can sometimes create the appearance of fairly aggressive noise reduction.
  20. This is probably my favourite image I've seen of this target (I agree with you about how it's normally represented: bland pinkish-red blob). The puritans may scoff at the colour rendition, but I think it makes it much more interesting. You've inspired me to add it to my own list for the future. ☺
  21. Don't get me wrong, I agree with you (and in fact base my own settings on the advice in the videos you linked), but that doesn't mean I'm not interested in seeing the results of someone else's work.
  22. I'm all for experimentation when it comes to things like this - that is the scientific method after all. I look forward to seeing the finished product 👍
  23. You may be able to get away without using any darks; however, they should still provide a benefit by removing bias signal, correcting hot pixels, etc.
  24. Agree. Expose until background sky level swamps camera read noise. Once that point is reached, exposing for longer doesn't really gain anything, plus it may saturate a (relatively) large number of pixels, losing colour information in stars and bright areas.
  25. All calibration frames (that I assume you've been taking for your 1600) still apply, and the process of doing it is no different - if anything, slightly easier because you won't have to do separate sets of flats and flat darks for different filters.
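The brightness argument in post 2 can be put into numbers: for visual use, the surface brightness of an extended object at the eyepiece scales with the square of the exit pupil (aperture divided by magnification, equivalently eyepiece focal length divided by f-ratio). A minimal sketch using the two hypothetical scopes from that post; the 25mm eyepiece focal length is an assumed example value, not from the original.

```python
# Exit pupil and relative image brightness for the two hypothetical
# 2000mm focal length scopes from post 2, viewed with the same eyepiece.
# The 25mm eyepiece focal length is an assumed example value.

def exit_pupil_mm(aperture_mm, focal_length_mm, eyepiece_mm):
    """Exit pupil = aperture / magnification = eyepiece focal length / f-ratio."""
    magnification = focal_length_mm / eyepiece_mm
    return aperture_mm / magnification

small_scope = exit_pupil_mm(50, 2000, 25)   # 50mm f/40 Barlowed scope
big_scope = exit_pupil_mm(400, 2000, 25)    # 400mm f/5 scope

# Surface brightness of extended objects scales with exit pupil squared.
brightness_ratio = (big_scope / small_scope) ** 2

print(f"50mm f/40 exit pupil: {small_scope:.3f} mm")   # 0.625 mm
print(f"400mm f/5 exit pupil: {big_scope:.3f} mm")     # 5.000 mm
print(f"Brightness ratio: {brightness_ratio:.0f}x")    # 64x
```

At the same 2000mm focal length, the f/5 scope delivers an image 64 times brighter - which is the point being made.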
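The dithering point in post 11 can be sketched with a toy kappa-sigma clip: because dithering shifts the frames, a hot pixel lands on a different sky position in each sub, so at any given position it appears in only one frame and stands out from the stack. This is a simplified pure-Python illustration (the function name, kappa value and pixel values are all invented for the example), not any particular stacking software's algorithm.

```python
import statistics

def sigma_clipped_mean(values, kappa=2.5):
    """Average a pixel stack, rejecting outliers beyond kappa * stddev.

    Toy version of kappa-sigma clipping: values far from the stack mean
    (here, a hot pixel that only lands on this sky position in one
    dithered sub) are discarded before averaging.
    """
    mean = statistics.mean(values)
    std = statistics.pstdev(values)
    kept = [v for v in values if abs(v - mean) <= kappa * std] or values
    return statistics.mean(kept)

# Ten dithered subs of the same sky position; one frame has a hot pixel there.
stack = [100, 102, 99, 101, 100, 98, 101, 100, 5000, 99]

print(statistics.mean(stack))        # plain mean dragged up by the hot pixel
print(sigma_clipped_mean(stack))     # hot pixel rejected -> 100
```

Without dithering, the hot pixel would sit on the same sky position in every sub and the clip would have nothing to reject - hence the combination of the two techniques.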
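The "swamp the read noise" rule in post 24 can be made numeric. One common rule of thumb (there are several variants) says a sub is long enough once the sky background signal per sub reaches about 10x the read noise squared, at which point read noise only adds a few percent to the total noise. A sketch under that assumption; the camera and sky values below are made-up examples, not measurements.

```python
def min_sub_exposure_s(read_noise_e, sky_rate_e_per_s, swamp_factor=10):
    """Sub length at which sky shot noise swamps camera read noise.

    Uses the common rule of thumb that sky background signal per sub
    should reach swamp_factor * read_noise^2 electrons; beyond that,
    longer subs gain little and risk saturating bright stars.
    """
    return swamp_factor * read_noise_e ** 2 / sky_rate_e_per_s

# Made-up example values: 1.7 e- read noise; sky rates for moderate
# light pollution (broadband) vs a narrowband filter.
print(min_sub_exposure_s(1.7, 2.0))   # bright sky: short subs suffice
print(min_sub_exposure_s(1.7, 0.1))   # narrowband: much longer subs needed
```

This also matches post 25's context: narrowband filters cut the sky rate dramatically, which is why narrowband imagers tend to use much longer subs than broadband imagers with the same camera.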