Everything posted by endless-sky

  1. I was suggesting 10 as a swamping factor because I was under the impression that it meant the read noise in the final stack only contributed about 5%, compared to the other noise sources. Does that correspond to your calculations, Vlaiv? Of course, if a factor of 25 is 2.5x better, so that the 5% drops to about 2%, that's even better. But the more we push the exposure, the less dynamic range we are left with, so we have to compromise. To me, 5% residual read noise in the total stack seemed like a good enough compromise.
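For anyone who wants to check the arithmetic behind those percentages, here is a minimal back-of-the-envelope sketch (my own check, not Vlaiv's calculation), assuming the sky background per sub is swamp * RN^2 electrons:

```python
import math

# If the background is swamp * RN^2 electrons, its shot noise is RN * sqrt(swamp),
# and adding the read noise in quadrature gives a total of RN * sqrt(swamp + 1).
# The "penalty" is the fractional increase caused by the read noise.
def read_noise_penalty(swamp):
    """Fractional increase in total per-pixel noise caused by read noise."""
    return math.sqrt((swamp + 1) / swamp) - 1

for swamp in (5, 10, 25):
    print(f"swamp factor {swamp:>2}: +{read_noise_penalty(swamp) * 100:.1f}% total noise")
# swamp factor  5: +9.5%
# swamp factor 10: +4.9%
# swamp factor 25: +2.0%
```

So a swamping factor of 10 does leave roughly a 5% read-noise contribution per sub, and 25 brings it down to about 2%.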
  2. I use KStars and EKOS (because I control my equipment with a Raspberry Pi). Their FITS Viewer gives you the mean ADU of the picture as soon as it's completed and downloaded, so I can compare the ADU of the actual shot to the theoretical ADU that I know my camera works best at. For Windows, maybe N.I.N.A. offers the option of seeing the ADU of a RAW shot without stretching/converting it, but I have never used it, so I am not 100% sure. This is the formula to calculate your target ADU:
     DN = (Nread^2 * Swamp / Gain + Offset) * (2^16 / 2^Bits)
     Where:
     DN = required background signal in 16-bit DN
     Nread = read noise in e-
     Swamp = swamping factor
     Gain = camera gain in e-/ADU
     Offset = bias offset in ADU
     Bits = ADC bit depth
     Credit: Jon Rista.
     You'll first have to determine the optimal ISO for your camera (I don't have any experience with it, so I am not sure), then find the corresponding Gain (in e-/ADU) and Read Noise (in e-). The Offset is the mean ADU value you get when you take a bias. Swamp should be between 5 (minimum) and 10 (maximum). Hope this helps!
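If it helps, here is a small Python sketch of that formula. The numbers in the example call are purely hypothetical placeholders; substitute your own camera's read noise, gain, bias offset and ADC bit depth:

```python
# Sketch of the formula above (credit: Jon Rista). All example values below
# are hypothetical placeholders, not measurements from any particular camera.
def target_background_dn(read_noise_e, swamp, gain_e_per_adu, offset_adu, adc_bits):
    """Required mean background level, expressed in 16-bit DN."""
    background_adu = read_noise_e**2 * swamp / gain_e_per_adu + offset_adu
    return background_adu * (2**16 / 2**adc_bits)

# e.g. 3 e- read noise, swamp factor 10, gain 1.0 e-/ADU,
# 600 ADU bias offset, 14-bit ADC:
print(target_background_dn(read_noise_e=3.0, swamp=10,
                           gain_e_per_adu=1.0, offset_adu=600, adc_bits=14))
# -> 2760.0 DN on the 16-bit scale
```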
  3. Hi, the "fast" rule is to expose until the histogram peak, as seen from the back of the camera, is detached from the left edge by about 1/4 to 1/3 of the histogram width. This keeps the read noise contribution of your camera/chosen ISO down to a small percentage in your final stack. The "long" rule is to determine the read noise of your camera at your optimal ISO (photonstophotos.net is a great resource for this, or you can use the BasicCCDParameters script, if you have PixInsight), and then expose so that the mean background signal is around 5*RN^2 to 10*RN^2 electrons (these are the limits I usually try to respect). Lower exposures imply that you do not swamp the read noise enough and that you won't be able to pull out all the details that your sky conditions allow for. Higher exposures imply that you are blowing out (clipping) too many stars and also reducing your total dynamic range (because you are shifting the histogram to the right, leaving it less room to stretch).
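As a quick worked example of the "long" rule (the 3 e- read noise below is just a hypothetical value, not a measurement for any specific camera):

```python
# Hypothetical read noise, for illustration only
read_noise = 3.0  # e-
low, high = 5 * read_noise**2, 10 * read_noise**2
print(f"Aim for a mean sky background of roughly {low:.0f}-{high:.0f} e- per sub")
# -> roughly 45-90 e- per sub; convert to ADU with the formula quoted in the previous post
```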
  4. Hello, for those like me who won't be able to view the event of the century (or of a lifetime, for that matter), let alone image it, because our beloved weather couldn't have picked a better moment to deliver full cloud coverage for a 300-mile radius around where we live: Telescope Live is hosting a free live broadcast of the event from various locations around the world (hopefully one of them will be cloud free), both on YouTube and on their own platform. I am by no means affiliated with the website or the services they provide. I just thought I would share a chance to view the event for people like me who would have gladly gone out to see it with their own equipment, but cannot because of "beautiful, cloudy, foggy, humid skies". Here's the link with all the details. I know I'll either be outside (if I get an early Christmas miracle) or in front of my PC screen watching this. Matteo
  5. Thank you very much for the kind comment, Lee!
  6. I would like to share my second image. This is NGC 1499, also known as the California Nebula, taken over 4 nights under my Bortle 5 home sky. Total integration time: 10h 21m 00s. Here are the acquisition details:
     Mount: Sky-Watcher NEQ6 Pro
     Telescope: Tecnosky 80/480 APO FPL53 Triplet OWL Series
     Camera: Nikon D5300 astromodified
     Reducer/flattener: Tecnosky 4 elements, 0.8x
     Guide-scope: Artesky UltraGuide 60mm f/4
     Guide-camera: ZWO ASI 224MC
     2020/11/06: Number of subs/Exposure time: 23@180s + 41@240s. Notes: L-Pro filter, Moon 67% illuminated.
     2020/11/08: Number of subs/Exposure time: 40@240s. Notes: L-Pro filter, Moon 46% illuminated.
     2020/11/09: Number of subs/Exposure time: 40@240s + 4@300s. Notes: L-Pro filter, Moon 35% illuminated.
     2020/11/12: Number of subs/Exposure time: 12@240s. Notes: L-Pro filter, Moon 8% illuminated.
     Total exposure time: 37260s = 10h 21m 00s.
     Pre- and post-processing: PixInsight 1.8.8-6.
     Here's a link to the full resolution image: California Nebula (NGC 1499)
     Thanks for looking! C&C welcome!
  7. It can be many things. Polar alignment error would give you drift in DEC. Interrupting guiding would also allow for drift, including RA periodic error, where the mount accelerates/decelerates relative to the sidereal rate because of gear imperfections. Then there is the infamous differential flexure, which I experienced and had to fight firsthand with a guide scope. Differential flexure cannot be completely eliminated (with a guide-scope setup), only limited enough that it doesn't show at your image scale with your normal exposure times. I got it tight enough that I can comfortably shoot 600s subs with the same star shape as a 60s sub. However, it's still there: if I blink several 600s subs without star aligning them first, the same star will eventually move quite a few pixels. You also mention dithering. This itself will cause movement between images - that's its main purpose, anyway: to create a random movement in RA/DEC of however many pixels the user deems necessary. Using a DSLR, I usually dither 10-15 pixels (in the imaging camera), so obviously the last picture will never have the stars in the same positions as the first picture. But that's true for every dithered frame.
  8. Thank you, Adrian. EZ Processing Suite definitely has its perks. I know doing things manually usually produces better results, but in the meantime, while I learn to do things on my own, it can't hurt to see what results my data is capable of using processes already created by people more advanced and skilled than me. At least I know what to aim for. As for you moving, lucky you! I am in a Bortle 5/6 as well, and I only got a chance to shoot from a Bortle 3/4 sky once, in February, from the mountains about 2 hours away from where I live (we spent the night there). I shot M42 with the same equipment I used from home (D90 + 70-300mm zoom lens at 300mm, on the NEQ6 Pro). Same total integration time (about 1 hour), and the result was much, much better than the same 1 hour taken from home (same gear). I also tried combining the data, but the sum of the two sessions was worse than the single integration from the darker sky - so you are definitely right about that! I wouldn't throw away your previous data, though, if I were you. One day you'll want to look at it, to see where you started and how far you have come. I keep all my pictures (the final results from post-processing and the original RAW single frames, just in case I want to reprocess them from scratch to test how much my processing skills have improved over time - I throw away all the intermediate calibration files). Hard disks nowadays are not so expensive. Keep your old data; it always holds some value, as a reminder of how many nights you spent out there in the cold, memories that cannot be deleted. Is the new data from the darker sky going to be better? I can bet it is, given my own experience. Can it replace your old experiences/memories if you delete them all? No. It can only give you new ones.
  9. I tried splitting the channels, but the results are even worse. Looks like there's a lot of work to do; I'll definitely try harder on my next image, maybe bringing StarNet in at the beginning of the workflow. DSLR stars can really be overwhelming, especially with no narrowband filters. For now, I took the easy (or rather, EZ, as in EZ Processing Suite for PixInsight) way out and gave EZ Star Reduction a try, directly on the final step of the image posted at the beginning of the thread. I have to say that I am really impressed by the results. If it can do such a good job at the end, after all the stretches, saturation increases, curves, etc., it might be worth trying it in the initial stages of the workflow, too. Here is the result with EZ Star Reduction (Adam Block's De-Emphasis method - I tried Morphological Selection too, but got worse results): And here's the original, just so you don't have to scroll all the way to the top: I think the star-reduced version is much better; the stars are not as overwhelming and the nebulosity stands out more. If I don't manage to get "manual" star reduction to work, I'll definitely keep using EZ Star Reduction in my workflows!
  10. Unfortunately I don't know anything about NUCs. If, however, you are willing to do without Windows, a (very cheap) alternative is using a Raspberry Pi 4 4GB (which I have been using for quite a few months; the newer 8GB version would be even better) and the KStars/EKOS suite (a planetarium plus everything you'll ever need to control all your gear). If you are not much into Linux, there's a precompiled image/package (Astroberry) that comes with KStars/EKOS already installed, along with a lot of other useful astronomy-related software. All you need to do is flash the image onto a micro SD card and attach a mouse, keyboard and monitor (it needs an HDMI input). If no external input is used, Astroberry defaults to a Wi-Fi hotspot when you boot up and comes preconfigured with a VNC server. All you have to do is install VNC Viewer on another computer and then you'll be able to remote-desktop into the Pi and configure your software/gear. The default IP of the Astroberry can be found on the Astroberry wiki. As I said, I have been using this solution for quite a few months (since March) and never had any issue with any of my gear (Sky-Watcher NEQ6, Nikon D5300, joypad, ZWO ASI 224MC, etc.).
  11. Certainly an approach worth pursuing. I am still in the middle of the working week, so I probably won't take a serious stab at this before the weekend, but I plan to reprocess the whole image taking all your tips into account. Fortunately I saved some intermediate steps, one of which is exactly the one needed: end of step 2), before going non-linear. So I can start from there.
  12. I keep repeating myself that... 😅 I'll keep trying, I know I am stubborn enough to keep going at something, especially when I think it's worth it. And I do think this technique is very well worth it. If I manage to find the combination that works, this could open up so many possibilities. One of which could even be taking a series of short exposures just for the star field and replacing the one from the original picture entirely. Goodbye saturation problems - M42, M45, the Horsehead Nebula, just to name a few examples. Plenty of cloudy/foggy/humid nights, in this period. So time for processing is something I have in abundance.
  13. Ok, making some progress. However, it seems the moment in which one decides to use StarNet is of vital importance with respect to the final result: if done too early (not enough stretches), then it leaves artifacts behind that get only worse and more evident with the following stretches. If done too late (after too many initial stretches), the stars will be quite "fat" and when StarNet removes them, they will leave "black holes" behind, where they were supposed to be. The end result, in this case, is that when I paste the blend layer (the linear duplicate with the stars), after minor stretches to just bring out the stars without making them too fat, they will all have a dark halo around them (the black holes left by StarNet in the starless image). I think I got the gist of it, now it's just a matter of finding the correct balance between how much or how little to stretch before applying StarNet, and how little or how much to stretch the blend layer before pasting it back. Definitely not easy...
  14. Thank you very much for the invaluable info, Adrian. I will try your method for sure - as a matter of fact, I just got back from work and I am going to give it a try right now. I'll follow along with your workflow and update you on the results. In the likely event that I make a mistake somewhere, I'll post the problems I encounter here. As far as pastel vs "in your face" goes, I also prefer pastel, subtle colors. I hope I manage to show that with my picture. A previous one I post-processed, a very wide field of the Cygnus area (taken with a 50mm prime lens), was way oversaturated compared to the photo I posted here. Looking back at it, I don't like it at all. Time for some post-processing, now... 🙂
  15. No worries, thanks for answering, and very nice image, by the way! I don't know if I understood correctly, but I'll write down how I understood it, applied to a DSLR workflow (no narrowband filters, just a light pollution filter). Then, if you have time and are willing, you can tell me if I got it right. Here's my current workflow:
     1) Crop, DynamicBackgroundExtraction, BackgroundNormalization, ColorCalibration
     1b) Deconvolution: as I said, I mainly give it a try, but end up discarding the results
     2) TGVDenoise, MMT (yes, in linear, but I found that, following Jon Rista's tutorials, it's the only way my background does not end up with an orange-peel effect)
     3) Stretch (I usually just apply the ScreenTransferFunction autostretch to HistogramTransformation, which seems to work quite well)
     4) More denoise if needed (ACDNR, TGVDenoise and MMT again: still going by Jon Rista's tutorials)
     5a) Adjustment of black point with HistogramTransformation
     5b) CurvesTransformation for contrast/improving saturation
     6) DarkStructureEnhance script if needed
     ?) Star erosion: up until now I didn't really know where or how to do it effectively
     After reading your suggestions, this is what I would do. I have one image at the end of step 2); let's call it 2A. I barely stretch 2A with HistogramTransformation (I assume? - or with CurvesTransformation, doing an arcsinh curve?) by bringing the midpoint to the left, until I see as many stars as I am happy with having at the end. I duplicate the image and call it 2B. I remove the stars from 2A with StarNet. I continue with 2A, doing the rest of the workflow: 3) to finish stretching it to its full potential, then 4) to 6), etc. I then blend the image with the stars (2B) back in, using the PixelMath lighten-equivalent expression: max(2A, c*2B), where 0 < c < 1 (as you said, this needs experimenting with the coefficient to find an appropriate blend). Does this sound about right? Thanks again, and I hope everything goes well with the legal battles you are facing. Matteo
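As a rough illustration of what that lighten-style blend does numerically, here is a numpy sketch of the max(2A, c*2B) idea (not PixInsight code; the arrays and the coefficient c are placeholders):

```python
import numpy as np

def lighten_blend(starless, stars, c=0.6):
    """Per-pixel equivalent of the PixelMath expression max(starless, c * stars)."""
    return np.maximum(starless, c * stars)

# Random arrays standing in for the stretched starless image (2A)
# and the lightly stretched star layer (2B)
starless = np.random.rand(512, 512).astype(np.float32)
stars = np.random.rand(512, 512).astype(np.float32)
blended = lighten_blend(starless, stars, c=0.6)
```

The coefficient c just scales the star layer down before the per-pixel maximum is taken, so stars brighten the starless image only where they outshine it.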
  16. Nice shots, Ruz! Very beautiful images. Answering your question: so far, the Heart Nebula (IC 1805), just because it's the first image I am truly happy with since I started down the road of digital astrophotography at the end of January of this year. A new, dedicated telescope, as opposed to a badly performing 70-300mm kit zoom lens, finally gave me the quality I was after. However, for sentimental reasons, it has to be the Orion Nebula (M42), since it is the first DSO I ever observed through my own telescope, when I was a child.
  17. Sorry, one more question. If I remember correctly, the image needs to be in its non-linear form for StarNet to work. How do I remove the linear version of the stars, if I need to stretch the image before obtaining them? This would inevitably stretch the stars, as well, which is what we are trying to avoid, in order to paste them later, with a more subtle stretch.
  18. Shouldn't leave your precious scope out in the rain. This sets a bad example of how not to take care of your hard earned equipment! 😅
  19. When you get to the astrophotography part of the hobby, that's when the cash flow (out of your bank account, not in, mind you) will really start to get astronomical... 🤣
  20. The mount (NEQ6 Pro) and the imaging rig (Tecnosky 80/480 APO + guide scope / cameras) are in a corner of the living room, by the front door, ready to be taken out at a moment's notice. The C8 and the rest of the gear (imaging laptop, cables, power supplies, tracing pad for flats) are in a cabinet, close enough to the mount and ready to be taken out as well. I used to dismantle everything and put the tripod, mount head and telescopes/accessories out of the way in a seldom-used room, so that they were completely out of view. But thankfully I managed to convince the wife that if I wanted to get any imaging done, the gear needed to be close to the front garden and assembled in as few blocks as I could still safely move around; otherwise it would have ended up unused and basically useless.
  21. Glad I could be of help! It's always nice to be helped, but it's just as nice to be able to give back/help others. Even if I only started about 9 months ago, I try my best to spread the little knowledge I have acquired so far. I'll try your workflow, as you mentioned. I usually do a second pass of noise reduction after the stretch, in the non-linear phase, but that's just to smooth out the background after the stretch/curve transformations. I guess that since these won't be applied to the star layer, the second noise reduction shouldn't be necessary, either. I have also tried many, many times with deconvolution, but I don't seem to have found the "magic" settings for it, yet. I can get the process to make the stars tighter and bring out more details on the nebular features, but it also introduces a sort of "orange peel" effect on the background. I would definitely like to learn how to make it work, because I feel it's a very powerful tool to have in one's post-processing tool box. Thank you for the appreciation of my image, I'll keep posting as I capture more, now that I finally have optics that perform as I like. You don't want to see what I captured in these past 9 months with the 70-300mm zoom lens. Unless you want to see comets, instead of stars... I also saw (and liked) your H-alpha image of the Heart Nebula and I must say it's very impressive and nicely processed! Matteo
  22. Thank you again for pointing me to this link; it seems like a very powerful method. I really like the idea of giving a small stretch to the linear version of the image with only the stars before adding it back into the starless image. This way, the stars don't go through all the heavy stretching the rest of the image will go through, for obvious reasons. I assume the same noise reduction routines need to be applied to both, though, otherwise you will see the difference in how smooth/soft the pixels in the stars are compared to the rest of the image. I guess some small amount of convolution could be done to the star layer as well, before applying it, to soften the transition between the star cores and the background. I'll be sure to try out these techniques the next time I process an image, or if I reprocess this one of the Heart Nebula. Plenty of humid/foggy/cloudy nights in this period, unfortunately, so I might as well invest the time in improving my post-processing skills. Since I would like to do all my post-processing in PixInsight, is there an equivalent to adding back the star layer with a "lighten" (or "screen", as suggested by Kyle, above) option, but using PixelMath in PixInsight? It would really be awesome if it could be done without having to change software. EDIT: looks like I answered my own question. For those who might be interested, here's the list of the blending mode equivalences between Photoshop and PixInsight PixelMath. Credit: Rogelio Bernal Andreo.
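For reference, the "screen" mode Kyle mentioned has a simple standard definition for images normalised to [0, 1]; here is a small numpy sketch of it (again just an illustration, not the PixelMath syntax itself):

```python
import numpy as np

def screen(a, b):
    """Standard screen blend: invert both layers, multiply, invert back."""
    return 1.0 - (1.0 - a) * (1.0 - b)

# Unlike a lighten blend (per-pixel max), screen brightens wherever both
# layers carry signal, which gives a softer transition around star halos.
```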
  23. My all-time favorite is Orion. M42 was the first deep sky object I saw through a telescope, when I was little. If I had to break it down by season:
     Spring: haven't found anything yet. My mount and telescopes are not good enough for galaxy imaging, and unfortunately galaxies are all that Spring seems to be full of.
     Summer: Cygnus, Sagittarius and Scorpius
     Autumn: Cassiopeia and Auriga
     Winter: Orion and Monoceros