Everything posted by Annehouw

  1. I have experimented with that, and they do work. There is a slight downside in that they take away a bit of contrast. In the end I decided not to use them; I do not dislike spikes anyway. There was a discussion on this a while back about their use on an alt-az telescope. Because spikes rotate in that configuration, they do need to be fixed in that case. Discussion here: https://pixinsight.com/forum/index.php?threads/diffraction-spikes-rotation-on-alt-az-telescope.14364/#post-88091
  2. @scotty38 These halos might be caused by the filter, by something else in your optical train, or by a combination of the two. I have had reflection problems that went away when I placed the filter behind the reducer/flattener instead of in front of it. I wrote about my experiences with dual narrowband filters and halos here: https://www.cloudynights.com/topic/805014-hfg1-and-something-on-dual-narrow-band-filters/
  3. I was a bit lazy in my wording on "dust lanes". Many of the dark structures seen in the image could just be differences in star density. In general, globular clusters are made up of a rather homogeneous collection of very old stars without a lot of elements heavier than H and He ("metal-poor"). Whether there is dust, and whether that dust is in the foreground or in the cluster itself, is a bit tricky to answer. The best way to go about this is to look at the cluster at infrared wavelengths for dust re-emission of starlight. The dust in the disk of the galaxy is heated to about 20 K by the overall radiation field. In globular clusters the stars are tightly packed, so the radiation is more intense, and dust (if any) should be heated to roughly 50-80 K. So the IR signatures of dust in globular clusters and of dust in the galactic disk should be different. Whether this has been done for M13, I do not know. I did find that the Spitzer IR telescope has had a look at M15 and indeed found dust, but a very minute amount (on the order of 0.0001 solar masses for the whole of M15). The most likely source of that dust is the wind of stars in late evolutionary stages (giants), where a lot of material from the outer shell is blown away, carrying some dust with it. As for dwarf galaxies, they have a more mixed population of stars and also a lot more gas (and a completely different evolutionary history). I hope this is interesting and answers your question.
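For the curious, those temperature figures can be turned into expected emission wavelengths with a quick back-of-the-envelope calculation (my own addition, using Wien's displacement law; the temperatures are the ones quoted above):

```python
# Where does thermal dust emission peak? Wien's displacement law:
# lambda_max = b / T, with b ~ 2898 micrometre-kelvin.

WIEN_B_UM_K = 2898.0  # Wien's displacement constant in um*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Peak emission wavelength (micrometres) of a blackbody at temperature T."""
    return WIEN_B_UM_K / temperature_k

for label, t_k in [("Galactic disk dust", 20.0),
                   ("Globular cluster dust, low", 50.0),
                   ("Globular cluster dust, high", 80.0)]:
    print(f"{label}: {t_k:.0f} K -> peak ~ {peak_wavelength_um(t_k):.0f} um")

# 20 K dust peaks around 145 um (far-infrared), 50-80 K dust around 36-58 um,
# which is why the two dust populations look different to an IR telescope.
```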
  4. @Richard_ I do not use Starnet anymore, but StarXTerminator instead. I did try StarXTerminator after GHS, but it left white dots everywhere stars were removed. I think the GHS star shapes deviate too much from the star shapes on which the model was trained. Best to use a mild HT for that purpose. (StarXTerminator also works on linear images in PI, but the results are slightly inferior to removal after a mild HT.)
  5. To give a better idea of what GHS can accomplish, I made a compilation of 4 stretching methods: Generalised Hyperbolic Stretch (GHS), Masked Stretch (MS), Histogram Transformation (HT) and Arcsinh stretch (ASH), the last of which is actually a combination of a very mild HT stretch followed by ASH (otherwise it becomes a bit ugly...). The stretches are not 100% equivalent, but they give an idea. The insets show the bright yellow star on the right and a star to the left of it. As can be seen, GHS looks a lot like MS. MS retains a bit more color, but its downside is that it has created a harsh plateau in the star brightness, resulting in a distracting dark ring in the bright star. HT puts a lot of light into the halo and gives a relatively contrasty image (to be corrected with Local Histogram Transformation or HDRMT). ASH is the colorful one. That is nice. Unfortunately, it also gives rise to the scary phenomenon of "rainbow stars", which you can see in the inset of the smaller star. This is an artifact that is very difficult to correct. For the "final" M13 image I used the GHS stretch as a basis, followed by stretches to bring a bit of contrast into the image, increased saturation on the globular cluster and a bit on the stars, desaturation on the background, and local stretches to bring out the details in the core of M13.
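To make the ASH behaviour concrete: the arcsinh stretch multiplies all three channels by the same luminance-based factor, which preserves hue right up until a channel clips at 1.0; that clipping in bright star cores is a plausible source of the "rainbow star" artifacts. A minimal numpy sketch of the idea (my own illustration; PI's ArcsinhStretch will differ in detail):

```python
import numpy as np

def arcsinh_stretch(rgb: np.ndarray, stretch: float = 100.0) -> np.ndarray:
    """Color-preserving arcsinh stretch of a linear [0,1] RGB image.

    All channels are multiplied by the same factor asinh(s*L)/(L*asinh(s)),
    so the R:G:B ratios (hue) are preserved -- until a channel exceeds 1.0
    and must be clipped, which breaks the hue in the brightest stars.
    """
    lum = rgb.mean(axis=-1, keepdims=True)          # simple luminance estimate
    eps = 1e-12                                     # avoid division by zero
    factor = np.arcsinh(stretch * lum) / (np.arcsinh(stretch) * (lum + eps))
    return np.clip(rgb * factor, 0.0, 1.0)          # clipping happens here

# Example: a faint reddish pixel is boosted ~19x but keeps its 2:1:1 hue.
faint = np.array([[0.002, 0.001, 0.001]])
print(arcsinh_stretch(faint))
```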
  6. I had a go at it with an image that I still hadn't processed. To be honest, it took me a while to get a feel for the interplay of the different parameters, but the video helped tremendously. I used GHS for the basic stretch and did a bunch of global and local stuff afterwards to bring out color and dust lanes (there is the low-contrast "three-blade propeller" at the two o'clock position close to the core, but many others as well). I will chew on this rendition for a while to decide whether, and how much, it is overprocessed, but I would like to say thank you to Dave and Mike for adding another tool to the toolbox! M13:
  7. Thank you, @pietervdv. I had planned to go for 100 hours, just for bragging rights, but I gave up. To be honest, I have no desire to dedicate such an amount of time to one subject again.
  8. @Sunshine Thanks. It is a planetary nebula. The particular shape is due to the high speed of the star that creates the nebula.
  9. More than two years ago, I replied to a request for suggestions on "weird Ha objects" by @Astrosurf here: https://stargazerslounge.com/topic/337304-weird-ha-objects-suggestions/ Two years and 86 hours of exposure later, I have finished doing myself what I suggested then. This must be the faintest object I have ever imaged. It is hardly visible on individual subs from Bortle 5, and it is highest in dark(ish) skies at the time of year when the weather here in the Netherlands is at its most unstable (late autumn). OIII is faint, and Ha is even fainter. This is not a subject you can shoot next to a bright moon; in that respect it is best to image it as you would a non-narrowband subject: with as little moon in the sky as possible and as high in the sky as possible. At the time, I called it a "cool subject", mainly because of its name, but also because of the way it looks. After finishing the image I did a bit of internet digging into current research on this object, and it turns out to be "cool" in the astrophysical sense as well. The write-up on that, the acquisition details and a bigger version of the image can be found here: https://www.astrobin.com/bd33ox/
  10. Resurrecting this old thread... Shortly after I first published this image of M63 showing the tidal stream, I was graciously given many hours of images by Oliver Penrice. Thank you very much, Olly! At that time I was experiencing something of a processing burnout; I had spent way too many hours behind the screen to bring out M63 and its halo and tidal stream. I have recovered, though, and I have integrated this data into my earlier image. I have attached an inverted black-and-white version here for the sake of visibility. To celebrate the occasion, I did a fairly extensive astrophysical write-up on the subject of tidal streams. I tried to keep it popular, but it does cover a lot of ground in a few short paragraphs. You can find the final colour image and the write-up, with lots of links to further reading, here: https://www.astrobin.com/ru40fg/F/
  11. Hi ramxis, Have a look here: https://darksitefinder.com/maps/world.html#8/50.179/9.877 Even better here: https://www.lightpollutionmap.info/#zoom=7.80&lat=50.1755&lon=9.2515&layers=B0FFFFFTFFFFFFFFFF Frankfurt is bad, but if you go a bit to the east, it gets a lot better. Have fun! For your smartphone, there is also this: https://www.darkskymap.com/ The fun thing about an app is that you can turn on your GPS and see how far you have to go from your current position. White is bad, red is bad, yellow is better, shades of green are quite OK, blue is even better and dark, dark blue is fantastic (this colour scheme applies to both the websites and the app).
  12. Hi Dave, Yes, I did try that. It was my first go at this, and it works well once you have isolated the Ha excess signal. Without doing that, since the core of the galaxy is quite bright in Ha, just adding the Ha image to the RGB image resulted in a red core area. So that was a dead end for me. I guess you could do the continuum subtraction in PS through a set of subtract blend mode operations, but I suspect this is one area where PI is less complicated than PS 😉 I do use PS in my workflow, as some things are (like you say) just so simple in PS. First of all to clean up masks I make in PI; the clone tool in PI is clumsy. Furthermore, I like to make image versions in PI, each with its own strong points, and then layer them in PS, playing with opacities and masking at will (either using the masks I made in PI or just using PS masking).
  13. I have written a short description based on this image and the physics happening around the core region of M106: https://www.astrobin.com/uk8p8v/ It is for those who are interested in the physics of outer space (as I am). In this forum, I would like to say a bit about the image processing, as that might be of interest to some of you using PixInsight. Note that I also posted this on CN.
      The goal
      This image was processed with the plan to show the main features of the interesting activity that M106 shows around the core. As such, saturation and color contrast have been turned up quite a bit (masked).
      Ha addition to RGB galaxy data
      The anomalous arms (see Astrobin for the explanation) are fairly easy to see in a Ha image, but getting the delicate gas streams well defined when added to an RGB image proved to be very finicky, as the contrast between them and the red background signal is not that high. Furthermore, this is an image made with an OSC camera and an STC-DUO multi-narrowband filter. The Ha signal is only captured by the red Bayer-filtered pixels, which diminishes the spatial resolution of the Ha part. I struggled quite a bit, and after some experimentation with adding Ha in the non-linear phase using image scale operators, I ended up with a manual subtraction of the red continuum in the linear phase, as described in an accessible way by Edoardo Luca Radice: http://www.arciereceleste.it/tutorial-pixinsight/cat-tutorial-eng/85-enhance-galaxy-ha-eng I used both a "clean" Ha-derived image and a range mask from that "clean_Ha" image to add the clean Ha contribution to the image. Note that the continuum subtraction method only works with linear data! Using it on non-linear data turns the whole area red. There is also a CS (continuum subtraction) script that should do the arithmetic of the Ha fraction to the red signal, but that did not give me an acceptable result. I do not know why.
      Star Halo Reduction
      My telescope/filter/camera combination gives me prominent halos. I used the method described by Nuno Pina Cabral to reduce the worst of them. You can find the method here: http://nunopinacphotos.blogspot.com/2016/03/how-to-remove-star-halos-using-wavelets.html Note that I used HDRMultiscaleTransform instead of ATrousWaveletTransform (or MLT/MMT for that matter), as it gave me somewhat cleaner results. The beauty of the method is that it tackles all halos in one go.
      EZ Processing
      This was the first time I used the EZ Suite noise reduction and deconvolution scripts. They work brilliantly (although I did have some PI crashes, but I had been warned...). What a nice concept that is.
      I hope that this description adds something to someone's PI journey. Processing this image took nearly as long as the total exposure time, as I had to find my way into the methods above and because I made a lot of image versions varying the amount of Ha contribution before settling on this one. Equipment and acquisition details, together with a larger version of the image and the physics explanation, are on Astrobin.
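For those who want to see the arithmetic behind the continuum subtraction, here is a minimal numpy sketch of the idea (my own illustration of the linear-phase subtraction, not Radice's exact PixelMath; the scale factor q is the thing you tune):

```python
import numpy as np

def continuum_subtract(ha: np.ndarray, red: np.ndarray, q: float) -> np.ndarray:
    """Estimate the pure-emission Ha signal from LINEAR data.

    ha  : narrowband Ha frame (linear, background-subtracted)
    red : broadband red frame (linear, background-subtracted), which contains
          continuum (starlight) plus the same Ha emission
    q   : factor matching the continuum level seen through both filters,
          roughly the ratio of the filter bandwidths

    Stars carry continuum, the nebula carries line emission, so subtracting a
    scaled red frame removes most starlight and leaves the Ha excess. This
    only makes sense on linear data; after a stretch the subtraction no
    longer corresponds to physical flux, which is why non-linear data ends
    up uniformly red.
    """
    clean_ha = ha - q * red
    return np.clip(clean_ha, 0.0, None)  # negative residuals are noise

# Tune q until star residuals just vanish in clean_ha without eating the nebula.
```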
  14. Let me give you a contrarian view and advise you against the L-eXtreme. Given that you are planning to image with the Hyperstar, you will be living in fast-focal-ratio land, and everything is a little bit different there (and sometimes a lot... battling tilt comes to mind). For narrowband filters there is the issue of bandpass shift and the resulting drop in transmission off-axis. The L-eXtreme has bandpasses so narrow that its transmission at f/2 is severely impacted. This subject comes up every so often; an authoritative discussion can be found here: https://www.cloudynights.com/topic/734419-radian-triad-triad-ultra-performance-vs-f-ratio/#entry10585863 Long story short: these very narrow bandpass filters will give you the contrast between nebulae and background, but will reduce the light-gathering efficiency of the scope (for scopes faster than f/4-f/5). Having said that, my advice is to get yourself the somewhat broader version of the L-eXtreme, the L-eNhance.
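The transmission drop can be put into rough numbers. An interference filter's central wavelength blue-shifts with the angle of incidence as approximately lambda(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)^2). A sketch of that formula (my own addition; n_eff ~ 2 is an assumed typical effective index for narrowband interference filters, so check your filter's datasheet):

```python
import math

def blueshift_nm(center_nm: float, f_ratio: float, n_eff: float = 2.0) -> float:
    """Central-wavelength shift of an interference filter for the marginal
    ray of an f/N light cone (standard tilt formula; n_eff is filter-specific)."""
    theta = math.atan(1.0 / (2.0 * f_ratio))            # half-cone angle
    shifted = center_nm * math.sqrt(1.0 - (math.sin(theta) / n_eff) ** 2)
    return shifted - center_nm

for f in (2.0, 4.0, 5.0):
    print(f"f/{f:.0f}: Ha shift ~ {blueshift_nm(656.3, f):+.1f} nm")

# f/2: ~ -4.8 nm, which pushes Ha to the very edge of a 7 nm bandpass.
# f/4: ~ -1.3 nm and f/5: ~ -0.8 nm are largely harmless.
```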
  15. Holy smokes! That must be one beast of a machine. A stacking run of 400 subs in APP can easily take 4 hours on my machine (i5-2500K / 32 GB RAM) with 16 MB subs.
  16. As an add-on to the original question:
      APP has promised GPU support to speed up processing. Currently it only uses the GPU for (re)drawing the image, and there is no indication as to when real GPU support will be implemented.
      PI has promised GPU support (since 2015). Currently it has none, but I found a reference from Juan (the developer) to "hopefully" before the end of this year.
      StarTools has GPU support (1.7 alpha).
      PS and Affinity Photo have very limited GPU support, as far as I know.
      CPU performance is still very significant if you want speedy processing across the board. I have written elsewhere in this forum what GPU acceleration in StarTools does on my PC. Still, when building or upgrading a PC, I would suggest balancing the expenditure:
      - A modern CPU with a lot of cores (the AMD 3700X comes to mind, or the 3600 if you want to keep it a bit cheaper), as APP and PI make proper use of multiple cores
      - A decent GPU, but not the latest and greatest, as those are a) very expensive and b) hard to get
      - 32 GB of RAM as a sweet spot
      - An SSD for the OS and another SSD for image and swap space
  17. Thanks for the suggestion, Ivo. Actually it is LGA1155 (with an i5-2500K). I had never thought about upgrading it with a Xeon processor. Since Moore's law stalled, I have not really looked into speed, aside from an SSD and extra memory. It is my professional production machine and runs very stably, so I am not inclined to tinker with it. Recent developments (as in Zen 3 today) might, however, persuade me to plan a motherboard/CPU/memory upgrade at some point in the near future.
  18. One of the best insights into filter performance I know of comes from Jim Thompson. Here: http://karmalimbo.com/aro/reports/reports.htm you can download (via the link "multinarrowband filters") actual spectral measurements of bandpasses and transmission percentages as a function of angle of incidence (on page 16 translated to f-ratio). It is specifically about multi-narrowband filters, but the same principles apply generally. At the bottom of the page linked above, you can find a comparison between the popular Optolong L-eNhance and L-eXtreme filters (the latter currently made of unobtainium). On page 11 of said report he draws angle-sensitivity graphs for several spectral lines through each filter. Very interesting reading material. Jim posted his findings on CN some time ago.
  19. I couldn't resist and did the comparison, but had to bin the image 50% to keep my sanity. So now, on a 2420x1520 image:
      StarTools 1.7 non-GPU: deconvolution, default settings: 165 seconds
      StarTools 1.7 GPU edition: deconvolution, default settings: 15 seconds
      That is a massive 11-fold speed increase.
  20. It is probably a good idea to forgo the GPU utilisation discussion and look at what it means in practice. To that end, I loaded this image (4851 x 3044 pixels) and measured two actions with a stopwatch:
      StarTools 1.6 (CPU only: 10-year-old Intel i5)
      - AutoDev (with ROI): 13 seconds
      - Deconvolution (PSF Moffat, beta = 4.765, radius = 2 px, 7 iterations, rest default): 132 seconds
      StarTools 1.7 (CPU i5 + GPU: NVidia GTX 1070)
      - AutoDev (with ROI): 3 seconds
      - Deconvolution (same settings as above): 39 seconds
      So on my system 1.7 is 3 to 4 times faster than 1.6. Really impressive. P.S.: I bought my video card second-hand from a crypto miner after the crypto-mining rush was well past its peak, so I got a good deal on it.
  21. Hello Ivo, Nice of you to comment from down under! I do not know whether a full discussion of the internals of StarTools is appropriate here or better on the StarTools forum; I have not posted my comments there. What I see can still be novice user error. The main takeaway here is that I was impressed with StarTools. Having said that:
      GPU support
      First of all: other software developers have promised GPU support for some time; you are delivering. I am aware that it is not a trivial undertaking, so hats off for that! Comparing non-GPU-assisted Starnet++ with GPU-assisted Starnet++, I saw (in Windows Task Manager) a full GPU load for 25 seconds before the task completed. And that made me very happy, because it is a 5x speed increase. Now, I do not know the internals of Starnet++ or StarTools, so like you say it might just be the nature of the computation that makes this so. Anyway, I did measure a deconvolution in StarTools (1.7.422 alpha), but now a bit more precisely using Gen-Z. I have attached a screenshot and, as you mentioned, there are indeed 100% utilisation spikes with 30-40% bumps as well (disregard the 1% number; that is just the value at the later moment of screen capture).
      Star masks
      I did use the automated star mask generation when prompted, but I also built a star mask using the "stars" and "fat stars" options with the old mask set to "add new to old". This renders decent star masks. Tinkering with that (growing a few times and then shrinking) did produce square dots, however. I think this is just my lack of understanding of how to manipulate the mask feature. But there is more (and that is not limited to StarTools): my stars have spikes and halos. This is because of a 3-element Wynne corrector in front of the sensor, filters, and less-than-perfect AR coating on sensor and camera. See the attached image. As a result most of my stars look fattish, and star control is one of my most difficult challenges. StarTools masks (and PI masks, for that matter) generate round stars. The best way to date for me is a rather convoluted round of processing (the subtraction step is sketched in code after this post):
      - (Auto)Develop the image using an ROI to get somewhat contrasty stars.
      - Use Starnet++ to generate a starless image. This is never 100% correct, because Starnet++ was not trained on my type of star shapes, and a lot of brighter stars and their halos remain partially in the "starless" image. Sometimes this looks rather ugly.
      - In Photoshop: remove the remaining partial stars, halos and other artefacts from the "starless" image using healing/cloning and such.
      - Subtract that image from the original image (can be done in Photoshop as well) to get a fairly accurate star mask representative of my actual star shapes.
      - Apply curves or levels to tighten the star mask if needed.
      - Bring that mask into StarTools (or any other image processor).
      So, this is highly specific to my situation, and most people can probably get by with automatically generated star masks. I have not found the holy grail yet. In my experience, star control has a lot of impact on the appearance of the image; it just looks a lot cleaner. I did use the shrink tool in StarTools, by the way, and that works fine. Having a really accurate star mask remains key for me. But that is just my personal opinion in trying to get the most out of what I spent 20+ hours imaging (in a cloudy climate). I hope this clarifies my "star mask" remark. Kind regards, Anne
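The subtraction step in the list above is plain pixel arithmetic. A minimal numpy sketch of it (my own, assuming stretched [0,1] images already loaded as arrays; gamma and floor are placeholder values to tune):

```python
import numpy as np

def star_mask(original: np.ndarray, starless: np.ndarray,
              gamma: float = 0.5, floor: float = 0.02) -> np.ndarray:
    """Build a star mask from a (cleaned-up) starless rendition.

    original, starless : the same stretched image with and without stars, [0,1]
    gamma              : <1 widens/brightens the mask, >1 tightens it
    floor              : clips low-level residual noise out of the mask
    """
    stars_only = np.clip(original - starless, 0.0, 1.0)  # what was removed
    stars_only[stars_only < floor] = 0.0                 # suppress noise residue
    return stars_only ** gamma                           # curves-style tightening

# mask = star_mask(img, img_starless, gamma=0.7, floor=0.03)
```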
  22. I have always been interested in StarTools. It has a very distinct philosophy of how to process astro images. I have had a license since version 1.3 (many years ago), but had never been in love with the product. At the time I was imaging with a 50-megapixel DSLR, and that was way too much for the processor hog that StarTools is; it does a lot of computational analyses, and it just froze. Now, with version 1.7 (alpha at the moment), there is GPU support. I have a decent GPU (GTX 1070) for video editing, and I decided to give it another look. I am proficient in Photoshop and somewhat intermediate in PixInsight; I had to start from zero with StarTools. I must say that I am very impressed. I gave it an image from my QHY168C, taken last spring. After one or two video tutorials and a read through the forums, I was on my way. It is very easy to get a decent image out of it, even using all the defaults: just follow the course of the tabs on the left-hand side and there it is. Of course that does not give the optimum result, but 80% of it, I would say. The hardest part for me was generating good star masks where needed (the same issue as in PixInsight). Attached is the result after two tries using StarTools alone. Most of the processing time is waiting for the computer to do its magic (GPU support helps, but I measured utilisations between 6 and 10%, so there is room for more speed). I am sure that with time and more experience I can get better images out of StarTools, but for now I took the RGB image, developed the Ha image in StarTools as well, generated a starless Ha version with Starnet++ (from within PI and configured for GPU support) and also generated a star mask with Starnet++ from the RGB image for later use. I took all of that, loaded the images and masks into Photoshop, and did a layer combination of RGB and Ha plus some color and contrast tweaks (purely subjective and to my liking). You can find the final (StarTools + Photoshop) result and the acquisition details here: https://www.astrobin.com/6mnkki/0/ I will continue to use AstroPixelProcessor for stacking and PixInsight and Photoshop for post-production, but will certainly dive deeper into StarTools. It is a great tool to have in the kit. And... for beginners it is a really good way to start: the defaults work out of the box, and you can tweak a lot of parameters as your experience grows.
  23. Thanks, all! @Craney: As this practice of long-boating and flaming has used up most of the original forests of Iceland, I took the remnants home and Frankensteined it (super glue, no stitches). Unfortunately its character was not as benign as the original's, so I had to kill it off unceremoniously.
  24. A bit of a sentimental title, agreed. I used to go to Iceland in winter quite regularly, walking through ice caves and hunting the northern lights. Maybe that routine will return some day. Anyway: if you have witnessed the aurora, you know that it can change quite rapidly. This picture shows some highlights from a particularly active night on the shores of the glacial lake Jökulsárlón in Iceland. I changed position a bit to capture the action in the sky, and as the intensity varied quite a lot, I also changed ISO. The top image is 20 s at ISO 3200; the middle and bottom images are 20 s at ISO 800. The 14 mm lens was used wide open at f/2.8. The Samyang lens met a hero's fate in Iceland: it was attached to my camera on top of a tripod, Iceland is a windy place, and as the equipment was left unattended for a few seconds, it was blown over. The lens took the hit and fell apart; the camera survived without a scratch.
  25. @vlaiv: Yes, indeed you can. It gives you a light-pollution-type filter on your luminance. What you cannot do is separate the channels, something you can do with an OSC, at least at the level of red (Ha+SII), green+blue (OIII) and blue (Hb). There is another quality that separates these filters, aside from bandwidth, and that is the effectiveness of the anti-reflection coatings. I have used two of the filters mentioned, and one of them gives larger reflections around stars in my imaging system. It is difficult to give a general recommendation on this, as you need to test the filters in the same light train to compare them (and also compare fast systems with slower ones). Given adequate funds and a bit of time, this would be a worthwhile addition to the price/benefit comparisons.