Search the Community

Showing results for tags 'processing'.

Found 26 results

  1. Hi, I would like to share with you an article I wrote on high-resolution solar imaging in different wavelengths. Glad that the European Physical Journal (EPJ) Web of Conferences published it. You can read it at: https://www.epj-conferences.org/articles/epjconf/abs/2020/16/epjconf_seaan2020_01002/epjconf_seaan2020_01002.html You can see the different layers of the Sun in high-resolution images using different setups. Best regards. Alfred
  2. Here in the United States, most of our children are on some type of remote learning program due to yada yada. During processing my images and helping out the kids, some mumblings of " the color purple would look neat." was heard over my shoulder...
  3. Hi, I'm working with some recent data from a new(ish) camera. My Ha data is pretty good, but the OIII has these strange artifacts around the borders. If I use any kind of local normalization it pretty much ruins most of the image, as you can see below. First the Ha: OIII with local normalization: OIII local normalization map, see how it fits with the artifacts: OIII without any local normalization: Could these edge artifacts be related to a bad filter, or is it a processing artifact? Has anyone seen anything similar? I am using Astro Pixel Processor for calibration and stacking. I'm kind of stumped on this one, and I am going to yank out the OIII filter this weekend to have a looksie.
  4. Hi everyone, I've been spending some time processing over a cloudy Christmas and realised the thing I find most daunting, difficult and annoying is creating star masks. So my question is: is there a way of creating star masks (in PixInsight preferably, but open to other ways!) which is (a) always accurate, (b) relatively quick and (c) repeatable? I've worked through LVA tutorials and looked at David Ault's technique. I also have the Bracken book to go through. The main techniques seem to be: 1. Stretch the extracted lightness, clip the low end, bring down the highlights, then use the Star Mask process; I get very inconsistent results with this approach. 2. Similar to the above, but use MMT/MLT to remove nebulosity to create a support image, then use Star Masks at different star scales to capture all stars and use PixelMath to put them all onto one image; I find this very time consuming, with lots of noise-setting fiddling. I am very interested to see how people go about this and whether there are any neat tips and tricks to help the process! Thanks!
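Stripped of the PixInsight specifics, most star-mask recipes boil down to "stretch, threshold, grow". A minimal hedged sketch of that core idea in Python; the function name, the numpy/scipy implementation and the parameter values are all illustrative, not how the StarMask process works internally:

```python
import numpy as np
from scipy import ndimage

# A minimal, hedged sketch of one star-mask approach (threshold and
# grow), not PixInsight's StarMask process itself; the function name
# and parameter values are illustrative only.
def simple_star_mask(lum, k=5.0, grow=1):
    """Binary mask of pixels more than k sigma above the median,
    dilated by `grow` steps so the mask also covers star edges."""
    threshold = np.median(lum) + k * np.std(lum)
    return ndimage.binary_dilation(lum > threshold, iterations=grow)

# Toy frame: flat background with one bright "star" at the centre.
frame = np.full((9, 9), 0.1)
frame[4, 4] = 1.0
mask = simple_star_mask(frame)
```

Real tools add multi-scale separation and structure growth on top, but the threshold-and-dilate core is the part worth experimenting with first for repeatability.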
  5. Not quite sure what to put in the title here! I've been thinking about possible future approaches to deep-sky imaging, especially Emil Kraaikamp's approach of taking thousands of 1 s images. The datasets this produces are obviously huge: a single frame from my QHY163M is ~31.2 MB. Even at 5 s subs, that's 22+ GB/hr of data. Now, that's fine in theory to process (though it'll take a *long* time to chew through, even on an 8-core i7!): sufficient disk space is practical, and I'd scrub the unreduced files once happy anyhow. However, long-term storage is an issue. I couldn't keep all the subs here; even with poor UK weather, you're potentially looking at tens of terabytes of data in the relatively short term, worse when backing it all up. Is there a feasible approach to keeping only stacked, reduced data (which is obviously much smaller) whereby you can still add to the data at a later point in time? I was thinking along the lines of: take x hrs, save the reduced, stacked data as a single linear frame; later on (possibly years later!), take another y hrs of images and somehow combine them with the x hrs from before (and repeat as needed). Could this be achieved while still allowing relevant pixel rejection, weightings, noise reduction, etc.? Anyone have a PI workflow that allows this? I guess the same approach applies to standard CCD data, though the data savings are orders of magnitude less... Thanks!
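For the simplest version of this idea, with no per-sub pixel rejection, combining old and new data reduces to an exposure-weighted mean of the saved linear stacks. A hedged sketch (function and variable names are illustrative; note that the real catch the post raises still stands: per-sub rejection and weighting genuinely need the original subs):

```python
import numpy as np

# Illustrative sketch only: exposure-time-weighted mean of two
# already-reduced linear stacks of the same target. This cannot
# redo pixel rejection across the discarded subs.
def combine_stacks(stack_a, hours_a, stack_b, hours_b):
    """Weighted mean of two linear stacks, weighted by total
    integration time of each stack."""
    return (stack_a * hours_a + stack_b * hours_b) / (hours_a + hours_b)

# Example: a 4-hour stack merged with a later 2-hour stack.
old_stack = np.full((4, 4), 100.0)
new_stack = np.full((4, 4), 106.0)
merged = combine_stacks(old_stack, 4.0, new_stack, 2.0)
```

A refinement would be to weight by estimated noise (inverse variance) rather than raw hours, which handles nights of differing quality; either way, keeping the stacks linear and unstretched is what makes later combination meaningful.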
  6. Can someone please give me some guidance on processing a luminance stack in PixInsight. I've looked at a number of web resources and Keller's book but I'm still a bit confused about what I might expect to achieve: what should the end result look like? Apologies for this being M101 again, and for the quality of the image, but it was high in the sky and convenient for a short night with non-optimal seeing. The image comprises 19x300s subs (you know, I don't know why I didn't take 20, with my OCD and all that) which I've calibrated with 50 each of darks, bias and flats, then registered and stacked in PixInsight. I've used STF to pull the image from the background and then saved it as a jpg so you can view it. It's taken with an Atik428ex mono on a SW80ED DS-Pro + NEQ6-Pro using SGP and PHD, focused as best I could given the seeing conditions. I think I am getting the hang of processing OSC RGB images in PI but I'm not sure of the best approach with a luminance image and what I might expect to achieve. I've included the fit and xisf files if someone wouldn't mind having a go and showing me what is achievable and what I should typically aim for, and how! I am keen to get to grips with the mechanics before those long dark nights set in, whenever that might be! As ever, many thanks in anticipation of any help/guidance/advice. Adrian P.S. ImageAnalysis gives me the following stats on the image: Is this good, bad or indifferent? What is 'good' and what is 'bad' or 'in need of improvement'? Thank you M101-luminance.fit M101-Luminance.xisf
  7. Yeah, back again (sorry). But having sorted out my problems with my star halos caused by the settings in DSS, I have now had a go at combining two images to keep the core from burning out. Not going to win any awards but I am quite happy with it as my "first" proper Orion shot. I'm proud enough of it I sent a copy to my mum! Essentially: 27 of 29 60s ISO1600 shots stacked for the Nebula 30 of 30 20s ISO400 shots stacked for the Core Stacked in DSS with minimum processing to match the output TIF files. TIF Files opened as layers in GIMP, aligned and a layer mask attached to the top Nebula Layer. Used the paintbrush to blend the two layers and saved as one after a bit more processing in GIMP. I think the ISO1600 shots over-cooked it a bit, so there is some noise in the darker areas, but I'll just go back to ISO800 for the next ones. First image shows the output from the Nebula stack, complete with burnt out core And then the second image after blending the core As I said, it won't win any awards, but I am happy that I am making some progress. Next to try and eke out a little more for the subs and try and get them up to 90s and will keep trying to get more detail out of the images. So to everyone starting out like me.....keep at it, it is amazing how quickly you can progress in your knowledge of this astrophotography lark, so don't give up. Obviously the more you learn and the better you get, the more it is going to cost you, but the nights of banging your head against a brick wall of clouds, rain, bad subs, poor processing and general frustration are worth it when you finally get a picture that you like. And thank you for everyone for humouring and helping me this far. I'm bound to be back again and again for more questions but, hopefully, I'll be able to start helping others as well! Clear Skies!
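The GIMP layer-mask blend described above is, numerically, just a per-pixel linear interpolation between the two stacks. A small illustrative sketch of what the paintbrush-on-a-mask workflow computes (all names and values here are made up for the example):

```python
import numpy as np

# Hedged sketch of the layer-mask blend: where the painted mask is 1
# the short-exposure core stack shows through; where it is 0 the
# long-exposure nebula layer is kept. Illustrative values only.
def mask_blend(nebula, core, mask):
    """Per-pixel linear blend of two aligned stacks via a 0..1 mask."""
    return nebula * (1.0 - mask) + core * mask

nebula = np.array([[0.9, 0.2]])  # long exposure: first pixel burnt out
core = np.array([[0.5, 0.1]])    # short exposure: core detail retained
mask = np.array([[1.0, 0.0]])    # mask painted white over the core only
out = mask_blend(nebula, core, mask)
```

Using a soft-edged (feathered) mask rather than hard 0/1 values is what makes the transition between core and nebula invisible in the final image.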
  8. Hey SGL, I was wondering in what order to process DSOs, and whether different types of DSO require a different order. This is still basic: one set of lights, single shots (i.e. not using filters to split colours), etc. Tutorials such as BudgetAstro suggest: DSS → Align Colours → Remove Vignetting (PS if no flats taken) → Star Mask → Levels → Curves → Match Colour / Reduce Noise / Sharpen. I noticed that before I watched the BudgetAstro videos my pictures turned out more vivid; when is saturation done?
  9. Hey all, I made an acquisition and processing tutorial a while back (3 years ago? Yikes!) and it is fairly dated in terms of what I'm doing these days. I've been asked for a long time to make a new one showing my current workflow. Specifically, how I'm processing a single-shot image for both the surface and prominences, and how to process them together to show prominences and the surface at once. I've abandoned doing split images and composites and strictly work from one image using layers. Acquisition does not use gamma at all anymore. Nothing terribly fancy, but it's not exactly intuitive, so hopefully this new video will illustrate most of the fundamentals to get you started. Instead of an hour, this time it's only 18 minutes, in real time from start to finish. I'm sorry for the long "waiting periods" where I'm just waiting for the software to finish its routine; these typically last 1.5 minutes at most. The first 4 minutes is literally just stacking & alignment in AS!3. I typically go faster than this, but wanted to slow down enough to talk through what I'm doing as I do it. Hopefully you can see each action on the screen. I may have made a few mistakes or used a few incorrect terms; forgive me for that, this is not my day job. I really hope it helps folks get more into processing, as it's not difficult or intimidating when you see a simple process with only a few things that are used. The key is good data to begin with and a good exposure value. Today's data came from a 100mm F10 achromatic refractor and an ASI290MM camera with an Ha filter. I used FireCapture to acquire the data with a defocused flat frame. No gamma is used. I target anywhere from 65% to 72% histogram fill. That's it! The processing is fast and simple. I have a few presets that I use, but they are all defaults in Photoshop. A lot of the numbers I use for parameters are based on image scale, so keep that in mind and experiment with your own values. 
The only preset I use that is not a default is my coloring scheme. I color with levels in Photoshop, and my values are Red: 1.6, Green 0.8, Blue 0.2 (these are mid-point values). Processing Tutorial Video (18 minutes): https://youtu.be/RJvJEoVS0oU RAW (.TIF) files available here to practice on (the same images you will see below as RAW TIFs): https://drive.google.com/open?id=1zjeoux7YPZpGjlRGtX6fH7CH2PhB-dzv Video for Acquisition, Focus, Flat Calibration and Exposure (20 minutes): (Please let me know if any links do not work) ++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++++++++++++++++++ Results from today using this work flow method. Colored: B&W: SSM data (sampled during 1.5~2 arc-second seeing conditions): Equipment for today: 100mm F10 Frac (Omni XLT 120mm F8.3 masked to 4") Baader Red CCD-IR Block Filter (ERF) PST etalon + BF10mm ASI290MM SSM (for fun, no automation) Very best,
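The colouring step can be sketched outside Photoshop too. Assuming Photoshop's levels mid-point behaves as a per-channel gamma of the form out = in ** (1/gamma) on normalised data (a common formulation, treated here as an assumption rather than a documented equivalence), the Red 1.6 / Green 0.8 / Blue 0.2 scheme looks like:

```python
import numpy as np

# Hedged sketch of per-channel "mid-point" (gamma) colouring of a
# mono solar image. The out = in ** (1/gamma) mapping is an assumed
# model of the levels mid-point slider; gammas (1.6, 0.8, 0.2) push
# mid-tones toward the warm orange/red solar palette.
def colorize(mono, gammas=(1.6, 0.8, 0.2)):
    """mono: 2-D array in [0, 1]; returns an H x W x 3 RGB array."""
    mono = np.clip(mono, 0.0, 1.0)
    return np.stack([mono ** (1.0 / g) for g in gammas], axis=-1)

rgb = colorize(np.array([[0.25, 1.0]]))  # mid-grey pixel turns reddish
```

Because the mapping only bends mid-tones, pure black stays black and pure white stays white, which is why this style of colouring preserves prominence and surface contrast.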
  10. This is maybe my 3rd attempt at a galaxy and I am trying to figure out the best way of doing it, since I live in a red zone of the London suburbs. I took this over 2 nights (well, 1.5 really, as my guiding wasn't working and the plate solver wouldn't comply after a meridian flip...) with my 8" EdgeHD SCT with 0.7x reducer and Atik 490X. Around 20 luminance subs at 10 mins each (1x1) and around 15 RGB subs at 5 mins each, but binned 2x2. I use Astrodon filters but also have an LP filter permanently in my imaging train. Question: should I only use RGB and create a synthetic L channel, given the LP, or continue trying with the actual luminance? Gradients are horrible with luminance, but RGB doesn't have as much detail (only the red filter seems to be more sensitive). My stars are all over the place (colours pop out everywhere, in the wrong way); how can I control this better? Also, as I wrote, the red filter seems to have much more detail than the rest, and when I add all the channels in PS the red just overpowers everything (and in general, how can I keep the star colours as they are and not have the red and blue go crazy? I am not sure of the name for it, but it looks like chromatic aberration on steroids). Any other general tips would be great... Thanks in advance. GFA PS: I cheated with the core: I just changed the temperature to make it look a bit more glowy; for some reason I barely had any yellow colour in the data... I will post stacked images if of interest.
  11. This is a 25 minute stack I stacked in DSS, but I can't get anything out of it. I posted both the 32-bit TIF and the 16-bit TIF... Any pointers would be great; I have trouble with processing. Autosave001.tif long exposure.TIF
  12. Hi all, I just wanted to share a new GIMP 2.9 beta for Mac. It's a bit of work to google it (it's not in the first three hits), so I added a link. So far, I have not hit any bugs. The "big thing" is that you can now work with up to 64-bit floating-point values for each colour channel of a pixel. You can get it here: https://www.dropbox.com/s/3xvttsjv00p0n8i/McGimp-2.9-cc.app.zip?dl=0 I have scanned the file with an up-to-date virus scanner: no hits. However, everything free comes with no guarantee.
  13. Hi, hoping for some help here... I'm trying to learn some basic processing in PixInsight, including the preprocessing (calibration, registering, stacking, etc.). I'm using the only data I have, from my very first lights of M13, calibrated with a superbias, no darks or flats yet. When doing it in DSS I get 7 images stacked and the stacked file looks OK. Switching to PixInsight, it goes horribly wrong after stacking, and the stacked picture looks like the attached. I have done the whole process in two different ways but get the same basic result: 1. Doing all the steps manually: calibration, debayering, SubframeSelector, StarAlignment, stacking. 2. Doing everything with the BatchPreprocessing script. I get the same result; it's like the subs aren't properly aligned, even though the process has eliminated a LOT of bad frames (from 26 to 5). Link to a zip with the 5 registered lights: https://www.dropbox.com/s/sj2ytaa00u5qlap/GREAT CLUSTER_LIGHT_45s_400iso.zip?dl=0 Hope someone can give me some pointers; none of the tutorials I have been following explains why this can happen. Thanks in advance, stargazers!
  14. Just getting to grips with processing after DSS. Can't afford PS so please resist the usual "PS is better than GIMP" comments, I already know; it doesn't make PS any more affordable. Looking for a good, easy to understand tutorial on using gimp to get the best image from the DSS tiff file starting point. Have found a few but seem to skip steps or assume some knowledge...really am looking for an idiot's guide...thanks!
  15. Fans of huge mosaics and long time-lapses might be interested in ImPP (Image Post-Processor), which I wrote to automate sharpening of stacks (no more clicking 100+ times in Registax to load and apply a wavelet preset). The tool performs Lucy-Richardson deconvolution, histogram stretch, gamma correction and final blurring/sharpening via unsharp masking, in that order (all steps are optional). I've posted a more detailed description here: http://solarchat.natca.net/viewtopic.php?f=4&t=13927 ImPP is free, open source and licensed under the GNU GPL v3 (or later). Download the Windows version (including a simple graphical interface) and source code from: http://stargazerslounge.com/blog/1400/entry-1779-impp-image-post-processor/
  16. NOTE: This program has been superseded by ImPPG, which includes all the below functionality in a full-fledged GUI. Image Post-Processor performs Lucy-Richardson deconvolution, histogram stretch, gamma correction and unsharp masking. Both a command-line tool and a graphical user interface (GUI) wrapper for processing of multiple files are available. Discussion threads: http://stargazerslounge.com/topic/230841-impp-%E2%80%93-batch-lucy-richardson-deconvolution-and-more-of-stacks/ http://solarchat.natca.net/viewtopic.php?f=4&t=13927 Building from source code (C++) requires the Boost and wxWidgets (GUI only) libraries and is possible on multiple platforms. Files: impp.zip (Windows command-line program, GUI and source code), impp-src.zip (source code of the command-line tool), impp-gui-src.zip (source code of the GUI). If you are not sure what you need, download the latest impp.zip, unpack it and run impp-gui.exe. Version history:
      impp 0.3 (2014-11-30): better performance for large L-R "sigma" thanks to Young & van Vliet recursive convolution (Y&vV can also be forced by "--conv yvv")
      impp-gui 0.1.2 (2014-11-30): fixed the output directory sometimes having the deepest sub-folder appearing twice
      impp 0.2 (2014-11-27): L-R deconvolution performance improved by 24%-40%, depending on the L-R "sigma" and number of threads
      impp 0.1.1 (2014-11-26): measuring of L-R deconvolution time, slightly improved performance, added MS C++ makefile; fixed code that would crash with OpenMP under some compilers
      impp-gui 0.1.1 (2014-11-26): added MS C++ makefile; fixed setting of the output file name's extension
      impp 0.1 (2014-11-21)
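For anyone curious what the Lucy-Richardson step actually does, here is the textbook update rule in Python. This is a hedged sketch of the algorithm only, not ImPP's implementation (which adds optimisations like the Young & van Vliet recursive convolution); the flat starting estimate and iteration count are illustrative choices.

```python
import numpy as np
from scipy.signal import fftconvolve

# Textbook Lucy-Richardson iteration: repeatedly multiply the current
# estimate by the back-projected ratio of the data to the re-blurred
# estimate. Not ImPP's actual code; a sketch of the algorithm.
def lucy_richardson(image, psf, iterations=10):
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: a point source blurred by a 3x3 box PSF.
psf = np.ones((3, 3)) / 9.0
truth = np.zeros((15, 15))
truth[7, 7] = 1.0
blurred = fftconvolve(truth, psf, mode="same")
restored = lucy_richardson(blurred, psf, iterations=10)
```

The multiplicative update is what makes L-R flux-conserving and non-negative, and also why it amplifies noise if run for too many iterations; real tools stop early or regularise.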
  17. Hi all, I'm playing around with layers in GIMP as I had no idea how to use them two days ago, but am now fairly comfortable loading a couple of images, aligning them, adding a layer mask and using the paintbrush tool to allow one to come through to the other. The plan is to have a go with some long and some short exposures of Orion and combine the two final stacked images to get the nebula and the core. I need to work on getting the two stacked images' "brightness" levels more similar, as the ones I am playing with are a bit different, and when using the paintbrush tool to bring the rear image "out" the blend isn't very subtle. Am I working on the right principle here, or is there a better way to merge or blend in GIMP? I've tried using those controls, but they don't seem to change anything, so I suspect I need to do more work in that area. If anyone has any favourite tutorials that they use and are willing to share, that would be great. Cheers!
  18. Nice StarTools tutorial if it's of use to anyone thinking of playing around with StarTools: http://astro.ecuadors.net/processing-a-noisy-dslr-image-stack-with-startools/
  19. Hi, as there are many talented imagers here, and I'm struggling with the unfamiliar techniques involved, it was suggested I post a thread in the hope that someone can help with the issues I seem to be at a loss on. Apart from inexpensive lenses, the coma and CA, I seem to be getting not only vignetting in the corners but a kind of almost 'dirty' area in the middle: like vignetting but with a strange colour to it, as if the whole middle area is not quite stacking right, or perhaps the problem starts during capture. Also, processing so many stars is overwhelming either the software or me, or both; I'm fairly sure the data is saveable, but the processing I've tried so far just seems to make it messier. Apologies in advance for the bad explanations and babbling; please be patient with the text in this post. This is the Double Cluster area, with a big reddish blotch like a severe vignette: it looks to me like it stacked badly, and the centre looks thin and flattened. This one looks like gravelly snow trampled by muddy footprints, with ill-defined dark lanes or something, again with the reddish murky look where the 'tracks' are. Sorry for the weird analogies, but the technical terms escape me. Some more vignette-type issues: a reddish muddy look and the appearance of uneven stacking, also in the middle. I tried flats, but I'm not sure they were done right, and the images with flats don't seem any better. These aren't the only problems, but I'm not really certain what I'm doing right or wrong, to be honest; I'm hoping you folks can shed some light on it. Any help appreciated, and many thanks for looking. Regards Aenima
  20. Does anyone use PixInsight for processing lunar images? If so, do you have any good pointers or tutorials you can share? I am currently working through http://pixinsight.com/examples/deconvolution/moon/en.html and getting some good results (in my opinion, from my very limited experience at lunar processing) compared to what I could do in Registax. I am still at the trial-and-error stage, but I think I am getting the settings I need nailed down. I just need to switch to a faster computer: 700 Regularized Van Cittert iterations take a long time to run. I may post some examples later if anyone is interested.
  21. Hi, I'm thinking of trying to get my hands on something a little better than GIMP for my astro image processing (something that supports 16-bit images), as Adobe Photoshop is basically too expensive for me. I expect this has been discussed many times, but does anyone use CyberLink PhotoDirector 9 (£47.99) or Serif Affinity Photo for Windows (£48.99) for their post-DSS image processing? The reviews of the above programs on the internet (e.g. TechRadar) are very good indeed. I look forward to any recommendations or advice from those that might know, please. Regards Steve
  22. Hey all, I took some really rough images of M42 the other night: alignment and focus were by eye, heavily light polluted, no calibration frames, and I have a dusty corrector plate. However, this was the first time I have shot a deep-sky object, and for how rough it was I was pleased; see attached. I watched a lot of DSS tutorial videos last night and decided to stack the 16 images I have, just to see what I'd get. Now, I am under no delusions as to the expected quality of the final image; however, stacking made all the stars vanish and the overall quality of the image was less than that of a single frame. Any suggestions? I do not have an example image saved, sorry. Thank you.
  23. Finally activated my trial of PixInsight yesterday. I think I have been a bit scared of the software because of its reputation, but with a few hours watching Harry's AstroShed beginner's tutorials I was quite impressed with the difference it made to a few subs from the other night: just 16x420s subs (yep, cloud cut the session short) of M51 and no darks, so I wasn't expecting miracles. To be honest, after getting up this morning I can't really remember how I did any of it, but I will work through the same tutorials again and again until it all makes sense. It still needs a bit more colour, and I'm not giving anyone a run for their money, but the difference is... well, astronomical. Can't wait to try it on more and better subs, or maybe reprocess some old images while the trial lasts. Interestingly, I now think the justification I need is probably not about spending the money on PixInsight, but whether I can justify NOT buying it if I want to carry on with imaging with any seriousness. So, yeah, PixInsight does seem to make a massive difference and I suspect it is well worth its price. Anyone who has been avoiding it, like me, should just give the trial a go, exercise some patience, and work through Harry's videos: excellent guides. So, the first image is 7x420s stacked in DSS and processed in Photoshop, and is a little purple... The second is the same image registered/stacked and processed in PixInsight, using Harry's guides... I think the difference is significant, and I will be interested to see how things improve further once I master the beginner processes and move on to the harder stuff...
  24. I had a successful session the night before last: my first with guiding, using the ASI120MM on my ST80, atop my ED80, through which I imaged with my unmodified Canon 1100D and focal reducer. I was pleased with this image using 13 x 300-second lights with darks and flat frames: Then last night I thought I would try M51, and things have not gone so well. I only took 6 x 400-second lights and 3 darks, and used the previous flats as it was still ISO 800. This was my first image out of DSS, with a star detection of 50 stars: There appears to be a distinct lack of colour. I did move the right-hand RGB/K pointers for blue and green to the left a bit, as I am sure someone once said to make the peaks a bit fatter, but I now think that was a BAD idea. So I tried again without flats, and the colour came back(?): Very noisy, but there is the colour! And I tried a third time, this time including flats but set at "average" not "median" (star detection 47 stars), and got another dull image: This is my image from last year, which was an unguided attempt using 30 x 70-second lights: Now I guess my questions are as follows: 1. Please confirm I should take a LOT more lights? 2. Should I reduce the exposure down a lot from 400 seconds to reduce noise on a fairly moonlit night? 3. Where did the colour go? 4. Why did the image look better when I didn't use flats? 5. Any other advice please! PS: already saving up for a week's training at Olly's place, hopefully next year!
  25. I'm not experienced with LRGB imaging, so I thought I'd give it a go on M81. However, when I combine the 4 individually processed integrations I end up with horrible colour hues across the image; they're all aligned and whatnot. Am I running into the issues of light pollution (inside the M25), which I can only remove with aggressive DBE application? Individual files attached.