Search the Community
Showing results for tags 'processing'.
-
Hi all, I got to briefly test out my new modded EOS 1300D last night and had a quick question about the stacked output. I'm fairly adept at stock DSLR processing and dealing with the usual green image. Sometimes I align the RGB levels in Photoshop, and other times I use PixInsight to deal with it when I have access to it. If I align the channels in Photoshop, will that negate any benefit from using the modded cam to record more red? If so, would a channel extraction, then a linear fit to the lowest-median channel in PixInsight, be the best starting point? Also, assuming I'm stacking in DSS, should I turn on "Align RGB channels" on the stacking options tab? I haven't tried Siril or Sequator yet, but I assume they will also output a red stacked image. I have access to most of the software people use: Lightroom, PixInsight (sometimes), Photoshop, RawTherapee, Siril, DSS, etc. So my query, I guess, is threefold: - Will red channel data be lost if I align the RGB channels before I start processing the usual way? - Would a channel extraction and linear fit, then channel combination, be a good start in PixInsight? - Would turning on the "Align RGB channels" option in DSS's stacking preferences make using a modded camera moot? I know I can try all these myself, but I have so little free time to work on my AP images, I figured asking you fine folk would yield better answers. In case it's needed, image info: Canon EOS 1300D modded; image is cropped heavily; ISO 800; 100 x 50s lights; Nikon 50mm f/2.8, stopped down to f/3.5; Star Adventurer 2i; B6 location, Greater London edge. I've also attached a VERY VERY rough processing result I got last night (different crop), just to roughly see what I got. I know that the Heart and Soul really require at least 3-4 hours, which I will do next clear sky, using my 150mm lens. Thank you all in advance
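On the PixInsight route: the extract → linear fit → recombine idea can be sketched in numpy terms. This is only an illustrative approximation of what LinearFit does (a least-squares linear rescaling of each channel to a reference), not PixInsight's actual code, and the function name is my own:

```python
import numpy as np

def linear_fit_channels(rgb):
    """Fit each channel linearly to the channel with the lowest median,
    mimicking the extract -> LinearFit -> recombine workflow."""
    channels = [rgb[..., i].astype(np.float64) for i in range(3)]
    ref = min(range(3), key=lambda i: np.median(channels[i]))
    out = np.empty_like(rgb, dtype=np.float64)
    for i, ch in enumerate(channels):
        if i == ref:
            out[..., i] = ch
        else:
            # least-squares fit: ref ~= a*ch + b, then rescale the channel
            a, b = np.polyfit(ch.ravel(), channels[ref].ravel(), 1)
            out[..., i] = a * ch + b
    return out
```

Because the fit is linear and applied while the data is still linear, the relative red signal the mod records is preserved; only the channel pedestals and scales are equalised.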
-
- astromodified
- modded dslr
-
(and 1 more)
Tagged with:
-
NOTE: This program has been superseded by ImPPG, which includes all the below functionality in a full-fledged GUI.

Image Post-Processor performs Lucy-Richardson deconvolution, histogram stretch, gamma correction and unsharp masking. Both a command-line tool and a graphical user interface (GUI) wrapper for processing multiple files are available.

Discussion threads:
http://stargazerslounge.com/topic/230841-impp-%E2%80%93-batch-lucy-richardson-deconvolution-and-more-of-stacks/
http://solarchat.natca.net/viewtopic.php?f=4&t=13927

Building from source code (C++) requires the Boost and wxWidgets (for the GUI) libraries and is possible on multiple platforms.

Files:
- impp.zip: Windows command-line program, GUI and source code
- impp-src.zip: source code of the command-line tool
- impp-gui-src.zip: source code of the GUI

If you are not sure what you need: download the latest impp.zip, unpack it and run impp-gui.exe.

impp: version 0.3 (2014-11-30)
New features:
- better performance for large L-R "sigma" thanks to Young & van Vliet recursive convolution (Y&vV can also be forced by "--conv yvv")

impp-gui: version 0.1.2 (2014-11-30)
Bug fixes:
- fixed output directory sometimes having the deepest sub-folder appearing twice
Files: impp.zip, impp-src.zip, impp-gui-src.zip

impp: version 0.2 (2014-11-27)
New features:
- L-R deconvolution performance improved by 24%-40%, depending on L-R "sigma" and number of threads
Files: impp.zip, impp-src.zip

impp: version 0.1.1 (2014-11-26)
New features:
- measuring time of L-R deconvolution
- slightly improved performance
- added MS C++ makefile
Bug fixes:
- fixed code that would crash with OpenMP with some compilers

impp-gui: version 0.1.1 (2014-11-26)
New features:
- added MS C++ makefile
Bug fixes:
- fixed setting of output file name's extension
Files: impp.zip, impp-src.zip, impp-gui-src.zip

Version 0.1 (2014-11-21)
Files: impp.zip, impp-src.zip, impp-gui-src.zip
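For anyone curious what the core L-R routine does, here is a minimal numpy sketch of Richardson-Lucy deconvolution using circular (FFT) convolution. It illustrates the algorithm only; it is not impp's actual implementation, which also offers the Young & van Vliet recursive convolution path:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian PSF centred in an array of the given shape."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def richardson_lucy(image, psf, iterations=30):
    """Plain Richardson-Lucy iteration: estimate *= H^T(image / H(estimate)),
    with H implemented as circular convolution via the FFT."""
    otf = np.fft.rfft2(np.fft.ifftshift(psf))  # psf must match image shape
    estimate = np.full(image.shape, image.mean())
    eps = 1e-12  # guard against division by zero
    for _ in range(iterations):
        blurred = np.fft.irfft2(np.fft.rfft2(estimate) * otf, s=image.shape)
        ratio = image / (blurred + eps)
        # conj(otf) applies the adjoint (correlation) operator H^T
        estimate *= np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(otf), s=image.shape)
    return estimate
```

The iteration preserves total flux and positivity, which is why L-R is popular for astronomical sharpening; the "sigma" parameter mentioned in the changelog corresponds to the width of the Gaussian PSF assumed here.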
- 3 comments
-
- lucy-richardson
- sharpening
-
(and 2 more)
Tagged with:
-
StarTools 1.8 is currently under development. Ivo is currently working on a "Narrowband Accent" module for duo-band users, and the initial image Ivo has posted certainly looks interesting: https://forum.startools.org/viewtopic.php?f=4&t=2225&start=10 Ivo is also working on a new deconvolution algorithm, so some good things for StarTools users to look forward to.
-
Gear Used: - Nikon D3300 - Celestron Astromaster 130EQ - T-ring + Barlow x2. I will explain this as briefly as possible. Because I cannot photograph at prime focus with my telescope and camera, I have to use the x2 Barlow. When there is a full moon, it does not fit in the frame; it spills out around the edges, so I think I should make a mosaic. The problem is that I do not have a motorized mount, so alignment is critical. The problem with PIPP is that it does not correct the rotation of the moon, so it does not work for me. I aligned the photos in Photoshop and exported them as TIFF. I then moved them to AutoStakkert! 3 and it stacked without any problem, using 40% of the best frames. Finally, I use RegiStax 6, and when using the wavelets, as seen in the image, it generates an incredible amount of noise. I don't understand what I'm doing wrong. Is it RegiStax, is it AS!3, or is it Photoshop? (Errors are clearly seen in the lower part of the result because I do not have a mount, but the same thing has happened to me in other images where this noise does not appear.) I do not understand and I am desperate, as it is horrible not being able to photograph a full moon decently. I urgently need someone to help me. I have put in all the general details so you can understand; if you need to know more, please let me know. Regards. (Demonstrative image below.)
- 1 reply
-
- processing
- processing problems
-
(and 3 more)
Tagged with:
-
Hey all, I made an acquisition and processing tutorial a while back (3 years ago? Yikes!) and it is fairly dated in terms of what I'm doing these days. I've been asked for a long time to make a new one showing my current workflow: specifically, how I process a single-shot image for both the surface and prominences, and how to process them together to show prominences and the surface at once. I've abandoned doing split images and composites and strictly work from one image using layers. Acquisition does not use gamma at all anymore. Nothing terribly fancy, but it's not exactly intuitive, so hopefully this new video will illustrate most of the fundamentals to get you started. Instead of an hour, this time it's only 18 minutes, in real time from start to finish. I'm sorry for the long "waiting periods" where I'm just waiting for the software to finish its routine; they typically last a minute and a half tops, mostly near the start. The first 4 minutes is literally just stacking & alignment in AS!3. I typically go faster than this, but I wanted to slow down enough to talk through what I'm doing as I do it. Hopefully you can see each action on the screen. I may have made a few mistakes or used a few incorrect terms; forgive me for that, this is not my day job. I really hope it helps folks get more into processing, as it's not difficult or intimidating when you see a simple process with only a few things that are used. The key is good data to begin with and a good exposure value. Today's data came from a 100mm F10 achromatic refractor and an ASI290MM camera with an Ha filter. I used FireCapture to acquire the data with a defocused flat frame. No gamma is used. I target anywhere from 65% to 72% histogram fill. That's it! The processing is fast and simple. I have a few presets that I use, but they are all defaults in Photoshop. A lot of the numbers I use for parameters are based on image scale, so keep that in mind and experiment with your own values.
The only preset I use that is not a default is my coloring scheme. I color with levels in Photoshop, and my values are Red: 1.6, Green 0.8, Blue 0.2 (these are mid-point values). Processing Tutorial Video (18 minutes): https://youtu.be/RJvJEoVS0oU RAW (.TIF) files available here to practice on (the same images you will see below as RAW TIFs): https://drive.google.com/open?id=1zjeoux7YPZpGjlRGtX6fH7CH2PhB-dzv Video for Acquisition, Focus, Flat Calibration and Exposure (20 minutes): (Please let me know if any links do not work) ++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++++++++++++++++++ Results from today using this work flow method. Colored: B&W: SSM data (sampled during 1.5~2 arc-second seeing conditions): Equipment for today: 100mm F10 Frac (Omni XLT 120mm F8.3 masked to 4") Baader Red CCD-IR Block Filter (ERF) PST etalon + BF10mm ASI290MM SSM (for fun, no automation) Very best,
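For readers without Photoshop: the midpoint slider in Levels behaves like a per-channel gamma, so the Red 1.6 / Green 0.8 / Blue 0.2 scheme can be approximated in numpy. This is my own sketch, assuming image values in [0,1] and the usual mapping out = in**(1/midpoint); it is not Photoshop's exact transfer curve:

```python
import numpy as np

def colorize_mono(mono, midpoints=(1.6, 0.8, 0.2)):
    """Turn a mono solar image into RGB by applying a Levels-style
    midpoint (gamma) per channel: out = in ** (1 / midpoint).
    Midpoints > 1 brighten a channel, < 1 darken it."""
    mono = np.clip(mono, 0.0, 1.0)
    return np.stack([mono ** (1.0 / g) for g in midpoints], axis=-1)
```

With these values the red channel is lifted, green left nearly linear, and blue strongly suppressed, giving the familiar orange Ha palette.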
- 23 replies
-
-
-
-
- astrophotography
- ha
- (and 8 more)
-
Here in the United States, most of our children are on some type of remote learning program due to yada yada. While processing my images and helping out the kids, some mumblings of "the color purple would look neat" were heard over my shoulder...
- 3 replies
-
-
-
- color
- processing
-
(and 1 more)
Tagged with:
-
Hi, I would like to share with you an article I wrote on high-resolution solar imaging in different wavelengths. I'm glad that the European Physical Journal (EPJ) Web of Conferences published it. You can read it at: https://www.epj-conferences.org/articles/epjconf/abs/2020/16/epjconf_seaan2020_01002/epjconf_seaan2020_01002.html You can see the different layers of the Sun in high-resolution images taken with different setups. Best regards. Alfred
- 2 replies
-
-
-
-
- solar imaging
- sun spot
-
(and 6 more)
Tagged with:
-
processing 99% Moon Mosaic
Adaaam75 posted a topic in Imaging - Image Processing, Help and Techniques
Hi guys, So on Tuesday night I spent the evening imaging the moon with my Celestron 9.25 SCT and DSLR as shown below, and I'm happy with the images I collated (I only took stills; I did intend to capture video too, but I got carried away and time was getting on). I have approx 350 JPEG images (and the equivalent in raw) and have used Microsoft Image Composite Editor to stitch the frames together without editing them first, but I wonder if I'm doing the images an injustice? I'm not ready to pay for editing software, as I know there are a lot of very good free downloads out there, so I'm asking for recommendations. Should I be processing the frames before/after stitching, and what software would you recommend? Please offer any advice you have. I will post the resulting image once I'm happy with the outcome. Thanks in advance, -
I'm not experienced with LRGB imaging, so I thought I'd give it a go on M81. However, when I combine the 4 individually processed integrations I end up with horrible colour hues across the image - they're all aligned and whatnot. Am I running into issues with light pollution (inside the M25), which I can only remove with aggressive DBE application? Individual files attached.
- 3 replies
-
- pixinsight
- light pollution
-
(and 1 more)
Tagged with:
-
Hey everyone, I was out recently in what felt like the first clear sky in years and got ~109 min of data on M31, minus 76 frames due to a 12mph wind, which left me with 69 min of data (each shot is 45 sec at ISO 200, tracked with a Skywatcher Star Adventurer). As mentioned in the title, I captured all these images in a Bortle 8 location, using an unmodified Canon EOS 400D and the Skywatcher 75ED as the scope (with a flattener). I've attached my edit (warning: it is not great at all + slightly overedited to see what details are even there), and to my surprise it looked very similar to an image of M31 with only 20 min of data which I captured a month earlier (I used DSS and Photoshop for both). Now this may well have something to do with the way I edited it in Photoshop, or a different setting in DSS, or just the fact that 49 more minutes of data doesn't make much of a difference considering I'm in a Bortle 8 location; maybe you guys could help on that. I've attached the link to the original files (in the folder called 18.2.2021) as well as the stacked image from DSS (https://drive.google.com/drive/folders/12NT4TmLCXvTfOXNPE_l8UWPRpgO2VjLe?usp=sharing). I didn't capture any flat frames but have dark and bias frames, all in their corresponding folders in the attached link. It would be greatly appreciated if you guys could see if there is more data in this than I have managed to 'extract' using Photoshop. (If you use different software and try to edit these files, please tell me what you used.) If there isn't, then maybe you guys have some images of M31 (or similar) from very light-polluted skies that you could share here? (If so, it would be great if you could share the full exposure time and gear.) Many thanks!
- 12 replies
-
Hey everyone. I have previously stumbled across this forum when searching for answers to questions, and have finally made an account. Last night I shot the moon for a couple of hours. I took around 10x3-minute videos and captured a little over 80,000 frames. My aim was then to create a lunar mosaic image, but I have never done this before, and my technical ability seems to be adding to the confusion. To give some context, I used an ASI120MCS planetary camera through an 8" Skywatcher Skyliner 200P Dobsonian. I have read that ideally you would use a tracking mount to record sections of the moon one at a time; however, I sadly don't have that luxury. I instead let the moon drift across the field of view, and I'm pretty confident that among the 80,000 frames I have all the pieces of the moon as a whole. What I'm now having issues with is how to break down these AVI files into frames which can then be used to create a mosaic. I need a "for dummies" guide, as that's what I'm feeling like currently. Thank you in advance for any assistance you may be able to provide. :)
-
Hi, I'm working with some recent data from a new(ish) camera, and my Ha data is pretty good, but the OIII has these strange artifacts around the borders. If I use any kind of local normalization then it pretty much ruins most of the image, as you can see below. First the Ha: OIII with local normalization: OIII local normalization map, see how it fits with the artifacts: OIII without any local normalization: These edge artifacts - could they be related to a bad filter, or are they a processing artifact? Has anyone seen anything similar? I am using Astro Pixel Processor for calibration and stacking. I'm kind of stumped on this one, and I am going to yank out the OIII filter this weekend to have a looksie.
- 5 replies
-
- processing
- filter
-
(and 2 more)
Tagged with:
-
Hi everyone, I've been spending some time processing over a cloudy Christmas and realised the thing I find most daunting, difficult and annoying is creating star masks. So my question is: is there a way of creating star masks (in PixInsight preferably, but open to other ways!) which is (a) always accurate, (b) relatively quick and (c) repeatable? I've worked through the LVA tutorials and looked at David Ault's technique. I also have the Bracken book to go through. The main techniques seem to be: 1. Stretch the extracted lightness, clip the low end, bring down the highlights, then use the StarMask process - I get very inconsistent results with this approach. 2. Similar to the above, but use MMT/MLT to remove nebulosity to create a support image, then use StarMask at different star scales to capture all the stars, and use PixelMath to put them all onto one image - very time consuming, I find, with lots of noise-setting fiddling. I am very interested to see how people go about this and whether there are any neat tips and tricks to help the process! Thanks!
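For what it's worth, the core of technique 1 - threshold above the background, then grow the mask - is easy to sketch outside PI. This is a toy numpy version of the idea (my own, not the StarMask process, which does multiscale structure detection on top of this):

```python
import numpy as np

def star_mask(img, k=5.0, grow=2):
    """Toy star mask: flag pixels more than k robust sigmas above the
    background, then grow the mask a few pixels outward."""
    med = np.median(img)
    mad = np.median(np.abs(img - med)) * 1.4826  # robust sigma estimate
    mask = img > med + k * mad
    for _ in range(grow):  # crude binary dilation via 4-neighbour shifts
        m = mask.copy()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m |= np.roll(mask, (dy, dx), axis=(0, 1))
        mask = m
    return mask.astype(np.float64)
```

The two knobs (threshold sigma and grow radius) map loosely onto the clipping and growth settings people fiddle with in PI; repeatability comes from fixing them rather than eyeballing each image.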
-
Hey all, I took some really rough images of M42 the other night: alignment and focus were by eye, heavily light polluted, no calibration frames, and I have a dusty corrector plate. However, this was the first time I have shot a deep sky object, and for how rough it was I was pleased, see attached. I watched a lot of DSS tutorial videos last night and decided to stack the 16 images I have just to see what I get. Now, I am under no delusions as to the expected quality of the final image; however, stacking made all the stars vanish and the overall quality of the image was less than that of a single frame. Any suggestions? I do not have an example image saved, sorry. Thank you.
-
This is maybe my 3rd attempt at a galaxy and I am trying to figure out the best way of doing it, since I live in a red zone of the London suburbs. I took this over 2 nights (well, 1.5 really, as my guiding wasn't working and the plate solver wouldn't comply after the meridian flip...) with my 8" EdgeHD SCT with 0.7x reducer and Atik 490X. Around 20 luminance subs at 10 mins each (1x1) and around 15 RGB subs at 5 mins each, but binned 2x2. I use Astrodon filters but also have an LP filter permanently in my imaging train. Question: should I only use RGB and create a synthetic L channel, given the LP, or continue trying with the actual luminance? Gradients are horrible with luminance, but RGB doesn't have as much detail (only the red filter seems to be more sensitive). My stars are all over the place (colours pop out everywhere, in the wrong way); how can I control this better? Also, as I wrote, the red filter seems to have much more detail than the rest, and when I add all the channels in PS, the red just overpowers everything (and in general, how can I keep the star colours as they are and not have the red and blue go crazy - I am not sure of the name for it, but it looks like chromatic aberration on steroids). Any other general tips would be great... Thanks in advance. GFA PS: I cheated with the core: I just changed the temperature to make it look a bit more glowy; for some reason, I barely had any yellow colour in the data... I will post stacked images, if of interest.
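On the synthetic luminance question: a synthetic L is just a weighted average of the registered RGB stacks, and down-weighting the channel that dominates (here red) is one way to tame it. A hedged numpy sketch of the idea, with function name and weights of my own choosing:

```python
import numpy as np

def synthetic_luminance(r, g, b, weights=(1.0, 1.0, 1.0)):
    """Build a synthetic L from registered RGB stacks as a weighted
    average. Equal weights give a plain mean; reduce the weight of a
    noisier or LP-dominated channel to keep it from overpowering L."""
    w = np.asarray(weights, dtype=np.float64)
    return (w[0] * r + w[1] * g + w[2] * b) / w.sum()
```

Since the 2x2-binned RGB averages three channels, a synthetic L can be surprisingly competitive with a gradient-ridden real luminance, at the cost of the 1x1 resolution.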
- 10 replies
-
Not quite sure what to put in the title here! I've been thinking about my possible future approaches to deep-sky imaging, especially looking at Emil Kraaikamp's approach of taking thousands of 1 sec images. The datasets this will produce are obviously huge - a single frame from my QHY163M is ~31.2MB in size. Even at 5 sec subs, the volumes of data are 22+GB/hr. Now, that's fine in theory to process (though it'll take a *long* time to chew through them, even on an 8-core i7!) - sufficient disk space is practical, and I'd scrub the unreduced files once happy anyhow. However, long-term storage is an issue. I couldn't keep all the subs here; even with poor UK weather, you're potentially looking at tens of terabytes of data in the relatively short term, worse when backing it all up. Is there a feasible approach to keeping only stacked, reduced data (which is obviously much smaller) whereby you can still add to the data at a later point in time? I was thinking along the lines of: take x hrs - save reduced, stacked data as a single linear frame. Later on (possibly years later!), take another y hrs of images - somehow combine with the x hrs before? (And repeat as needed.) Could this be achieved while still allowing relevant pixel rejection, weightings, noise reduction, etc.? Anyone have a PI workflow that allows this? I guess the same approach applies to standard CCD data, though the data savings are orders of magnitude less... Thanks!
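If each saved stack is a mean combine, merging an old stack with a new one reduces to a weighted average, provided you store the sub count (or total exposure) alongside each linear frame. A sketch of the arithmetic (my own; note that pixel rejection can only ever act within each session's own integration, so rejection quality is frozen at save time):

```python
import numpy as np

def combine_stacks(stack_a, n_a, stack_b, n_b):
    """Merge two mean-combined linear stacks, weighting by sub count.
    Assumes similar per-sub noise in both sessions; if not, weight by
    inverse variance instead of frame count."""
    total = n_a + n_b
    merged = (stack_a * n_a + stack_b * n_b) / total
    return merged, total
```

Repeating this years later just means carrying the running total forward, so the storage cost stays at one frame plus one integer per target.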
- 5 replies
-
- lucky imaging
- processing
-
(and 1 more)
Tagged with:
-
Hi, I'm thinking of trying to get my hands on something a little better than GIMP for my astro image processing (something that supports 16-bit images), and basically Adobe Photoshop is too expensive for me. I expect this has been discussed many times, but does anyone use CyberLink PhotoDirector 9 (£47.99) or Serif Affinity Photo for Windows (£48.99) for their post-DSS image processing? The reviews of the above programs on the internet (e.g. TechRadar) are very good indeed. I look forward to any recommendations or advice from those that might know, please. Regards Steve
- 31 replies
-
- software
- processing
-
(and 2 more)
Tagged with:
-
Can someone please give me some guidance on processing a luminance stack in PixInsight? I've looked at a number of web resources and Keller's book, but I'm still a bit confused about what I might expect to achieve - what should the end result look like? Apologies for this being M101 again, and for the quality of the image, but it was high in the sky and convenient for a short night with non-optimal seeing. The image comprises 19x300s subs (you know, I don't know why I didn't take 20, with my OCD and all that) which I've calibrated with 50 each of darks, bias and flats, then registered and stacked in PixInsight. I've used STF to pull the image from the background and then saved it as a JPG so you can view it. It's taken with an Atik 428EX mono on a SW80ED DS-Pro + NEQ6-Pro using SGP and PHD - focused as best I could, recognising the seeing conditions. I think I am getting the hang of processing OSC RGB images in PI, but I'm not sure of the best approach with a luminance image and what I might expect to achieve. I've included the FIT and XISF files if someone wouldn't mind having a go and showing me what is achievable and what I should typically aim for - and how! I am keen to get to grips with the mechanics before those long dark nights set in - whenever that might be! As ever, many thanks in anticipation of any help/guidance/advice. Adrian P.S. ImageAnalysis gives me the following stats on the image: Is this good, bad or indifferent? What is 'good' and what is 'bad' or 'in need of improvement'? Thank you M101-luminance.fit M101-Luminance.xisf
- 20 replies
-
-
-
- pixinsight
- luminance
-
(and 1 more)
Tagged with:
-
Hi, hoping for some help here. I'm trying to learn some basic processing in PixInsight, including the preprocessing (calibration, registering, stacking, etc.). I'm using the only data I have, from my very first lights of M13, calibrated with a superbias - no darks and flats yet. When doing it in DSS I get 7 images stacked and the stacked file looks OK. Switching to PixInsight, it goes horribly wrong after stacking, and the stacked picture looks like the attached. I have done the whole process in two different ways, but get the same basic result: 1. Doing all the steps manually: calibration, debayering, SubframeSelector, StarAlignment, stacking. 2. Doing everything with the BatchPreprocessing script. I get the same result; it's like the subs aren't properly aligned, even though the process has eliminated A LOT of bad frames (from 26 to 5). Link to a zip with the 5 registered lights: https://www.dropbox.com/s/sj2ytaa00u5qlap/GREAT CLUSTER_LIGHT_45s_400iso.zip?dl=0 Hope someone can give me some pointers; none of the tutorials I have been following explains why this can happen. Thanks in advance, stargazers!
- 4 replies
-
- pixinsight
- stacking
-
(and 1 more)
Tagged with:
-
Finally activated my trial of PixInsight yesterday - I think I have been a bit scared of the software because of its reputation - but, with a few hours watching Harry's AstroShed beginner's tutorials, I was quite impressed with the difference it made to a few subs from the other night - just 16x420s subs (yep, cloud cut the session short) of M51 and no darks, so I wasn't expecting miracles and, to be honest, after getting up this morning, I can't really remember how I did any of it, but I will work through the same tutorials again and again until it all makes sense. It still needs a bit more colour, and I'm not giving anyone a run for their money, but the difference is... well, astronomical. Can't wait to try it on more and better subs... or maybe reprocessing some old images while the trial lasts. Interestingly, now, I think the justification I need is probably not about spending the money on PixInsight, but whether I can justify NOT buying it if I want to carry on with imaging with any seriousness. So, yeah, PixInsight does seem to make a massive difference and I suspect it is well worth its price. Anyone who has been avoiding it, like me, should just give the trial a go, exercise some patience, and work through Harry's videos - excellent guides. So, the first image is 7x420s stacked in DSS and processed in Photoshop, and is a little purple... The second is the same image registered/stacked and processed in PixInsight, using Harry's guides... I think the difference is significant, and I will be interested to see how things improve further once I master the beginner processes and move on to the harder stuff...
-
A nice StarTools tutorial, if it's of use to anyone thinking of playing around with StarTools: http://astro.ecuadors.net/processing-a-noisy-dslr-image-stack-with-startools/
-
Yeah, back again (sorry). But having sorted out my problems with my star halos caused by the settings in DSS, I have now had a go at combining two images to keep the core from burning out. Not going to win any awards but I am quite happy with it as my "first" proper Orion shot. I'm proud enough of it I sent a copy to my mum! Essentially: 27 of 29 60s ISO1600 shots stacked for the Nebula 30 of 30 20s ISO400 shots stacked for the Core Stacked in DSS with minimum processing to match the output TIF files. TIF Files opened as layers in GIMP, aligned and a layer mask attached to the top Nebula Layer. Used the paintbrush to blend the two layers and saved as one after a bit more processing in GIMP. I think the ISO1600 shots over-cooked it a bit, so there is some noise in the darker areas, but I'll just go back to ISO800 for the next ones. First image shows the output from the Nebula stack, complete with burnt out core And then the second image after blending the core As I said, it won't win any awards, but I am happy that I am making some progress. Next to try and eke out a little more for the subs and try and get them up to 90s and will keep trying to get more detail out of the images. So to everyone starting out like me.....keep at it, it is amazing how quickly you can progress in your knowledge of this astrophotography lark, so don't give up. Obviously the more you learn and the better you get, the more it is going to cost you, but the nights of banging your head against a brick wall of clouds, rain, bad subs, poor processing and general frustration are worth it when you finally get a picture that you like. And thank you for everyone for humouring and helping me this far. I'm bound to be back again and again for more questions but, hopefully, I'll be able to start helping others as well! Clear Skies!
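The layer-mask blend described above can also be expressed programmatically: keep the long exposure everywhere except where it approaches saturation, and feather in the short-exposure core there. A numpy sketch of the idea, with hypothetical threshold values; it assumes the two stacks are already aligned and brightness-matched, as in the GIMP workflow:

```python
import numpy as np

def blend_core(long_exp, short_exp, threshold=0.8, soft=0.1):
    """HDR-style core blend: weight ramps from 0 (keep long exposure)
    to 1 (use short exposure) as the long exposure nears saturation,
    over a soft transition zone to avoid a hard seam."""
    w = np.clip((long_exp - threshold) / soft, 0.0, 1.0)
    return long_exp * (1.0 - w) + short_exp * w
```

The soft ramp plays the same role as a fuzzy paintbrush on the layer mask: the narrower it is, the more visible the seam between nebula and core.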
-
Hi All I'm playing around with layers in GIMP as I had no idea how to use them two days ago, but am now fairly comfortable loading a couple of images, aligning them, adding a layer mask and using the paintbrush tool to allow one to come through to the other. The plan is to have a go with some long and some short images of Orion and combine the two final stacked images to get the nebula and the core. I need to work on getting the two stacked images' "brightness" levels more similar as the ones I am playing with are a bit different and when using the paintbrush tool to bring the rear image "out" the blend isn't very subtle. Am I working on the right principle here, or is there a better way to merge or blend in GIMP? I've tried using those controls, but they don't seem to change anything, so I suspect I need to so more work in that area. If anyone has any favourite tutorials that they use and are willing to share that would be great. Cheers!
-
Just getting to grips with processing after DSS. Can't afford PS so please resist the usual "PS is better than GIMP" comments, I already know; it doesn't make PS any more affordable. Looking for a good, easy to understand tutorial on using gimp to get the best image from the DSS tiff file starting point. Have found a few but seem to skip steps or assume some knowledge...really am looking for an idiot's guide...thanks!