GIMP Processing Tutorial for beginners


Chris


I think GIMP is great for people starting out with image processing, as it's absolutely free and simple to use. I'm by no means a master at processing images, but I think I know enough to give beginners a leg up. Here's a beginner-friendly tutorial I put together; I could have done a better job on the core, but the key steps are there:

If you want to follow along, I've placed a master stacked TIFF in the video description on YouTube.

 

 

 

 

 


Good idea.  It's expensive enough as it is getting started in imaging.  Being able to save a bit of cash when starting out by using GIMP instead of the paid-for alternatives has got to be helpful.

James


Hope you don't mind some criticism of the process, but I have to comment. I like the video as content, and your videos in general, but as a tutorial this one sets some bad examples (IMO). I noticed a few glaring issues right away that, in my opinion, should not be taught to beginners. Don't get me wrong, it's easy to understand and better than processing blind with no instructions at all, but the better alternatives are no more difficult to do! These habits took me unnecessarily long to unlearn when I finally jumped to Siril (also free!) for the linear part of the processing. The thing you mentioned in the video about processing always turning out differently is born of these bad practices, and it goes away with a few easy methods done early on.

The problem with processing entirely in "normal" processing software is that you need to stretch the image first to see anything, when ideally a lot of the processing is done while the data is still linear. In Siril you can do the first steps - crop out stacking artifacts, background extraction (gradient removal), colour balance (important in the linear phase, to keep as much precision as possible) - and then stretch the image. From that point onwards, processing in GIMP or any other "normal" software is a piece of cake. It's hard to do the first steps wrong in Siril, but very easy in GIMP, which is why you get different results every time. Playing with levels and curves early on is also not a good idea; that should be done on the almost-finished image as a final touch-up.

First of all, this example only sort of works because you have a supernaturally well-visible picture straight out of the stacker! Personally I have never seen anything but the few brightest stars straight out of DSS, and people with similarly almost-black stacks will not be able to follow this tutorial. If you had a typical black/grey/brown image out of DSS the processing would not work at all, because the stack out of DSS in your case is 16-bit, which results in poor precision and separated pixel values like those shown in the first levels-stretch section from 2 minutes onwards (the spikes in the histogram). That detail is unrecoverable and will result in a "choppy" final image, as there is no precision to draw detail from. In your video you can clearly see that the overblown parts do not transition smoothly into the less-exposed parts but have clear separations of pixel values. The stack should always be in 32-bit; there are no downsides! I am not Vlaiv, but I think I have read enough of his comments to catch the "32-bit good, 16-bit bad" vibe 😁. By the way, it does not matter what bit depth the capture camera was: stacking multiple subs increases the effective bit depth, and 16-bit is just not enough to hold it.
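If you want to see what I mean in numbers, here's a rough numpy sketch - a made-up illustration, not anything from the video - of how rounding a stack to 16-bit integers throws away the extra precision that stacking created, so a hard stretch then shows gaps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up example: a faint target whose true signal spans only ~8 ADU on a
# 14-bit camera, plus noise, imaged over 40 subs.
true_signal = np.linspace(100.0, 108.0, 100_000)               # "scene" in camera ADU
subs = true_signal + rng.normal(0.0, 5.0, (40, true_signal.size))
stack = subs.mean(axis=0)                                       # averaging creates sub-ADU precision

# Save the stack two ways: 32-bit float keeps the fractions, 16-bit integer rounds them off.
stack_32 = stack.astype(np.float32)
stack_16 = np.round(stack).astype(np.uint16)

# Apply the same aggressive "levels" stretch to both (map ~16 ADU onto the full 16-bit range).
def stretch(img, lo=96.0, hi=112.0):
    out = (np.asarray(img, dtype=np.float64) - lo) / (hi - lo)
    return np.round(np.clip(out, 0.0, 1.0) * 65535)

print("distinct output levels, 32-bit stack:", np.unique(stretch(stack_32)).size)
print("distinct output levels, 16-bit stack:", np.unique(stretch(stack_16)).size)
# The 16-bit version collapses to a handful of widely spaced values - those are the
# histogram spikes and the "choppy" transitions; the 32-bit stack stays smooth.
```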

I stacked the 17 frames in the Dropbox (should it be 40?) myself and found that they produce a very overexposed image. It looks like the core is lost, along with the cores of most stars; this isn't related to the video, but taking shorter subs in the first place remedies it. I also found that photometric colour calibration does not find any suitable stars in the image, which usually means the data is already stretched or woefully overexposed. Both are possible, but looking at the histogram I suspect a stretch. Did you stretch the images or modify them in any way in the Fuji-to-TIFF conversion process? Modifying single subs in any way always leads to lost data, especially if there is any kind of stretch on them.

In my opinion, the sooner one learns not to do all of the processing in GIMP or Photoshop the better, and Siril is a great tool for that. It's very easy to learn as it only has a few features, but it's extremely helpful!


2 hours ago, ONIKKINEN said:

Hope you don't mind some criticism of the process, but I have to comment. I like the video as content, and your videos in general, but as a tutorial this one sets some bad examples (IMO). I noticed a few glaring issues right away that, in my opinion, should not be taught to beginners. Don't get me wrong, it's easy to understand and better than processing blind with no instructions at all, but the better alternatives are no more difficult to do! These habits took me unnecessarily long to unlearn when I finally jumped to Siril (also free!) for the linear part of the processing. The thing you mentioned in the video about processing always turning out differently is born of these bad practices, and it goes away with a few easy methods done early on.

Thanks! And I don't mind at all :) I'm a complete jack of all trades and master of none, so I posted in full knowledge that it might be scrutinised by the more discerning hardcore imagers out there. I've never used Siril, so I can look into it and do another tutorial involving that once I've got to grips with it. It does add a nice surprise element getting a different result every time, though; I might miss that, as I like getting different versions from the same data 😆

2 hours ago, ONIKKINEN said:

The problem with processing entirely in "normal" processing software is that you need to stretch the image first to see anything, when ideally a lot of the processing is done while the data is still linear. In Siril you can do the first steps - crop out stacking artifacts, background extraction (gradient removal), colour balance (important in the linear phase, to keep as much precision as possible) - and then stretch the image. From that point onwards, processing in GIMP or any other "normal" software is a piece of cake. It's hard to do the first steps wrong in Siril, but very easy in GIMP, which is why you get different results every time. Playing with levels and curves early on is also not a good idea; that should be done on the almost-finished image as a final touch-up.

Well, this is the thing, like you say: when using DSS and going straight to GIMP, you need to stretch the data to see something you can actually work on. I probably should get another Canon camera for tutorials though, because the Fuji has some weird quirks, one of which is that when you import the stacked image into GIMP it looks super bright, like a flat panel, whereas a Canon stack looks much darker and you can hardly see anything. I had to move the histogram to the left in DSS to make it usable in GIMP, but I've provided both the RAWs and the master TIFF so people can take the process from the start if they wish. I recommend they use the slightly tweaked master TIFF though, as working with Fuji RAW takes some getting used to in DSS.

 

2 hours ago, ONIKKINEN said:

First of all, this example only sort of works because you have a supernaturally well-visible picture straight out of the stacker! Personally I have never seen anything but the few brightest stars straight out of DSS, and people with similarly almost-black stacks will not be able to follow this tutorial. If you had a typical black/grey/brown image out of DSS the processing would not work at all, because the stack out of DSS in your case is 16-bit, which results in poor precision and separated pixel values like those shown in the first levels-stretch section from 2 minutes onwards (the spikes in the histogram). That detail is unrecoverable and will result in a "choppy" final image, as there is no precision to draw detail from. In your video you can clearly see that the overblown parts do not transition smoothly into the less-exposed parts but have clear separations of pixel values. The stack should always be in 32-bit; there are no downsides! I am not Vlaiv, but I think I have read enough of his comments to catch the "32-bit good, 16-bit bad" vibe 😁. By the way, it does not matter what bit depth the capture camera was: stacking multiple subs increases the effective bit depth, and 16-bit is just not enough to hold it.

Ah yes, see my previous point. However, I've got to disagree that people won't be able to work through this tutorial with a freshly stacked image. You can pull up an invisible image using levels adjustments, as I do in this tutorial; the only difference is that here you can see a faint galaxy to begin with. If I had made the image even darker in DSS, to emulate a Canon camera for example, I would just have made more levels adjustments, but in the same way. I've been using GIMP for years now and have pulled up plenty of black images to reveal DSOs using the levels.

I honestly didn't know using 32-bit would help with a 14-bit camera.

2 hours ago, ONIKKINEN said:

I stacked the 17 frames in the Dropbox (should it be 40?) myself and found that they produce a very overexposed image. It looks like the core is lost, along with the cores of most stars; this isn't related to the video, but taking shorter subs in the first place remedies it. I also found that photometric colour calibration does not find any suitable stars in the image, which usually means the data is already stretched or woefully overexposed. Both are possible, but looking at the histogram I suspect a stretch. Did you stretch the images or modify them in any way in the Fuji-to-TIFF conversion process? Modifying single subs in any way always leads to lost data, especially if there is any kind of stretch on them.

In my opinion, the sooner one learns not to do all of the processing in GIMP or Photoshop the better, and Siril is a great tool for that. It's very easy to learn as it only has a few features, but it's extremely helpful!

It should be around 40 files, you're right, but I downloaded the free version of Dropbox for the purpose of this video and maxed out the storage with 17 converted RAF files and a master TIFF.

Yes, when stacking the converted Fuji files in DSS they come out very overexposed for me as well, and that's with just 2-minute subs! The data is far from ideal, with only light frames and no LP filter, taken under Bortle 6 skies. I can imagine people would have an easier time with better data, but I just wanted to give beginners an idea about using levels, curves, colour balancing and so on.

So to summarise, would you say the main takeaway is to use Siril, and maybe not to use a Fuji camera for tutorials as they are very much their own beast? It's possible that the RAF-to-TIFF converter doesn't do the Fuji data any favours. It's a shame DSS won't accept Fuji RAF files as they are, but the Fuji's inherent Ha sensitivity is almost worth the hassle.

I would love to know more about using 32-bit files - can this really help if the camera is 14-bit?

Thanks for taking the time for such a thorough reply and feedback, it's much appreciated :)

 


3 hours ago, JamesF said:

Good idea.  It's expensive enough as it is getting started in imaging.  Being able to save a bit of cash when starting out by using GIMP instead of the paid-for alternatives has got to be helpful.

James

Thanks James! And it's good to hear that Siril, which ONIKKINEN recommends, is also free :) I know the paid packages have many more tools, but I feel a lot of people starting out would be happy just getting a recognisable image, and the discerning part comes later. A bit like how we don't notice optical aberrations as much when we start out observing.


52 minutes ago, Chris said:

Well, this is the thing, like you say: when using DSS and going straight to GIMP, you need to stretch the data to see something you can actually work on. I probably should get another Canon camera for tutorials though, because the Fuji has some weird quirks, one of which is that when you import the stacked image into GIMP it looks super bright, like a flat panel, whereas a Canon stack looks much darker and you can hardly see anything. I had to move the histogram to the left in DSS to make it usable in GIMP, but I've provided both the RAWs and the master TIFF so people can take the process from the start if they wish. I recommend they use the slightly tweaked master TIFF though, as working with Fuji RAW takes some getting used to in DSS.

My point about the typical black image was that such an image has all of its signal in a very tight area of the histogram when linear, unlike the example here, which is actually quite bright. My shots, with either a DSLR or now an astro cam, always have all of the faint signal (nebulosity, the thing I want) within the first 500 ADUs or so. ADU = analog-to-digital unit, which is a fancy way of saying pixel value. A 16-bit image is 2^16, so you have 65,536 ADUs, or possible brightness levels for a single pixel (per colour channel). If all of the data, excluding star cores and the cores of galaxies, is within 500 values, you essentially get less than 1% of the dynamic range of the 16-bit depth to work with. When this is stretched to cover more of the histogram, say all the way to 22k, or about a third of the way to the right, you have stretched the initial 500 values to cover a range 44 times as wide! This will definitely lead to sharp brightness gradients and the image will be mostly unrecoverable. With 32-bit precision you get a ridiculous amount of headroom - billions of ADUs, I think - so you are no longer limited by the data precision, only by the amount and quality of the subs captured. @vlaiv is the master of explanations on the matter and I am mostly parroting what I have read from the man 😃. See this recent thread on bit depth:
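To put rough numbers on that (illustrative figures only, matching the example above):

```python
# Back-of-the-envelope version of the numbers above (illustrative only).
signal_range_adu = 500        # faint signal occupies ~500 ADU of the 16-bit histogram
target_range_adu = 22_000     # where a levels stretch might push it

print(f"fraction of the 16-bit range actually used: {signal_range_adu / 65_536:.2%}")   # ~0.8%
print(f"stretch factor: {target_range_adu / signal_range_adu:.0f}x")                    # ~44x

# A 16-bit integer stack only has ~500 distinct levels in that region to begin with,
# so after the stretch neighbouring levels sit ~44 ADU apart - visible banding.
# A 32-bit float stack keeps the fractional values created by stacking, so the same
# stretch still gives a smooth gradient.
print(f"gap between adjacent stretched levels (16-bit input): ~{target_range_adu // signal_range_adu} ADU")
```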

If the conversion process from Fuji to TIFF is somehow weird and alters the histogram, then I'm at a loss. What about shorter subs, like 30s - are they as bright? 120s subs under light pollution could also simply be too much, and that's why everything is far too bright. The need for long subs in astrophotography is mostly a myth these days, and that probably applies to the Fuji too: cameras are getting better and better, and most of the time subs under a minute are more than enough in light pollution. Scouring around the internet, some people claim that X-T1 raws are supported by DSS with no conversion to TIFF first? I wouldn't know if that's true, but it's worth a try.

Still, try stacking in 32-bit; there should be a noticeable improvement. If I try to follow the tutorial on my very dark stacks converted to 16-bit, I get a horrible mess with sharp gradients between bright and less bright parts. What the stack looks like doesn't really matter, as stretching will bring out the detail in the image - that is, if there is enough bit depth to do it.

One of the biggest strengths of Siril is its preview mode, where you can see the image as it would look if it were stretched (autostretch mode, or histogram mode for the full range) while the pixel values stay unchanged in the background.
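The idea is just a screen transfer function: the stretch is computed for display only and never written back to the pixel data. In spirit it's something like this little sketch (my own simplified version, not Siril's actual algorithm):

```python
import numpy as np

def display_autostretch(linear: np.ndarray, clip_lo=0.1, clip_hi=99.9) -> np.ndarray:
    """Return an 8-bit preview of a linear image; the input array is never modified."""
    lo, hi = np.percentile(linear, [clip_lo, clip_hi])
    preview = np.clip((linear - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    preview = np.sqrt(preview)                   # simple non-linear boost for the faint end
    return (preview * 255).astype(np.uint8)

# Stand-in data: the "real" linear stack stays untouched, only the preview is shown.
linear_stack = np.random.default_rng(1).random((100, 100)).astype(np.float32)
preview = display_autostretch(linear_stack)
```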

Edit: I also got an almost entirely white image from the 17 stacked subs in DSS. Black-point adjustments made the image look normal-ish.


29 minutes ago, ONIKKINEN said:

My point about the typical black image was that such an image has all of its signal in a very tight area of the histogram when linear, unlike the example here, which is actually quite bright. My shots, with either a DSLR or now an astro cam, always have all of the faint signal (nebulosity, the thing I want) within the first 500 ADUs or so. ADU = analog-to-digital unit, which is a fancy way of saying pixel value. A 16-bit image is 2^16, so you have 65,536 ADUs, or possible brightness levels for a single pixel (per colour channel). If all of the data, excluding star cores and the cores of galaxies, is within 500 values, you essentially get less than 1% of the dynamic range of the 16-bit depth to work with. When this is stretched to cover more of the histogram, say all the way to 22k, or about a third of the way to the right, you have stretched the initial 500 values to cover a range 44 times as wide! This will definitely lead to sharp brightness gradients and the image will be mostly unrecoverable. With 32-bit precision you get a ridiculous amount of headroom - billions of ADUs, I think - so you are no longer limited by the data precision, only by the amount and quality of the subs captured. @vlaiv is the master of explanations on the matter and I am mostly parroting what I have read from the man 😃. See this recent thread on bit depth:

If the conversion process from Fuji to TIFF is somehow weird and alters the histogram, then I'm at a loss. What about shorter subs, like 30s - are they as bright? 120s subs under light pollution could also simply be too much, and that's why everything is far too bright. The need for long subs in astrophotography is mostly a myth these days, and that probably applies to the Fuji too: cameras are getting better and better, and most of the time subs under a minute are more than enough in light pollution. Scouring around the internet, some people claim that X-T1 raws are supported by DSS with no conversion to TIFF first? I wouldn't know if that's true, but it's worth a try.

Still, try stacking in 32-bit; there should be a noticeable improvement. If I try to follow the tutorial on my very dark stacks converted to 16-bit, I get a horrible mess with sharp gradients between bright and less bright parts. What the stack looks like doesn't really matter, as stretching will bring out the detail in the image - that is, if there is enough bit depth to do it.

One of the biggest strengths of Siril is its preview mode, where you can see the image as it would look if it were stretched (autostretch mode, or histogram mode for the full range) while the pixel values stay unchanged in the background.

Edit: I also got an almost entirely white image from the 17 stacked subs in DSS. Black-point adjustments made the image look normal-ish.

Thank you, I think I've got the gist of what you're saying, but I'll check out the thread, and I look forward to comparing 32-bit and 16-bit TIFF files to see how much difference it makes. It might make a good video if I had two tabs open and performed the same processing steps on both 16-bit and 32-bit data to see how and where they differ. Just to check, we're simply talking about saving as 32-bit in DSS rather than 16-bit, right?

I tried stacking Fuji RAF files in DSS with no luck, unfortunately, and this led me to pay for one of the online file converters.

I did wonder how Siril enabled you to make changes before stretching. It's a shame GIMP doesn't have a preview mode, but I'm looking forward to checking out Siril anyway; I had heard of it, and it would be good to try something new. I already use three pieces of software for processing planetary and lunar images, so the extra step for DSOs won't be a problem.


13 minutes ago, Chris said:

Just to check, we're simply talking about saving as 32-bit in DSS rather than 16-bit, right?

Yes, a 32-bit rational format. DSS actually does all of the background work in this format already, and the autosave file it creates automatically is 32-bit rational (which is floating point, the subject of the other thread I linked).

I am curious how big a difference it makes in your case, as the subs themselves appear already stretched, but by stacking 40 subs from an initial 14 bits you should be getting roughly a 19-bit final product (14 + log2(40) ≈ 19.3), if I'm not wrong on the maths. That means saving as 16-bit compresses the image from about 19 effective bits down to 16, so roughly 3 bits of precision are lost.


5 hours ago, Chris said:

I honestly didn't know using 32-bit would help with a 14-bit camera.

Always use 32-bit floating point precision for every step past the initial recording in camera. Start from calibration and keep using 32-bit precision.

The number of bits of precision you gain by stacking is equal to log base 2 of the number of subs.

If you stack, say, 32 subs, the needed (fixed-point) precision increases by 5 additional bits - you can see how easily you go over 16 bits even with a 12-bit camera. Not to mention the calibration part, where you operate with masters that are themselves created by stacking and need to be recorded at higher precision.
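As a quick sanity check of that rule of thumb (camera bit depth plus log base 2 of the sub count):

```python
from math import log2

def effective_bits(camera_bits: int, n_subs: int) -> float:
    # Rule of thumb from above: averaging N subs adds ~log2(N) bits of
    # precision on top of what the camera itself delivers.
    return camera_bits + log2(n_subs)

print(effective_bits(12, 32))   # 12-bit camera, 32 subs -> 17.0, already past 16 bits
print(effective_bits(14, 40))   # 14-bit camera, 40 subs -> ~19.3 bits
```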

For me it is very hard to even imagine doing a beginners' tutorial on processing. I complicate things too much :D - and that is never good for beginners.

I echo the comment above - don't do any stretching in DSS itself.

 


@Chris

I'd like to have a go at processing those files (maybe even do a tutorial myself :D ), but there seem to be only a small number of them.

I looked up the RAF file format and it appears that a free Adobe utility can convert those files to the .DNG format.

https://helpx.adobe.com/camera-raw/using/adobe-dng-converter.html

Maybe you could upload the original RAF files instead - all 40, or however many you have?


2 minutes ago, vlaiv said:

@Chris

I'd like to have a go at processing those files (maybe even do a tutorial myself :D ), but there seem to be only a small number of them.

I looked up the RAF file format and it appears that a free Adobe utility can convert those files to the .DNG format.

https://helpx.adobe.com/camera-raw/using/adobe-dng-converter.html

Maybe you could upload the original RAF files instead - all 40, or however many you have?

Well, it would be great to see what you get out of the data, Vlaiv :) I'm going to try the 32-bit floating point approach as well, but I'm all processed out tonight and might just have a beer instead :D

I've just checked and I have 55 RAF files; I stacked the best 80% in DSS to make up the master TIFF file.

The 55 RAF files take up 1.71 GB and the SGL limit is 1 GB, so I'll attempt to upload them here in two parts, as I'm not keen on disturbing the Dropbox files for the video.

Here are the first 30 RAF files:

 

 

 

DSCF0057.RAF DSCF0058.RAF DSCF0059.RAF DSCF0060.RAF DSCF0061.RAF DSCF0062.RAF DSCF0063.RAF DSCF0064.RAF DSCF0065.RAF DSCF0066.RAF DSCF0067.RAF DSCF0068.RAF DSCF0069.RAF DSCF0070.RAF DSCF0071.RAF DSCF0072.RAF DSCF0073.RAF DSCF0074.RAF DSCF0075.RAF DSCF0076.RAF DSCF0077.RAF DSCF0078.RAF DSCF0079.RAF DSCF0080.RAF DSCF0081.RAF DSCF0082.RAF DSCF0083.RAF DSCF0084.RAF DSCF0085.RAF DSCF0086.RAF


Looking at one of the subs in Photoshop, which apparently opens .RAF files just fine, I can see that they look nothing like the Dropbox .TIFF files 🧐. But the plot thickens! At first Photoshop opens the raw file in Camera Raw, where the histogram is where one would expect it to be, somewhere close to the left side; but once you click Done and it opens in Photoshop, the sub looks almost white, the histogram is all the way to the right, and now it does look exactly like the Dropbox TIFFs. Something funny is going on in the conversion process, and this is the cause of the odd-looking subs and stack.


I just realized that:

1. Version 4.2.6 of Deep Sky Stacker opens .RAF files, but it is painfully slow

2. FitsWork can easily convert .RAF files into .fits, but

3. the Fuji X-T1 uses an X-Trans sensor that does not have a regular Bayer matrix - it has an unusual 6x6 matrix:

https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor

I checked whether dcraw will read these and output a proper colour image - it will.
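For anyone who wants to batch the conversion, something along these lines should work - a rough Python wrapper around command-line dcraw (the folder name is just a placeholder, and it's worth checking the flags against your own dcraw build):

```python
# Rough sketch only - batch-convert Fuji .RAF subs to linear 16-bit TIFFs with
# command-line dcraw so DSS can stack them. Assumes dcraw is on the PATH and the
# folder name below is hypothetical. Flags used: -T write a TIFF instead of PPM,
# -4 linear 16-bit output (no gamma or auto-brightening); see `man dcraw`.
import subprocess
from pathlib import Path

raw_dir = Path("raf_subs")                      # placeholder folder of .RAF files

for raf in sorted(raw_dir.glob("*.RAF")):
    subprocess.run(["dcraw", "-T", "-4", str(raf)], check=True)
    print("converted:", raf.name)               # dcraw writes the TIFF next to the input
```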

 


DSS 4.2.6 handled the raws for me, but it took a very long time even with just the 40 best subs stacked and no calibration frames. The process hung with the "not responding" message for about 20s per sub before the registering or stacking carried on. I suspect this would be pretty much impossible on slower machines, as I have a hefty PC.

The stacked image looks like I would expect: the typical brown/red glare of city lights and not much else. It's still very bright, but I don't think that is a problem, since the histogram sits at about half and not much seems overexposed. Still, shorter subs would probably be ideal in these conditions. Below is a PNG screen capture of a quickly touched-up image from Siril. You can see that the core is intact, star cores are not blown out, and there are no obvious separation lines between brighter and less bright pixels. The data is pretty nice, by the way; with more exposure and calibration frames this will be a really nice shot.

2021-12-04T23_07_35.thumb.png.99558664bf3a95818c7729938d8c0249.png

I suspect that on some machines, whatever DSS is doing for each sub that takes forever will not be such a good option. Say you triple the subs to 120 and take 30+30+30 calibration frames; it would take hours upon hours to stack. But it does at least work better than the converted subs.


9 hours ago, ONIKKINEN said:

Looking at one of the subs in Photoshop, which apparently opens .RAF files just fine, I can see that they look nothing like the Dropbox .TIFF files 🧐. But the plot thickens! At first Photoshop opens the raw file in Camera Raw, where the histogram is where one would expect it to be, somewhere close to the left side; but once you click Done and it opens in Photoshop, the sub looks almost white, the histogram is all the way to the right, and now it does look exactly like the Dropbox TIFFs. Something funny is going on in the conversion process, and this is the cause of the odd-looking subs and stack.

I was guessing the conversion process was to blame, so thanks for confirming it! 

 


9 hours ago, vlaiv said:

I just realized that:

1. Version 4.2.6 of Deep Sky Stacker opens .RAF files, but it is painfully slow

2. FitsWork can easily convert .RAF files into .fits, but

3. the Fuji X-T1 uses an X-Trans sensor that does not have a regular Bayer matrix - it has an unusual 6x6 matrix:

https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor

I checked whether dcraw will read these and output a proper colour image - it will.

 

I have version 4.2.6, and although it opens the files, the finished stack only applies itself to a fraction of the image area - on my machine at least - almost as if it crashes. I tried a number of things before I went down the file-conversion route a few months back.

Most Fujis use an X-Trans sensor; however, I used to own one of their entry-level cameras with a Bayer filter and it had the same issues.

I believe GIMP uses dcraw, and I can open RAF files in GIMP OK; stacking really is the sticking point. Sequator might work? I've not tried that yet.

 


9 hours ago, ONIKKINEN said:

DSS 4.2.6 handled the raws for me, but it took a very long time even with just the 40 best subs stacked and no calibration frames. The process hung with the "not responding" message for about 20s per sub before the registering or stacking carried on. I suspect this would be pretty much impossible on slower machines, as I have a hefty PC.

The stacked image looks like I would expect: the typical brown/red glare of city lights and not much else. It's still very bright, but I don't think that is a problem, since the histogram sits at about half and not much seems overexposed. Still, shorter subs would probably be ideal in these conditions. Below is a PNG screen capture of a quickly touched-up image from Siril. You can see that the core is intact, star cores are not blown out, and there are no obvious separation lines between brighter and less bright pixels. The data is pretty nice, by the way; with more exposure and calibration frames this will be a really nice shot.

2021-12-04T23_07_35.thumb.png.99558664bf3a95818c7729938d8c0249.png

I suspect that on some machines, whatever DSS is doing for each sub that takes forever will not be such a good option. Say you triple the subs to 120 and take 30+30+30 calibration frames; it would take hours upon hours to stack. But it does at least work better than the converted subs.

Nice one, that looks really good! My PC runs a Ryzen 5 processor and 8 GB of RAM, and it crashes when it reveals the final stack :( It only gets as far as applying the stack to the top-left section of the image. It's nice to see the results you can get when using the non-converted files. In summary, the conversion process really messes things up.

I managed one version where the core was well controlled; however, the stars definitely look better here using the RAF files. Lovely tight stars from the StellaMira 90 EDT too.

The Fuji is a real beast for astro considering it came out in 2014. It's a shame it's a pain, but good to see its potential.


31 minutes ago, Chris said:

I believe GIMP uses dcraw, and I can open RAF files in GIMP OK; stacking really is the sticking point. Sequator might work? I've not tried that yet.

I used command-line dcraw to convert the files to TIFFs, and those were stacked fine by DSS.

Try using that DNG converter I linked to - it is supposed to work fine.


I’d like to thank you Chris for teaching me those bad habits 😂.  I spent a couple of hours last night using GIMP to improve the photos I’ve taken using SharpCap over the last 12 months and I’m delighted with the results.  To date, I’d left post processing on the ‘too hard’ shelf.  

I also think it’s very brave to put yourself out there on YouTube and post on SGL knowing your work will be scrutinised.  I’m glad to see the feedback has been nothing but highly constructive which is typical of the supportive community we have on SGL.  I’m just hoping that this sets a challenge for all the experts out there and we’ll see lots more end to end tutorials for beginners like me. 🤞


OK, this was sort of a nightmare to process.

I like calibrated data and really struggled with vignetting and dust shadows on this one. I attempted to make synthetic flats - and it sort of worked, except in two places:

m33.thumb.jpg.5e1b936ceb1caee0458d3e8816df7de5.jpg


4 hours ago, Chris said:

Nice one, that looks really good! My PC runs a Ryzen 5 processor and 8 GB of RAM, and it crashes when it reveals the final stack :( It only gets as far as applying the stack to the top-left section of the image. It's nice to see the results you can get when using the non-converted files. In summary, the conversion process really messes things up.

I managed one version where the core was well controlled; however, the stars definitely look better here using the RAF files. Lovely tight stars from the StellaMira 90 EDT too.

The Fuji is a real beast for astro considering it came out in 2014. It's a shame it's a pain, but good to see its potential.

8 GB of RAM with (I assume) Windows 10 is probably to blame here; if you can source another stick of RAM for your machine you might be able to do it. The process seemed very RAM-hungry and completely seized my PC for a few seconds with every sub, and that is with 16 GB of RAM. Also, echoing what Vlaiv mentioned about conversion to TIFFs: Adobe Camera Raw, while not free, can batch-convert the .RAFs into TIFFs. The conversion took a while, but not nearly as long as stacking the raws - probably just a few minutes - and then stacking the TIFFs was as normal as can be; the subs fly through the registering and stacking process. So if you find a tool that doesn't do whatever the original converter did to the files, you should be able to stack. Interestingly, I did find that the stack from the raws and the stack from the TIFFs had different histograms, so something must be different, even though they look equivalent in quality to my eye. Probably a difference in the debayering method between DSS and Camera Raw.

The Fuji does appear to work really well here; the purple parts are H-alpha, which would not be nearly as visible with an equivalent Canon or Nikon camera.


8 hours ago, Priesters said:

I’d like to thank you Chris for teaching me those bad habits 😂.  I spent a couple of hours last night using GIMP to improve the photos I’ve taken using SharpCap over the last 12 months and I’m delighted with the results.  To date, I’d left post processing on the ‘too hard’ shelf.  

I also think it’s very brave to put yourself out there on YouTube and post on SGL knowing your work will be scrutinised.  I’m glad to see the feedback has been nothing but highly constructive which is typical of the supportive community we have on SGL.  I’m just hoping that this sets a challenge for all the experts out there and we’ll see lots more end to end tutorials for beginners like me. 🤞

Thanks, Priesters! It's great to have some positive feedback from yourself; you sound very much like my target audience! I'm pleased my little GIMP tutorial helped out :)

I must admit I took a deep breath before posting, as I know how divisive the subject can be! However, I'm happy to see it's been a largely positive and educational thread, with lots of great points and resources touched upon, and I agree it would be great to see others post their techniques so we can all learn about the things we don't realise we're missing.

It's all too easy to think you're doing things the best way until someone points out a better way lol


6 hours ago, vlaiv said:

OK, this was sort of a nightmare to process.

I like calibrated data and really struggled with vignetting and dust shadows on this one. I attempted to make synthetic flats - and it sort of worked, except in two places:

m33.thumb.jpg.5e1b936ceb1caee0458d3e8816df7de5.jpg

Well, I certainly didn't make it easy for you with just light frames, Vlaiv, but I'm loving those stars :) Looking at the two renditions above, I think I need to calm down with how much I stretch my data lol. You two know when to stop, and it looks cleaner as a result!


Yeeah! I managed to stack Fuji RAF files in DSS this time round! It must have just crashed last time.

Here's an image of the California Nebula, saved as 32-bit. I also used an Optolong L-Enhance filter, which seems to help quite a bit with the vignetting - a real bonus!

Processed using the same method as before. 

California Neb_DSS.png

