
Another go at M31. Getting the most out of minimal data!


Xiga


OK, so I finally managed to get out to a dark-sky location (a green zone on an LP map) to acquire some (hopefully decent!) M31 data.

And wow, the difference in data quality between a dark-sky location and a red-zone location is night and day!

Unfortunately, though, I was fighting the tech gremlins for most of the 3 hrs I was there, so I only managed to gather a total of 27 mins of exposure (1 x 6 mins and 3 x 7 mins, all dithered) at ISO 800. I took about 15 flats and created a master bias from about 300 or so. I had to take the darks indoors the following night (again, about 15), and since the temperatures were hugely different I unticked 'Dark Optimization' in DSS and just used them for Hot Pixel Detection instead. Not sure if this method even makes sense?!
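(Sketching roughly what that hot-pixel route amounts to, in numpy rather than DSS; the function names and the kappa threshold are my own illustration, not DSS's actual code.)

```python
# Minimal sketch: flag pixels that sit far above the master dark's typical
# level, then patch them in a light frame. Assumes the frames are already
# loaded as 2D numpy arrays (e.g. via astropy.io.fits or rawpy).
import numpy as np

def hot_pixel_map(master_dark, kappa=5.0):
    """Flag pixels well above the dark frame's typical level."""
    med = np.median(master_dark)
    mad = np.median(np.abs(master_dark - med)) * 1.4826   # robust sigma estimate
    return master_dark > med + kappa * mad

def repair_hot_pixels(light, hot_mask):
    """Replace flagged pixels with the mean of their horizontal neighbours.
    (On an un-debayered Bayer frame you would step 2 pixels to stay on the
    same colour.)"""
    fixed = light.astype(np.float64)          # astype() returns a copy
    left = np.roll(fixed, 1, axis=1)
    right = np.roll(fixed, -1, axis=1)
    fixed[hot_mask] = 0.5 * (left[hot_mask] + right[hot_mask])
    return fixed
```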

After experimenting with DSS, it would seem that (for me at least) it is much easier to bring out the colours in PS if I choose the 'Per Channel Background Calibration' option in DSS instead of 'RGB Channels Background Calibration'. Not sure if others have had the same experience?
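(For anyone curious what the difference boils down to, here is a rough illustration in numpy; this is my own simplification of the idea, not DSS's actual implementation.)

```python
# Per-channel calibration brings each colour's background to a common level
# independently (which neutralises a background colour cast); the RGB-channels
# option applies one common offset, so any cast in the background is kept.
import numpy as np

def background_calibration(rgb, per_channel=True):
    """rgb: float array of shape (H, W, 3)."""
    if per_channel:
        backgrounds = np.median(rgb, axis=(0, 1))   # one level per channel
    else:
        backgrounds = np.median(rgb)                # one level for all channels
    target = np.mean(np.median(rgb, axis=(0, 1)))   # common reference level
    return rgb - backgrounds + target
```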

As you can imagine, keeping the noise under control was the hardest part of the processing. I've included a link below to a .zip file with the 4 lights and the master bias, flat, and dark files, in case anyone out there fancies a challenge and wants to show off their processing skills, as I'm not sure how I'm progressing tbh.

My rough workflow goes something like:

1. Stack in DSS (in addition to the two points above, I used Average for the lights, as I had so few to work with)

2. Crop out the stacking artefacts in PixInsight LE. No need for DBE, as far as I could tell.

3. DDP (without sharpening) in MaxIm DL to get a near-optimum stretch (I backed off the background setting a bit to darken it; a rough DDP-style stretch is sketched after this list)

4. Then into PS for some of Carboni's Tools and the usual guff. I boosted the colour by creating 2 new layers, setting the top one to Soft Light, merging it into the one below, which in turn I changed to Color. Repeat as necessary.
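For what it's worth, here is roughly what a DDP-style stretch does mathematically. This is a generic sketch of the idea, not MaxIm DL's exact algorithm, and the parameter names are mine.

```python
# Generic DDP-like stretch: lift the faint signal hard while compressing the
# highlights, with a 'background' offset mimicking backing off the background
# setting. Expects a float image normalised to 0..1.
import numpy as np

def ddp_stretch(img, background=0.0, k=None):
    x = np.clip(img - background, 0.0, None)     # darken the sky pedestal first
    if k is None:
        k = np.median(x) * 5 + 1e-6              # crude automatic break point
    y = x * (1.0 + k) / (x + k)                  # hyperbolic midtone lift
    return np.clip(y, 0.0, 1.0)
```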

Tips most welcome!

Link for Files:

http://1drv.ms/1HbU0x0

My effort:

[Attached image: post-27374-0-15073100-1447989055_thumb.j]

Clear skies!


That's really excellent for only 27 minutes of data!

As you say, a dark site really helps but I think you're doing yourself down a lot on your processing skills - that's the other half of the battle and it looks like you've done a great job to me.

I'd also agree with your observations on RGB vs per-channel in DSS.


  • 1 month later...

Sorry to keep flogging a dead horse, but the weather's been so horrendous here in the UK recently that I haven't been able to even attempt any imaging in over a month, so I thought I'd have another go at my only set of data :tongue:

The main difference this time is that I dropped the darks. I knew at the time that they were massively different temperature-wise, but I used them anyway, as I didn't know of any other way to remove all the red and blue hot pixels with only 4 subframes to work with.

In the end I realised that the extra noise the darks were adding was not worth the benefit of removing the hot pixels, so I re-stacked in DSS (not drizzled this time, mostly just cos it's a pain!) and had another go in Photoshop to see if I could minimise the hot pixels myself. And as it turned out, I actually could.

I used the Magic Wand (on Point Sample) and selected one of the rogue pixels, then used Select->Similar to grab all the similar pixels (I played with the Tolerance level to make sure it was only selecting the ones I wanted) and then ran Filter->Noise->Reduce Noise (with defaults, but with Reduce Color Noise set to 50%). I did this for red and blue, and lo and behold it actually cleaned up the image nicely, without destroying any important detail, and with the obvious added benefit of a massively reduced level of noise :smiley:
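For anyone who prefers to script it, here is a rough equivalent of that select-similar-then-denoise trick in numpy/scipy. The tolerance idea is the same, but the function is my own sketch rather than anything Photoshop actually does under the hood.

```python
# Find isolated bright outliers in one colour plane and replace just those
# pixels with a local median, so stars and nebulosity (which are not
# single-pixel spikes) are left alone. Assumes scipy is installed.
import numpy as np
from scipy import ndimage

def clean_rogue_pixels(channel, tolerance=0.1, size=3):
    """channel: 2D float array (one colour plane, 0..1)."""
    local_median = ndimage.median_filter(channel, size=size)
    rogue = (channel - local_median) > tolerance   # 'Select Similar' by value
    cleaned = channel.copy()
    cleaned[rogue] = local_median[rogue]           # gentle fix on just those pixels
    return cleaned
```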

I think I'm finally done with this data set now, and am happy to call it my 1st proper DSO pic.

You can all move along now  :tongue:

Clear skies folks! 

[Attached image: post-27374-0-75777700-1451444025_thumb.j]


If you're using the 60D, an easy way to kill bad pixels is to do a sensor clean.

When the camera has reached temperature equilibrium, invoke a sensor clean. The sensor clean is found in the menus in Bulb or Manual mode.

When a sensor clean is run, the bad pixel map is updated, and today's RAW converters should use the map to ignore all bad pixels.

Although I don't use this technique (I normally forget), DSS does a good job of removing them with default settings.


You're not flogging a dead horse, you are engaged in trying to get the best picture you can, and that is the whole point of our game. I think this is very impressive with a good background sky, a good starfield, a balanced core and convincing colour. Say it quickly. That's a lot to get right.

Olly


Mike

I did think about opening the .CR2 files in ACR first to remove the bad pixels, but it was my belief that ACR would debayer them in the process. I want to feed the subs directly into DSS without them being altered in any way (such as being debayered), so I opted against doing so.

Am I mistaken here?


Thanks, Olly, for your kind words :smiley: It's images such as yours that inspire newbies like me to have a go.

Apart from the obvious interest in all things astronomy, I decided to take up AP because I generally like doing techy things, and so I thought I'd also enjoy the technical challenge of getting all the gear working and seeing the images come in one by one. And while this is indeed true, I now think what I actually enjoy even more is the processing side of things. I've spent more time than I'm willing to admit on this image :tongue: but it's all been a learning curve and, looking back, I can honestly say I've enjoyed working on it more than I ever thought I would.

Next step is to start looking through my Christmas pressie (below). From the quick skim I've had, I think most of it will be waaaay over my head (and aimed at equipment well beyond my budget at present), but hopefully there'll still be some gems in there that a simple DSLR guy like myself can make use of :grin:

[Attached image: post-27374-0-61253800-1451510532_thumb.j]

Oh btw, the link at the top is still active if anyone wants to have a go themselves. I'd be interested to see what else can be teased out.


  • 2 weeks later...

Okay, so it turns out I don't know when to quit after all :rolleyes2: The weather's been so abysmal here in the UK recently :clouds2: that I'm left with no other option but to have another go at processing this high-quality (but low-quantity) data.

Differences this time were:

Using Lab Colour. It's tricky, but it's by far the best way to balance the colours before moving on to saturation. 

I went back to using the darks (perhaps foolishly) because they cleaned up so many bad R, G, and B pixels. The side effect was some holes in the data from the mismatched dark subtraction; however, I was able to combat this by adding a 2-pixel Median noise filter on a new layer, then using a layer mask and painting in only the areas where I wanted to make the corrections (mainly just parts of the spiral arms; it's not so noticeable in the deep space). A sketch of this masked median fill follows after this list.

Magic Wand tool is out. My new friend is Select->Color Range. I had to use this a lot, but it really helped!

Carboni's Tools -> Enhance DSO and Reduce Stars. At first I wasn't sure if I liked this. It seems to blur all the stars slightly, but it certainly does a good job of bringing the galaxy to the forefront and making the stars seem more like part of the background, which I suppose is a good thing.

And finally, I stupidly didn't know to frame the shot diagonally, instead going for a horizontal one. Doh! So, just for kicks, I rotated it in PS and clone-stamped in the top-left and bottom-right triangular sections with some deep space (I know, tut-tut!). But hey, at least I now have a desktop background I can use until I get another crack at this target (hopefully next week!).
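The masked median fill mentioned above, expressed as arrays — or rather, my own sketch of the layer-mask blend, assuming scipy is available; the painted mask becomes just another 0..1 array:

```python
import numpy as np
from scipy import ndimage

def masked_median_fill(img, mask, size=2):
    """img: 2D float array (or one colour plane); mask: same shape, 0..1.
    Where the mask is 1 you get the 2-pixel median, where it is 0 the
    original pixel, with smooth blending in between."""
    filled = ndimage.median_filter(img, size=size)
    return img * (1.0 - mask) + filled * mask
```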

Clear skies folks! 

[Attached image: post-27374-0-92386500-1452393138_thumb.j]


I missed your post.

ACR would debayer and apply a tone curve so we can see what is in the RAW. Both programs use DCRAW to decode RAW files. I don't know how each interprets the code, but both should in theory remove bad pixels. DSS can knock holes in stars, though; I have seen it do that to my images sometimes.

I am now trying the Tony Hallas method of processing, with dithered subs: initial processing in ACR and conversion to TIFF, then the TIFFs into Registar to register and combine.

This is very promising; it removes all sorts of rubbish, and detail shows up that would otherwise not be visible.


Great image and progress!

My workflow is similar to yours. After I saw the Tony Hallas video on Adobe Camera Raw processing in Photoshop, I have reprocessed more or less everything from the CR2 files, particularly using the noise reduction in ACR (not an overwhelming task, since I captured my first image in March 2015 and have only had about 17 nights). It has kept me busy while the sky is like it is, and with considerably less noise as a result. So my workflow now is (1) batch processing the CR2 files in ACR with some stretching, noise reduction and a bit of sharpening (a rough scripted stand-in for this conversion step is sketched below), (2) stacking the TIFFs in Nebulosity 4, (3) back to Photoshop for the fun part.

PS. I noticed that in my version of ACR (6.7), which is the latest that works with Photoshop CS5, the default setting was working in 8 bit mode (only apparent as a small text line at the bottom of the sheet) so I had to start over initially after I found that it can be set to 16 bit.
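Not ACR, of course, but for anyone without it, a rough stand-in for step (1) can be scripted. This is my own sketch, it assumes the rawpy and tifffile Python packages are installed, and it only covers the raw conversion, not ACR's noise reduction or sharpening.

```python
# Batch-convert CR2 files to linear 16-bit TIFFs ready for stacking elsewhere.
import glob
import rawpy
import tifffile

for path in glob.glob("lights/*.CR2"):
    with rawpy.imread(path) as raw:
        rgb16 = raw.postprocess(
            output_bps=16,          # keep 16 bits per channel
            use_camera_wb=True,     # start from the camera's white balance
            no_auto_bright=True,    # don't let the converter stretch the data
            gamma=(1, 1),           # stay linear for stacking
        )
    tifffile.imwrite(path.rsplit(".", 1)[0] + ".tif", rgb16)
```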


Mike & Goran

I'm a bit confused about which method is best, using a standalone stacking program, or converting to TIF in ACR first before using a stacking program (aka 'Hallas' method). 

Mike, it seems strange to me that even with your dithered subs you say you are still seeing significant improvements with the Hallas method. The DSS site says that those who had previously been feeding converted raw files into DSS would see an immediate improvement just by using the raw files themselves. And if they are dithered, then even better, because as long as you have enough of them (>7 or so) and you use a stacking method such as Median Kappa-Sigma Clipping, all the errant pixels should be taken care of. In my case this wasn't true, simply because I only had 4 subs to work with. How many do you usually use, and more importantly, what are your dither settings? You will need a dither of around 12 pixels to get the most out of dithering with a DSLR. If you're not dithering enough, perhaps that is why the Hallas method looks better?
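For reference, the rejection idea behind kappa-sigma clipping looks roughly like this. It's my own bare-bones sketch of a clipped mean, not DSS's exact implementation of its Median Kappa-Sigma option.

```python
import numpy as np

def kappa_sigma_stack(subs, kappa=2.5, iterations=3):
    """subs: array of shape (N, H, W) of registered, dithered subframes.
    Pixels deviating more than kappa*sigma from the stack mean are rejected;
    with enough subs, hot pixels and satellite trails disappear this way."""
    data = subs.astype(np.float64)
    keep = np.ones(data.shape, dtype=bool)
    for _ in range(iterations):
        stack = np.where(keep, data, np.nan)
        mean = np.nanmean(stack, axis=0)
        sigma = np.nanstd(stack, axis=0)
        keep &= np.abs(data - mean) <= kappa * sigma
    return np.nanmean(np.where(keep, data, np.nan), axis=0)
```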

Are either of you using flats & bias frames? I did a couple of test stacks the other night, identical except that one stacked the converted TIFFs and the other the raws, and compared the two in PS. The TIFF image was indeed cleaner in many places, with virtually no errant pixels; however, there was a serious problem in that the flats had clearly not worked, and given the overall quality of the image I strongly suspected that the bias files had not worked either. In the end, the stacked-raws image was easily the clear winner for me, so I ditched the TIFF method. I'm not sure of the technical reason, but I don't think DSS likes using debayered bias & flat frames.

Something I noticed when doing the above test: my 60D raw files show in Windows as 5,184 x 3,456 pixels (which is supposedly the correct size); however, when loaded into DSS they show up as 5,202 x 3,465, and the master flat & bias frames that come out of DSS are also 5,202 x 3,465. I'm not sure why DSS is different. Also, once I convert a raw file to TIFF in ACR it then shows up in DSS as 5,184 x 3,465. I first tried stacking the master flat & bias with the original .CR2 files, but DSS gave an error as the files did not match (i.e. different sizes). I then tried stacking my master flat & bias along with the converted TIFF lights in DSS, but it gave the same error again, because the converted TIFFs show up as 'RGB 16 bit/ch' (i.e. debayered) whereas the master frames show up as 'Gray 16 bit' (i.e. un-debayered). So finally I ran all of my original .CR2 calibration frames through ACR as well, so that all of the files would match in size and type. This was how I was able to get them to stack in DSS, although, as stated above, it clearly didn't work, as the image had severe gradient and vignetting issues.

I will definitely go back and test again at some point, as ACR definitely does clean up the files once it opens them, so it would be nice to take advantage of this feature (but only if it means I can still use bias & flat frames). Now, if only there were a way to open them in ACR and then save them again in their original un-debayered state, we'd be in business, but is that even possible?
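On that last question: it doesn't look like ACR will hand back an un-debayered file, but outside ACR there are tools that expose the raw Bayer data directly. A sketch, assuming the rawpy and astropy Python packages are installed (the file names are just placeholders):

```python
import rawpy
from astropy.io import fits

with rawpy.imread("light_001.CR2") as raw:
    bayer = raw.raw_image_visible.copy()   # un-debayered CFA data, one value per photosite

fits.PrimaryHDU(bayer).writeto("light_001_bayer.fits", overwrite=True)
```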


I can't help you with DSS since I have never used it; I stack my TIFFs in Nebulosity, so the only two programs I use are Photoshop CS5 and Nebulosity 4 (trying to keep it simple since I am still in my first year of AP). I also do not use bias or flats with my DSLR. However, I read a thread somewhere on a professional photography forum about saving raw files, and the conclusion was that it is not possible and there is no program doing it. The only device that can produce raw files is your camera. The logic is that as soon as you do something to a raw file it is no longer raw. Therefore, after you stack CR2 files in DSS or Nebulosity you end up with a FIT or TIFF. Tony Hallas's argument is that Adobe has enormous resources for developing image-processing programs compared to the people behind DSS and Nebulosity, so the different functions in ACR are more advanced and better, including the noise reduction. When I have used Nebulosity to stack CR2 files, the TIFF I end up with is terribly pink and hard to correct in Photoshop. With ACR I get a much better starting point.


Ah, OK, so you're not using bias or flat frames. I guess that explains it. I suppose if one is using camera lenses instead of a telescope, or doesn't plan on using calibration frames, then the ACR method does look tempting. But for those of us using scopes and wanting to use calibration frames, the old method of stacking raws is probably still the way to go.

As an aside, I did another test stack, this time with the converted TIFF lights and no bias or flats. I tried to process it as similarly as possible to before, but I still ran into difficulties with uneven illumination and gradients, and even though I found the colours much easier to handle this time, the other issues made it much more difficult to process overall, and the end result was nowhere near as good as the old method. Obviously, though, others may have more success than I had.

So going forward I still see myself sticking to: large-scale dither (~12 pixels), calibration frames (but no darks), and stacking raws.


I do mainly use telescopes. It is just that I had not noticed much need for flats since I do not have much vignetting with my APS-C size chip (Canon 60Da), and the little there is can be taken off by ACR. With a mono CCD camera and filters I expect there will be more places for dust in the light path that need to be dealt with by flats (the sensor in my camera is self cleaning so no dust there). Maybe biases would help but I do not see that much noise either, and like with darks for DSLR I have read that bias frames may make things worse (http://stargazerslounge.com/topic/215859-dslr-bias-frames/). In any case, I assume the need for bias frames would be the same for a telescope and a camera lens.


When I did the TIFF test stack (with no calibration frames) I had no dust bunnies in the image, even after stretching. This really surprised me, as I'd seen blobs in the first-light test of my equipment, when I didn't take any flats. However, I'm pretty sensitive to vignetting; with my APS-C sized sensor it's not a massive amount, but it's enough to annoy me, and for me at least it also makes processing a lot harder. I also haven't seen any vertical banding issues, so I presume my master bias is doing its job (DSS created it from 360 frames!). I might have another go, comparing lights + flats to lights + flats + bias, just to see how much of a difference the bias makes that I can actually pick up on with my own eyes.
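As background for that comparison, the flats and bias feed into the standard calibration arithmetic, which looks roughly like this (my own notation for a sketch of what the stacking software applies internally; frames assumed loaded as float arrays):

```python
import numpy as np

def calibrate(light, master_flat, master_bias, master_dark=None):
    flat = master_flat - master_bias        # remove the bias pedestal from the flat
    flat = flat / np.mean(flat)             # normalise: the flat should only reshape, not rescale
    corrected = light - master_bias         # remove the bias offset from the light
    if master_dark is not None:             # optional, and only if temperature-matched!
        corrected = corrected - (master_dark - master_bias)
    return corrected / flat                 # divide out vignetting and dust shadows
```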

I also had an idea, but I'm not sure if it makes any sense! I was wondering where exactly in an OSC image the 'improvements' from flats & bias frames sit: is it mostly in the luminance, or the RGB, or equally in both?

Because if it's mostly in the luminance, would it not be possible to do the following:

1. Create two stacks. Stack 1 would use lights only, based on converted TIFFs from ACR. Stack 2 would use raw files and also include flats & bias frames (i.e. old school).

2. Then split each stack into synthetic luminance & RGB in PS.

3. Combine the RGB from Stack 1 and the luminance from Stack 2 into a new image.

I haven't done any separate luminance & RGB processing myself yet (it's on my to-do list), but if the improvements from calibration frames mostly lie in the luminance channel, would the above not work?
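Roughly, steps 2 and 3 would amount to something like this. A sketch only: simple Rec.709 weights stand in for the synthetic luminance, and an HSV-style recombination stands in for Photoshop's Lab or luminosity-blend approach; scikit-image is assumed installed.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def synthetic_luminance(rgb):
    """Weighted sum of the channels as a stand-in luminance (Rec.709 weights)."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def combine_l_with_rgb(luminance, rgb_for_colour):
    """Keep hue/saturation from one stack, replace brightness with the other."""
    hsv = rgb2hsv(np.clip(rgb_for_colour, 0, 1))
    hsv[..., 2] = np.clip(luminance, 0, 1)   # V channel <- calibrated-stack luminance
    return hsv2rgb(hsv)

# Usage idea: L = synthetic_luminance(calibrated_stack)
#             result = combine_l_with_rgb(L, acr_tiff_stack)
```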


I'm trying the Hallas method because I only use camera lenses for imaging. I would think using DSS and RAW files should give a similar result; I will try some data that I have already done the Hallas way and run it through DSS.

I have the Lacerta dithering 12 pixels in what they call the snake method, which as far as I can see is just a spiral. I would think the pattern isn't critical as long as it moves 12 pixels.

I don't use darks, flats or bias. The 60D has on-sensor dark current suppression, so according to Roger Clark darks are not needed. Flats I can deal with in ACR or PixInsight.

The 60D does have some banding problems, but these can be reduced with plenty of decent exposures. Normally I'm looking for a minimum of 24 subs at 5 minutes, but will get as many as possible.


That image is getting into the big league. I'd just watch the saturation in the cyans and, maybe, warm up the reds and yellows. (In fact my own M31 drives me crazy because I can't get the cyans down, so it's a case of 'do what I say, not what I do!!'  :rolleyes: ) Heheh. What a game.

Olly


Claran,

Sounds interesting, but maybe you should start a new thread about the approach you suggest to get the attention of the experts.

My main noise problem before I started using ACR was the red-green mottle that Tony talks about in the video. Stacking with as many subs as possible and ACR noise reduction has now reduced my noise to a tolerable level. Also, I do most of my imaging at below 10 °C (last time it was -14 °C), so there is not much dark current, not much use for bias frames, and virtually no hot pixels (when spring comes and the temperature rises, it gets too light up here in Sweden for AP anyhow). I also think that modern DSLRs (and we can probably still count our 60D cameras among those) have much less of those problems and therefore less need for darks at least. Indeed, the camera processor is probably reducing noise even before producing the CR2 files; at least it is doing something, see:

http://www.stark-labs.com/craig/resources/Articles-&-Reviews/CanonLinearity.pdf

Many people, including experienced experts like Olly, now say that darks for DSLRs can do more harm than good if they are not at exactly the right temperature (if you want to know the temperature of each of your subs, I just found this site: http://regex.info/exif.cgi).

I also have a rather dark sky (being 100 km from the nearest large city, Oslo), so I think that helps against gradients.

By the way, here is my latest M31. Also with a shortage of data as I only got 3 subs with the Canon 60Da before clouds moved in. The scope was my ES 5" apo with a 0.79x TS Photoline reducer giving an FL=750mm and f/5.9 on a NEQ6, so unlike you I did not really manage to fit it into the field of view (no flats, darks or bias).

[Attached image: post-44514-0-55427300-1452549974_thumb.j]


I know what you mean, Olly, it does need a bit more on the reds & yellows. Now if only I could do it as easily as I can type it :tongue:

Goran, that's a fantastic image. You've managed the colours very well; it's nice and warm looking, which is just the way I like it.

