
Reducer Myth revisited


Rodd


9 minutes ago, vlaiv said:

Indeed, both my data and your data were gathered without a reducer. I plan to use binning, as it has the same effect as a reducer for this purpose - it provides the same 'aperture at resolution'.

Give me some time. I'm in NYC now. Will be home this evening.


7 hours ago, ollypenrice said:

The Atik 460 tends to have noise in the form of overly dark pixels in the background sky and its star colour is - how can I say - not reliable or convincing and needs working on. It is also hard to stretch up the background sky without blowing out the rest. I end up settling for a darker sky than I prefer. The Moravian 8300 produces a colour balance at parity of exposure which needs considerable adjustment in post processing and the colour data seems curiously weak. I haven't been using it for all that long so it may be my fault, but I've done darks and flats absolutely by the book several times. It's also possible that the filters are not as they should be but they are in a size of which I have no others to swap round.

Olly

That's interesting because I feel like I've had the star color issue with my ASI1600. But I guess it could be my filters. 


Vlaiv's 'Aperture at resolution' seems to be aiming at the same information as my 'flux per pixel', I'd have thought, but I may be wrong. I suppose 'flux per pixel' would be incredibly hard to ascertain from theory, since things like vignetting and image circle will come into it and be all but impossible to quantify.

Could we not just use a formula linking area of aperture with area of pixel? This would give a rough guide to pixel illumination. If we knew the light fall-off heading outwards from the centre of the cone we could make it more precise. What a great website: Vlaiv's pixel illumination calculator. 👋  I've always been full of good ideas for other people. :icon_mrgreen:

Olly
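For what it's worth, a back-of-envelope version of such a calculator is only a few lines of Python. This is a rough sketch with made-up example numbers: the 'illumination index' is simply aperture area (mm²) times the patch of sky (arcsec²) each pixel sees, ignoring vignetting, central obstruction and transmission - exactly the caveats above.

```python
import math

def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel (206265 arcsec per radian)."""
    return 206.265 * pixel_um / focal_mm

def illumination_index(aperture_mm, pixel_um, focal_mm):
    """Rough 'flux per pixel' figure for extended objects:
    aperture area (mm^2) x sky area each pixel covers (arcsec^2).
    Ignores vignetting, obstruction and transmission losses."""
    area = math.pi * (aperture_mm / 2.0) ** 2
    return area * pixel_scale(pixel_um, focal_mm) ** 2

# hypothetical example: a 100 mm scope at f/7 vs reduced to f/5, 3.7 um pixels
for focal in (700, 500):
    print(focal, "mm:", round(illumination_index(100, 3.7, focal)))
```

The reduced configuration puts roughly (7/5)² ≈ 2x the light on each pixel, which is the whole 'reducers are faster' claim expressed as one number.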


I've been reading this thread diagonally, so I'm confused now 😄

I'm on the verge of reducing my Tec 140 to f/5 with the AP reducer; I have a 3.7 micron camera ... or do I keep it at the native focal length? The camera is a sensitive back-illuminated OSC CMOS. Reduced it is 1.1"/px, native 0.78"/px ...

/Yves
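For reference, Yves' two figures follow directly from the usual plate-scale formula, assuming the Tec 140 is 140 mm at f/7, i.e. 980 mm native and 700 mm at f/5. A quick check:

```python
def pixel_scale(pixel_um, focal_mm):
    # 206.265 comes from 206265 arcsec/radian with pixel size in microns
    return 206.265 * pixel_um / focal_mm

print(round(pixel_scale(3.7, 980), 2))  # 0.78 "/px at the native 980 mm
print(round(pixel_scale(3.7, 700), 2))  # 1.09 "/px reduced to f/5 (700 mm)
```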

 


28 minutes ago, vdb said:

I'm on the verge of reducing my Tec 140 to f/5 with the AP reducer ... or do I keep it at the native focal length? Reduced it is 1.1"/px, native 0.78"/px ...

Depends on your sky.


1 hour ago, vdb said:

I'm on the verge of reducing my Tec 140 to f/5 with the AP reducer ... or do I keep it at the native focal length? Reduced it is 1.1"/px, native 0.78"/px ...

Beware of internal reflections. One of our guests asked Yuri Petrunin if he was going to introduce a focal reducer for the 140. He replied, 'No, buy another telescope.' You have been warned!!!

Olly


I have been reading/skimming through this now rather long thread, and I may be wrong, but I think a conclusion could be:

- Use a focal reducer to get your whole object into the field of view.

- Use a focal reducer to get a better match between resolution and pixel size (i.e. close to 1"/pix for most of us) and thereby gain in S/N ratio.

- Do not use a focal reducer if you have your whole object in the field of view and you are around 1"/pix with your scope and camera as they are.

Right?

 


6 minutes ago, gorann said:

- Do not use a focal reducer if you have your whole object in the field of view and you are around 1"/pix with your scope and camera as they are.

 

Not sure if I would go with this last one. Not many people can actually pull off 1"/px resolution. In most cases people are closer to 1.5-2"/px.

Maybe a better way to put it would be:

- Don't use a focal reducer if you are already at your target sampling rate and don't want to trade resolution / image scale for less imaging time, or you want the flexibility to do it later via binning and such.

 


6 minutes ago, vlaiv said:

 

Not sure if I would go with this last one. Not many people can actually pull off 1"/px resolution. ... Don't use a focal reducer if you are already at your target sampling rate ...

Yes, you are right Vlaiv: 1"/pix would only apply on a few rare, very good nights at my site, with SQM 21.0 - 21.6 and variable seeing. Usually my guiding (which varies from 0.4" to 1" depending on seeing) will tell me.


Here is a comparison between reduced and unreduced data.

The first comparison is 2h worth of data taken at 1"/px and then scaled down to 2"/px

vs

1h worth of data taken at 2"/px:

[attached image: split-screen linear-stretch comparison]

The left half of this image is the 2"/px data (1h total) and the right half is the 1"/px downsampled data (2h total). Sorry about the extreme stretch - but it is a linear stretch down to the noise floor, needed to actually assess whether there is any difference.

In my view, 1h of 2"/px data (as if taken with a reducer) is almost as good as 2h of data taken at 1"/px and then scaled down. I can tell that the 2h of downsampled data in fact has a bit less noise (the right part of the image looks like it has a slightly smoother background).

Here is the same image with a proper (yet basic) histogram stretch in Gimp:

[attached image: same comparison, histogram stretched]

To my eye, the histogram stretch confirms the above.

1h of 2"/px is almost as good as 2h of 1"/px downsampled to the same size. It follows that 2h of 2"/px will beat 2h of 1"/px downsampled - or in other words, the reducer wins over unreduced data downsampled to the same size. But we sort of already knew this from my previous comparison of binning vs downsampling.

Let's look at what happens when we upsample the 1h 2"/px image to match the resolution of the 2h 1"/px image. I have to say that for the data above, the ideal sampling rate is slightly less than 2"/px, so we can expect some sharpness loss - but I think it will be minimal and we probably won't be able to tell.

Here is the linear stretch (again extreme, to hit the noise floor). The left part of the image is 1h of 2"/px upsampled, and the right side is 2h of 1"/px.

[attached image: split-screen linear stretch, upsampled vs native]

You will notice that the noise in the upsampled image is more grainy - there simply is no "information" to make up fine-grained noise - but the noise levels are about the same; or rather, we need a histogram stretch to see which noise starts showing first.

[attached image: same comparison, histogram stretched]

Not sure what to make of this one. I think I can see the difference in the size of the noise grain, but if I had not told you that this image is made partly of 1h of data and partly of 2h of data, and that half of it was "shot" at half the sampling rate, would you be able to tell?
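The shot-noise arithmetic behind this comparison can be sketched in a few lines of numpy. This is an idealized toy model (Poisson noise only, no read noise, made-up photon rate), not a reproduction of the data above:

```python
import numpy as np

rng = np.random.default_rng(1)
s = 100.0  # made-up photon count per hour falling on a 1"x1" patch of sky

# 1 h at 2"/px: each coarse pixel integrates four 1" patches
coarse_1h = rng.poisson(4 * s, size=(512, 512)).astype(float)

# 2 h at 1"/px, then 2x2 binned down to the same 2"/px grid
fine_2h = rng.poisson(2 * s, size=(1024, 1024)).astype(float)
binned_2h = fine_2h.reshape(512, 2, 512, 2).sum(axis=(1, 3))

for name, img in (("1h @ 2\"/px", coarse_1h),
                  ("2h @ 1\"/px, binned", binned_2h)):
    print(name, "SNR =", round(img.mean() / img.std(), 1))
```

The binned 2h stack collects twice the photons per output pixel, so it comes out about sqrt(2) ahead - consistent with the slightly smoother background in the right half above.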


3 hours ago, vlaiv said:

Here is a comparison between reduced and unreduced data. ... would you be able to tell?

Not sure this is a great example. I wouldn't say the amount of data is really enough in either case. But I will say this: with the ability to make 1 hour of data look like that, why don't we see more of your pics? Amazing. I can't get that level of detail in 8 hours of data, and your sky is worse than mine.


17 hours ago, ollypenrice said:

Adam and Vlaiv in particular, how about this caveat for the diagrams I posted?

These diagrams assume workable sampling rates for the camera in all cases. If the longer focal length introduces over-sampling, it will add no resolution, and the imager would benefit from the reducer, since it will add speed without reducing resolution. An alternative would be to bin the data when oversampled.

I'm left wondering if we don't need a new unit. Arcseconds per pixel is fine for resolution but don't we really need something indicating flux per pixel? This might be indicated by something like square mm of aperture per pixel, no? 

Olly

 

At the risk of sounding arrogant, this is conceptually a very difficult subject for most people to grasp, as we are proving here, not least because most people are not willing to spend the amount of time considering these matters that we do. Just have a think about the number of concepts you need to grasp in order to answer the seemingly simple question "should I use a reducer".

So I go back to my original point: there is no myth, only people who understand optics and people who do not. If you are sufficiently interested to consider and research the issue, then eventually you will not need a rule of thumb or caveats; you will just understand it. If you don't fall into that category, then I would simplify things into a number of questions that anyone should ask themselves to determine whether they should use a reducer or not.

For example:

Should I use a reducer:

 

1) Is my f-ratio f/6 or lower?

2) Is my pixel scale greater than 1"/px?

3) Does my target fit into my FOV?

4) Am I able to consistently guide to an RMS error of less than my pixel scale (ideally 50% of my pixel scale)?

5) Is the Dawes limit of my telescope significantly larger than my pixel scale?

 

 

If you answer no to any of the questions above, you will probably benefit from using a reducer, so long as doing so does not push your image scale beyond 2"/px when imaging galaxies or 3"/px when imaging diffuse nebulae. If you answer yes to all of the above, there is no pressing need to use a reducer, but you may still benefit from one in some situations.
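Purely as an illustration, the checklist and the 2"/3" limit can be encoded in a few lines of Python. The function below just transcribes the questions as stated and is no substitute for actually understanding them:

```python
def should_use_reducer(f_ratio, scale, target_fits_fov,
                       guide_rms, dawes_limit, reduced_scale,
                       galaxy=True):
    """Encodes the five questions above. Angles in arcsec,
    scales in arcsec/pixel. Illustrative only."""
    answers = [
        f_ratio <= 6.0,            # 1) f/6 or lower?
        scale > 1.0,               # 2) pixel scale coarser than 1"/px?
        target_fits_fov,           # 3) target fits the FOV?
        guide_rms < scale,         # 4) guiding RMS below the pixel scale?
        dawes_limit > scale,       # 5) Dawes limit clearly above the scale?
    ]
    limit = 2.0 if galaxy else 3.0  # 2"/px for galaxies, 3"/px for nebulae
    if all(answers):
        return "no pressing need for a reducer"
    if reduced_scale <= limit:
        return "probably benefits from a reducer"
    return "a reducer would make the image scale too coarse"
```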

 

Will that cover everything? - No.

Will it help someone understand why they should or should not use a reducer? - No.

Will it help someone who wants to image without having to become an expert in optics and sensor performance decide if a reducer is appropriate for their setup? - Maybe.

 

Threads like this are useful to the minority, like Rodd, who want to understand the science / detail and fully optimize their imaging; but when they come to truly understand the factors involved they won't need the rule of thumb or the caveats - they can just work it out for themselves. The majority who don't want to dive that deeply and just want to know they are in the right ballpark are best off forgoing the diagram and asking themselves the above questions.

Bottom line: I know many people who seem to manage perfectly well taking great images without understanding anything more than "aim for between 1"/px and 2"/px while choosing a scope and sensor combination to provide a FOV suited to your target size", so the only thing to add is that if a focal reducer helps you achieve that, then crack on and use one. Don't use one just to get a lower f-ratio at the expense of all other considerations, in the misguided belief that it will lead you down the path towards some kind of imaging utopia, or you will probably be disappointed.

Just my opinion. 

Adam


5 hours ago, Adam J said:

At the risk of sounding arrogant, this is conceptually a very difficult subject for most people to grasp ... Just my opinion.

That's a very good list of questions. I also think your last paragraph makes for an excellent summary. I think the questions might well help someone to get their head round the problem and/or help someone trying to explain it all to them. Clarity is everything.

The diagrams I did were primarily for use with our beginner guests and can be fleshed out during discussion. Like a lot of people I'm very visual in my conceptualizing so I do like a picture to set myself straight. Of course others think differently, some greatly preferring a mathematical approach. Thanks for your reply.

Olly


Here are 2 FITS files of the HH, 100 x 300 sec subs. One is binned 2x2 and one is not. No processing has been done other than crop, DBE and histogram stretch - the histogram stretches were equivalent, or as close as I could make them. I see no difference in the images, other than one being smaller. When you zoom in so both are the same size, they are identical except there is a little less noise in the binned one. Yes, according to PI's SFS the SNR is higher in the binned one - but I certainly can't tell. So what is the benefit?

The binned image should look like a 200 x 300 sec sub image. It does not. I am a bit embarrassed to post these at all, because with no processing they are both inferior to my originally posted 50 sub image.

I agree that Adam's summary is right on. To tell the truth, I use a reducer when I want a larger FOV. But I remember Olly saying that there is no benefit (a.k.a. time benefit... assuming quality remains the same). My question pertains to a subset of this whole discussion... only to speed. Because I will image until I have enough data to render a target "well". My example is imaging an already small target without the reducer - a galaxy that is already a bit small for your liking - you're pushing 1,000mm to the limit. Any reduction in size will change the image from an image of a galaxy to an image of a region including a galaxy. That is why upsampling after using the reducer was introduced. If the entire reason for using a reducer is speed, and the extra FOV is not wanted, there is no choice but to crop out the galaxy and enlarge to the 1,000mm size.

I am not wholly convinced that the answer to whether the galaxy in the example in this response can be captured with the reducer in half the time is unanimous.

Rodd

unbinned

h100-C-Hist-native.fit

 

Binned

h100-c-hist-2x2.fit
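For anyone wanting to check the SFS numbers independently, software 2x2 binning and a crude background-noise estimate take only a few lines of numpy and astropy. The patch coordinates below are guesses and would need pointing at a star-free area, and this is best done on a linear stack rather than the stretched files above:

```python
import numpy as np
from astropy.io import fits

img = fits.getdata("h100-C-Hist-native.fit").astype(float)

# software 2x2 bin (average of each 2x2 block), trimming any odd edges
h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
binned = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# crude noise estimate from the same (hopefully star-free) sky region
print("native background sigma:", img[100:500, 100:500].std())
print("binned background sigma:", binned[50:250, 50:250].std())
# averaging 4 uncorrelated pixels should roughly halve sigma,
# i.e. the same SNR gain as 4x the exposure at the coarser scale
```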


31 minutes ago, Rodd said:

Here are 2 FITS files of the HH, 100 x 300 sec subs. One is binned 2x2 and one is not. ... I am not wholly convinced that the answer ... is unanimous.

How about not doing the histogram stretch and DBE? Just post the original stack while still linear. I can then bin it myself and show the difference, both small and enlarged.


3 minutes ago, vlaiv said:

How about not doing the histogram stretch and DBE? Just post the original stack while still linear. I can then bin it myself and show the difference, both small and enlarged.

The question is whether the difference can be seen under normal viewing. SFS has shown the increase in SNR, so I already know there is a difference. My point is that it's a difference with little value. I certainly can't change my workflow to include sending all my data to you in order to render an image. If the difference can't be seen in the above, doing a pixel-level analysis will be a bit pointless. But I will post when I am able.


1 minute ago, Rodd said:

The question is whether the difference can be seen under normal viewing. ... But I will post when I am able.

OK, no, I was not trying to imply that I'd do some sort of "magic" and the image would look better.

You indicated that you matched the histogram stretches as closely as possible, but in reality they are quite different - when blinking the scaled-down native version against the binned one there is quite a bit of variation in brightness, i.e. a different level of stretch. What I wanted to do is a "split screen" type of image while still linear, and then post that so you can apply the same level of histogram stretch, as it will be one image composed of two halves.

Here is a blinking gif showing the difference in the level of stretch:

[attached gif: blink between the two stretches]


16 minutes ago, vlaiv said:

Here is a blinking gif showing the difference in the level of stretch:

OK - that's because I did a manual stretch on 2 different images. Here are 2 images - JPEGs, because I want to reiterate "difference observable in posted image". Nothing has been done but crop and DBE before the bin - so identical DBE on both. Then I cloned the image and downsampled one. Then I used the STF function stretch to take all of my personal variation out. I have zoomed in on both of these and cannot see a difference. I am looking forward to processing this image because I think I can turn it into a pretty decent image.

I also removed a bunch of subs so that the FWHM of the non-binned image is 2.8", so binning the image gives a resolution that is just about perfect (1.6 x 1.59). The stack now has 66 x 300 sec subs.

BTW - ironically - I can't use the blink tool unless I equalize the size of the images - Ha!

native

[attached image: native stack]

 

Binned 2x2

[attached image: 2x2 binned stack]


46 minutes ago, Rodd said:

Also - I know it won't work magic - but doubling the data should dramatically reduce noise - I don't see that.

I would not call it dramatic. It depends on how far the things you want to present in the image - the signal - sit above the noise.

Doubling the amount of data has a rather straightforward consequence - it improves SNR by a factor of ~1.41 (the square root of two).

It will always have that effect, regardless of the SNR of the original image. If you have an image with an SNR of, let's say, 30, you will end up with an image with an SNR of ~43. Visually there will be less difference than, for example, the case where your original image has an SNR of 4 and you increase that to an SNR of 5.6.

In relative terms it is the same increase in SNR, but if your SNR is already high, visually it will make less difference, while if the SNR is low to start with, such an improvement can be considerable. It can even pull the data from the "unrecognizable" region into the "starting to recognize features" region.

Here is an example of low base SNR:

[attached image: low base SNR example]

Here is an example of higher base SNR:

[attached image: higher base SNR example]

And here is the same image as above (higher base SNR) with a slightly different linear stretch:

[attached image: same example, different linear stretch]

This goes to show that doubling the amount of data can produce different results, depending on the base SNR and also on the level of processing / stretch.

In the first case it makes unreadable text almost readable (it is easier to figure out what it says in the right image). In the second example the target is rendered the same; the difference is only in the quality of the background. In the third example, if the data is carefully stretched, you can see the text, and the background looks almost the same - in fact it looks like there is almost no difference at all.

And in all three cases we used the same increase in the amount of data - doubling it.
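The sqrt-N arithmetic is easy to verify numerically. A minimal sketch, assuming a per-sub SNR of 30 and purely Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)
signal, sigma = 30.0, 1.0  # per-sub SNR of 30

def stacked_snr(n_subs, trials=20000):
    # average n_subs frames per trial, then measure SNR across trials
    subs = signal + sigma * rng.standard_normal((trials, n_subs))
    stack = subs.mean(axis=1)
    return stack.mean() / stack.std()

print(round(stacked_snr(1), 1))  # ~30
print(round(stacked_snr(2), 1))  # ~42.4 (30 x sqrt(2), the "SNR 30 -> ~43" case)
print(round(stacked_snr(4), 1))  # ~60   (4x the data doubles the SNR)
```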


 

55 minutes ago, vlaiv said:

Here is an example of low base SNR:

That's a very nice demonstration. No wonder I need so much data, if doubling only improves SNR by 1.41x. That is why I got the FSQ for f/3 imaging - thinking f/3 would speed things up --- IT DID NOT.

On the other hand... here is a new situation. I processed the unbinned image as best I could - the 66 sub image with the 2.8" FWHM. However, I did not like the ASI1600 reflections, so I decided to turn this into an image of just the Horsehead. So I cropped it appropriately and polished it up. However, Astrobin displayed this crop as very large - even in normal viewing (not on the public page yet) - so large that noisy areas stood out. So I binned 2x2 just to reduce the size on screen. Now, this bin was done on the final image (still in XISF format - not JPEG). In this case would a straight downsample have been more appropriate?

I still think the image needs way more data - I very much regret not using 100 subs - but I felt the fine detail suffered from the 3.5" FWHM. Perhaps this is a case where I need to use the reducer AND bin - even if I have to enlarge by cropping.

unbinned

[attached image: unbinned crop]

Bin 2x2

[attached image: 2x2 binned crop]


Actually - twists and turns. I decided to use the 100 sub stack despite the FWHM, and after the crop I did not bin 2x2 - just reduced the size a bit for better viewing. This is a much stronger image with respect to noise and balance. Resolution-wise I don't see much difference between FWHM 3.5 and 2.8, even though that seems absurd.

[attached image: final crop]


  • 10 months later...

Very interesting thread. I'm in the process of deciding whether to buy the Riccardi x1 or x0.75 flattener (or both). I have a personal view on this myth, but the discussion on sampling got me thinking.

My day job is IT; I've been involved in many subjects but am currently back on deep learning and video. I had been wondering about super-resolution, and whether anyone had applied it to astro imaging (I was considering doing it myself).

Anyway, a quick Google shows some clever folk already have a nifty £29 photo editing tool for Mac. This is from the 2x2 bin; I could have lifted the shadow a lot more, but I prefer more contrast.

I applied super-resolution plus machine-learning denoise to a TIFF exported from the FITS - no PI or Photoshop, just 30 mins of play with the demo.

[attached images: 2x2-binned original and super-resolution result]

Could AI help with upscaling images captured with a reducer?

