RC8 or 250PDS for Galaxies



I'm after a 'budget'-oriented scope for galaxies and have narrowed it down to the RC8 or the 250PDS. I'm leaning towards the RC8 as it is far more compact, at the expense of being a PITA to collimate. Reduced, the RC is approx. 1120mm @ f/5.6, whilst the 250PDS is 1200mm @ f/5; but the RC can also be used natively at 1600mm f/8. With the RC8 I would 100% have to fit a decent R&P focuser, but I'm not sure how the focuser is on the 250PDS? Both would be used with my 294MC Pro (possibly Bin2) and an EAF. I might have to go to an OAG, but we'll see.

Any opinions?

 


I bought a 2nd-hand RC6 for exactly the same purpose, but by the time I had added all the bits to make it a useful imaging scope:

  • Quality focuser, including adaptors to make it fit,
  • Full-length top dovetail for a guidescope (or, I suppose, an OAG),
  • Balance weights, so it does not sit so far forward in the mount's dovetail clamp that it is unsafe,
  • Reducer - yes, I know it is said the scope can be used without one, but after asking on SGL most people seemed to think I would be better off with it.

Then it starts to be not so much the budget scope you originally thought it was.
That is not to say don't get one - just something to factor in cost-wise when you are looking at your budget.

Oh, and I still haven't sorted the collimation out a year later, but that may be more down to me not devoting enough imaging time to it last year - given the overall lack of nights without cloud cover - than an issue with the scope.

Steve

Edited by teoria_del_big_bang

As an RC8 owner, I would say the RC8. Yes, you will probably need a new focuser - but the same will be true of most 250mm Newtonians. I have put a Baader ST on mine, which works well with the 90mm adaptor. The RC8 generally comes with a full top dovetail - I used mine with an ST80 to guide and it worked well; it also helped the balance slightly. I have since changed to an OAG to reduce the overall weight, but the jury is still out on which works best. FWIW, I have not needed any additional weights. Yes, the scope sits at the end of the dovetail, but it is perfectly secure.

I have a reducer, but I normally run at native focal length and bin the data; the reducer also shrinks the image circle, and it is just another potential source of aberration, so native seemed the better option. For the money I don't think anything else competes. Yes, collimation is a bit of a faff, but I don't think it is as bad as most people make out. As long as you are methodical and take it slowly, it's not too bad.

The only reason I will change it is for a large refractor - which is my intended next purchase. But it will be at least 2x the cost, if not more.


I use my 250px for galaxies and find it good value with its large aperture. With my camera it gives 0.64"/px resolution. It is not a compact scope, and I only find it usable for imaging now because it's in a dome, protected from the wind. The standard focuser is OK despite the general criticisms. I've used it with a mono camera and filter wheel, and currently with an OSC, filter drawer and OAG, and it copes fine (used with a Deepsky Dad AF3 autofocuser). I've had it going on 15 years now and it represents good value, IMHO.

PS: the 250px is an older, visual version of the 250PDS.

M81-HaRGB.jpg

M82-RGB-session_1-test1-bin2x2.jpg



I've had a 250PDS for some years and have done some rudimentary imaging with it.

Two points I would make. First, it really does catch the wind - a breeze, even! It can't be used for imaging at exposed sites unless it's all but flat calm. It also needs a good mount; my EQ6 Pro is just about up to the job, I think.

Then there's collimation. I've never been fully satisfied with its collimation. It star-tests fine but never looks absolutely "right" through the Cheshire. Very fiddly at f/4-point-something.

The focuser has been fine and I've used my Sesto Senso 2 autofocuser with it very successfully.

I recently bought a brand-new StellaLyra RC10. I'd say the focuser is less robust than the one on the 250PDS but, again, I had the Sesto Senso working within half an hour of getting the Allen keys out. A new focuser is off the bottom of my shopping list presently.

Collimation is a learning curve. Mine was quite badly off straight out of the box, but a quick tweak of the primary under the night sky with an out-of-focus star got my stars looking fine. I'm sure it will need further tweaking, but I've never been a slave to micro-managed collimation.

I do have a TSRCColli in the post to help with collimation, and a focuser tilt-adjuster ring in my shopping basket at FLO should I need it. Right now I don't think I do - at least not to meet my standards!

The reducer I got at the same time as the scope was quite expensive and turned out to need a specific adapter ring to use with the RC10, which FLO have identified and are currently sourcing.

Now my next issue is guiding. I think I need a longer focal length guide scope than the standard 50mm finder that came with the 250PDS, although initial testing seems to show good guiding. Maybe I'll look at an OAG down the road.

The EQ6 is managing the weight of the whole thing, but I'm considering upgrading that too while I have the financial opportunity. Do my living while I'm alive!

So, in conclusion, both are great scopes, but I've had a hankering for more focal length for a long time and the RCs seem to fit my particular requirements.

 


20 minutes ago, Paul M said:

TSRCColli in the post to help with colimation

I have one for the RC8, although due to the GSO design of the RC8 you cannot use it to collimate the primary - something that I was made aware of after purchase! To be honest it is the most expensive LED light I have ever bought - with hindsight a homemade collimation cap with an LED in it would do the same and cost a LOT less.

24 minutes ago, Paul M said:

really does catch the wind, breeze even

This is certainly an issue and one of my reasons for going down the RC8 route. I did consider a large newtonian - but the RC8 seemed a better choice. Not least because there was a lot of weight hanging off the side with the CC, filter wheel and camera. I use it on an AZ-EQ6, but it can be used on an HEQ5 so certainly more portable - I believe @vlaiv has this set up.


Yep, an HEQ5 will happily carry an RC8, even an unmodified one. Mine is extensively modded, but maybe the most important mod for an RC is replacing the saddle plate with a longer one with surface clamping.

As for the original topic, it comes down to the following: a 10" Newtonian will be larger and will require a coma corrector. The RC8 is lighter and shorter (a smaller mount will handle it) and won't be a wind sail - but people complain about collimation, and it has less aperture. The RC8 does not require any sort of corrector with a 4/3-sized sensor - which is a plus.

Both scopes will require binning. I'd say the RC8 will require x3 most of the time on the ASI294 (native / not unlocked), with its 4.63um pixel size. Natively it will give ~0.59"/px, and that is oversampling. Even if you bin x2, for 1.2"/px, that will still be oversampling 95% of the time. 1.8"/px will be a good sampling rate most of the time.

The 250PDS at f/5 (assuming the CC does not change the focal length) will give 0.8"/px - and you'll want to bin that x2 for 1.6"/px.

If you unlock the ASI294 to full resolution, with its 2.31um pixels, then you'll be able to choose a better bin factor depending on seeing conditions.
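The figures quoted above follow from the standard plate-scale formula. A quick sketch, assuming the nominal 1600mm (RC8) and 1200mm (250PDS) focal lengths - a reducer would of course change them:

```python
# Plate-scale arithmetic behind the figures above (a sketch; the focal
# lengths are the nominal values quoted, and reducers would change them).
def arcsec_per_px(pixel_um: float, focal_mm: float, binning: int = 1) -> float:
    """Image scale in arcsec/px: 206.265 * pixel size (um) * bin / focal length (mm)."""
    return 206.265 * pixel_um * binning / focal_mm

# ASI294MC Pro at its native 4.63 um pixel size
print(round(arcsec_per_px(4.63, 1600), 2))     # RC8 native:     ~0.6"/px (oversampled)
print(round(arcsec_per_px(4.63, 1600, 3), 2))  # RC8, bin x3:    ~1.79"/px
print(round(arcsec_per_px(4.63, 1200), 2))     # 250PDS:         ~0.8"/px
print(round(arcsec_per_px(4.63, 1200, 2), 2))  # 250PDS, bin x2: ~1.59"/px
```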

Either of the two scopes will give you plenty of FOV for galaxies (or should we say for most galaxies, except maybe M31 and M33).

Edited by vlaiv

4 minutes ago, Clarkey said:

I have one for the RC8, although due to the GSO design of the RC8 you cannot use it to collimate the primary - something that I was made aware of after purchase! To be honest it is the most expensive LED light I have ever bought - with hindsight a homemade collimation cap with an LED in it would do the same and cost a LOT less.

I did think long and hard before ordering, but my old laser collimator of the Cheshire-eyepiece design was dead when I dug it out - a broken wire inside the potted LED module, so non-repairable! Not that I ever liked it; I got better results eyeballing down a proper Cheshire. So I thought maybe the TSRC job might be an improvement, it being specifically designed for RCs... Maybe just another expense!

My initial thoughts are that RC collimation gets a lot of bad press. I can see me just doing it by rack of eye, like my Newts.


I'm not sure of the weight/bulk difference between a fully loaded RC8 and an RC10, but I can tell you that the 250PDS is a handful to carry out to and secure in the saddle. Not overly heavy, just bulky. The RC10 is another beast altogether. The primary-heavy (very heavy) OTA is, if I'm honest and finally succumbing to my 58 years, just about my handling limit, and I'm generally able-bodied.

Boy, am I glad I didn't go for the 12" Cassegrain I'd been schmoozing up to during recent months!

Here is my 250PDS in action. Looks lovely and almost sleek on the EQ6, which of course it is. The top of the OTA is well over 6ft above the ground when in the home position as shown.

The MD has already broached the question of which of my 4 scopes I'm selling... 🥺 

20210904_212212.thumb.jpg.289434ea9010af15c26d363acb1d394a.jpg

 


That almost looks like a 300P on the EQ6! I'd recommend fitting an extra aluminium bar to the top of the rings - much easier to mount and carry. Although I carried my 250px, mounted with counterweights, on my EQ6 out of the back double doors into the back garden regularly before I built my observatory. I had a 300PDS which I found impossible to mount safely on my own; it required help from the wife or a friend to tighten the dovetail bolts. I sold that scope after a year.


10 hours ago, vlaiv said:

Both of the scopes will require binning. I'd say that RC8" will require x3 most of the time on ASI294 (native / not unlocked) - with 4.63um pixel size. Natively it will give ~0.59"/px and that is oversampling. Even if you bin x2 - for 1.2"/px, that will also be over sampling 95% of the time. 1.8"/px will be good sampling rate most of the time.

I tend to disagree with that. To image galaxies and small targets, a sampling of around 0.6"/pix should actually be the goal (with a mono sensor; I have no experience with OSC cameras). For a given camera, the ideal focal length is the one that gives such sampling in bin1 (shooting in bin2 with CMOS sensors is, in my opinion, a big waste of FOV and money).

Under light-polluted and rather mediocre European skies (average seeing 1.8" to 2.5"), a sampling of 0.66"/pix gives me this with a 200/800 Newtonian:

 

Image56_registered_crop.thumb.jpg.80e14b1ff64c4305b5d0c27b1a55191a.jpg

And that:

quint_integ_X.thumb.jpg.52b9026343550d029d615dc71e71c4e5.jpg

which look adequately sampled.

If your average local seeing is really bad (above 2.5") and not compatible with such a sampling, then in my opinion galaxy imaging would be a rather frustrating experience.

 

Regarding the scope choice, Newtonians are much easier to collimate than RCs, as there is only one optically active reflecting surface, and it can be done indoors with the right tools. They are also easier to tweak and improve if needed (a mass-produced Newt can be turned into a premium astrograph with the right upgrades). A 200/800 is rather compact and light, but you would need a smaller-pixel camera to reach adequate sampling (an IMX183-based model).

 

 

Edited by Dan_Paris

1 hour ago, Dan_Paris said:

which look adequately sampled.

Ok, I know something might look one way or another, but that is why we have science.

The sampling theorem is a mathematical theorem with a concrete proof, and we have a very good understanding of the physics of light and of the processes that form an image at the focal plane of a telescope. In fact, we don't even need that: we can measure the FWHM of stars in the image, and that is all we need to know about resolution and the required sampling rate.

The above image is oversampled by at least a factor of x2, and that is very easy to show, even without explaining all the math behind it.

You can simply take a crop of your image like this:

baseline.png.081e030bc1c94a8e2efc50933727a946.png

Which contains enough detail and stars and then you can resample it to 50% of its size:

reduced.png.88e815dc45d405ad553e76c94da170b3.png

which will contain all the data in the above image, although the sampling rate is twice as coarse.

How can we tell that it contains all the data of the image above? We can simply enlarge the reduced version and we will get the same image as the baseline - without any loss of detail:

enlarged.png.b74fa5e1db117b869b6b4268a64bafa7.png

It works even though we did this on stretched and processed (probably even sharpened) data; this test is best performed on linear data.

Another way to test is to examine FFT of the image:

image.png.93e9e53f2aad589e6d5acc09263f9282.png

this is a log-stretched FFT of the image above. All the data is concentrated in the middle, at a bit less than half the frequency range. The one outer ring is a consequence of processing / sharpening and is not present in the linear data. In fact, I can repeat the above experiment in the frequency domain: I can completely erase all frequencies above about 4.4 pixels per cycle - like this:

image.png.bcce1f6f9f85df236fcd400f87143421.png

and do inverse FFT and this is result:

image.png.983dae362f7afada3baf4bd36dd44dac.png

Again - no difference in the image (even though we are processing stretched data at 8 bits per channel).

So while it may seem adequately sampled to your eye, it is oversampled by at least a factor of 4.4 / 2 = x2.2; in other words, 0.66"/px * 2.2 = 1.452"/px is the sampling rate that would record all the information in this data.
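The down-and-back-up test described above is easy to reproduce. A sketch with Pillow and NumPy, using a synthetic oversampled Gaussian star (FWHM ~ 8 px) in place of the image crop:

```python
import numpy as np
from PIL import Image

# Synthetic oversampled frame: one smooth Gaussian star, FWHM ~ 8 px,
# so there is no detail anywhere near the pixel scale.
y, x = np.mgrid[0:128, 0:128]
sigma = 8 / 2.355                       # FWHM -> sigma for a Gaussian
star = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * sigma ** 2))
img = Image.fromarray((star * 255).astype(np.uint8))

# Resample to 50% and back to 100% with Lanczos (windowed sinc).
small = img.resize((64, 64), Image.LANCZOS)
back = small.resize((128, 128), Image.LANCZOS)

# Oversampled data survives the round trip essentially unchanged:
# the largest per-pixel difference is only a few counts out of 255.
diff = np.abs(np.asarray(back).astype(int) - np.asarray(img).astype(int))
print(diff.max())
```

A poor interpolation algorithm (nearest neighbour, or a box filter) in place of Lanczos makes the round-trip error much larger, which is the point being made later in the thread.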

 


49 minutes ago, vlaiv said:

Ok, I know something might look one way or another, but that is why we have science.

Sampling theorem is mathematical theorem with concrete proof and we have very good understanding of the physics of light and processes that go into forming an image at focal plane of the telescope. In fact - we don't have to have it, we can measure FWHM of stars in the image and that is all we need to know about resolution and needed sampling.

Above image is over sampled by at least factor of x2 and it is very easy thing to show even without explaining all the math behind this.

You can simply take a crop of your image like this:

baseline.png.081e030bc1c94a8e2efc50933727a946.png

Which contains enough detail and stars and then you can resample it to 50% of its size:

reduced.png.88e815dc45d405ad553e76c94da170b3.png

which will contain all the data in above image, although sampling rate is twice as coarse.

How can we tell that it contains all the data in the image above? We can simply enlarge that reduced version and we will get the same image as base line above - without any loss of detail:

enlarged.png.b74fa5e1db117b869b6b4268a64bafa7.png

It works although we actually did this on stretched and processed (probably even sharpened) data, although this test is best performed on linear data.

 

I understand your thinking, but was still not convinced, so I did the sensible thing and tried it for myself, but extended your experiment and repeated the process three times on the same image. Resampled to half size then resampled to full size, rinse and repeat twice more in GIMP. The result was very clearly nothing like the quality of the original. So, I downloaded your first image file and converted it to TIFF to ensure no compression was being applied and repeated the process and got exactly the same result. My final TIFF image is attached below. I'd appreciate your comments, as clearly something has gone wrong. Is GIMP applying non-linear algorithms to the process or is it simply that your process is not valid? Perhaps you would be kind enough to provide details of the software you used and maybe even run the image through it several times to verify that you get a different result to me.

I've also included a PNG to save everyone having to download the TIFF.

baseline_Resample_3_times.png

baseline_Resample_3_times.tif

Edited by Mandy D
clarification

1 hour ago, vlaiv said:

Ok, I know something might look one way or another, but that is why we have science.

The median FWHM on this image is 1.75".

According to Shannon's theorem, you should sample at least at half of that, i.e. 0.875". But since the data is two-dimensional and the pixels are square, to get adequate resolution along the diagonal you should aim for 1.75/(2*sqrt(2)) = 0.62". This is also science.

 

1 hour ago, vlaiv said:

How can we tell that it contains all the data in the image above? We can simply enlarge that reduced version and we will get the same image as base line above - without any loss of detail:

How did you enlarge? Most rescaling algorithms introduce some sharpening.

And for an accurate comparison, one should rescale the raw files before aligning and stacking them (for the same reason that drizzle integration can resolve sub-pixel details).

 

What I know experimentally is that when I changed my camera from the ASI1600 (1.04"/pix) to the ASI183 (0.66"/pix), there was an obvious improvement in resolution (without changing the telescope). I have taken tens of images with both, so I am pretty confident this was not down to atmospheric conditions.

 

Edited by Dan_Paris

16 minutes ago, Mandy D said:

I understand your thinking, but was still not convinced, so I did the sensible thing and tried it for myself, but extended your experiment and repeated the process three times on the same image. Resampled to half size then resampled to full size, rinse and repeat twice more in GIMP. The result was very clearly nothing like the quality of the original. So, I downloaded your first image file and converted it to TIFF to ensure no compression was being applied and repeated the process and got exactly the same result. My final TIFF image is attached below. I'd appreciate your comments, as clearly something has gone wrong. Is GIMP applying non-linear algorithms to the process or is it simply that your process is not valid? Perhaps you would be kind enough to provide details of the software you used and maybe even run the image through it several times to verify that you get a different result to me.

Yes, that will happen if a sub-optimal resampling algorithm is used.

Try using IrfanView and its resampling - choose Lanczos resampling to repeat your experiment.

Lanczos resampling is the closest thing to the sinc resampling needed to fully restore properly sampled data. It is in fact a windowed sinc method (sinc is just the sin(x)/x function: https://en.wikipedia.org/wiki/Sinc_function - it is the function used to restore sampled data to the underlying continuous function).
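For reference, the windowed-sinc kernel being discussed is easy to write down. A minimal numpy sketch of the Lanczos-a kernel and a 1D interpolation built on it (illustrative only, not IrfanView's exact implementation):

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, zero outside.
    np.sinc is the normalised sinc, sin(pi*x)/(pi*x)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_interp(samples, t, a=3):
    """Reconstruct a value between samples by summing kernel-weighted neighbours."""
    n = np.arange(len(samples))
    return float(np.sum(samples * lanczos_kernel(t - n, a)))

# A well-sampled sine (10 samples per cycle) is reconstructed accurately
# between the sample points.
samples = np.sin(2 * np.pi * 0.1 * np.arange(32))
value = lanczos_interp(samples, 15.5)
print(value, np.sin(2 * np.pi * 0.1 * 15.5))  # the two agree closely
```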

In fact, the choice of interpolation algorithm is very important, and we should always use advanced algorithms if we don't want our data to suffer low-pass filtering. I've written about this topic before here on SGL:

 


9 minutes ago, Dan_Paris said:

According to Shannon's theorem, you should sample at least at half than that, i.e. 0.875". But since it is two-dimensional data and that the pixel are square, to get adequate resolution along the diagonal you should aim for 1.75/(2*sqrt(2))=0.62". This is also science.

That is not what Shannon's sampling theorem states.

The Nyquist-Shannon theorem states that for perfect reconstruction of a band-limited signal, one should sample at twice the highest frequency component of that signal.

One needs to examine the signal in the frequency domain and determine its cut-off frequency. It is related to the FWHM if we assume a Gaussian profile - but not as simply as "half that value".

You are correct that one can sample at a higher frequency than that - nothing wrong happens to the signal itself as far as sampling is concerned, and that is called oversampling. It has no ill effects on signal reconstruction. It does, however, have ill effects on the SNR of the image, as you needlessly spread light over more pixels than necessary, reducing the signal per pixel, so SNR suffers.

As far as pixel diagonals are concerned - you are wrong.

The diagonal is longer than the side of the square, so sampling twice per diagonal corresponds to an x1.4142 longer wavelength - and a longer wavelength is a lower frequency.

The highest sampling rate in the 2D case with a rectangular lattice is set by the side of the rectangle. The ideal sampling pattern in 2D is not a rectangular grid but a hexagonal one; since we don't have sensors with hexagonal patterns, we use square pixels instead, and we need to make sure we sample with two pixels per cycle of the highest frequency component in the X and Y directions (twice per wavelength of that highest frequency component - i.e. at twice its frequency).
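The fold-back the theorem protects against is easy to demonstrate numerically. A sketch: a tone sampled at more than two samples per cycle keeps its frequency, while one sampled below that rate aliases onto the very same spectral bin.

```python
import numpy as np

n = np.arange(64)
ok = np.sin(2 * np.pi * 0.4 * n)        # 0.4 cycles/sample: 2.5 samples/cycle, fine
too_fast = np.sin(2 * np.pi * 0.6 * n)  # 0.6 cycles/sample: < 2 samples/cycle

def peak_freq(s):
    """Frequency (cycles/sample) of the strongest bin in the magnitude spectrum."""
    return np.argmax(np.abs(np.fft.rfft(s))) / len(s)

# The undersampled 0.6 tone has folded back to ~0.4: both peak at the same bin.
print(peak_freq(ok), peak_freq(too_fast))
```

This is exactly why sampling coarser than two samples per cycle of the highest frequency present cannot be undone afterwards: the folded frequencies are indistinguishable from real ones.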

19 minutes ago, Dan_Paris said:

How did you enlarge ? Most rescaling algorithms introduce some sharpening.

I had sharpening turned off for the Lanczos interpolation - it was plain interpolation that was used.

 

20 minutes ago, Dan_Paris said:

And for an accurate comparison  one should rescale  the raw files before aligning and stacking them (for the same reason that drizzle integration allows to resolve sub-pixel  details).

Drizzle works only on undersampled data, and is probably the most misused algorithm in amateur imaging.

We can also do this on a linear raw sub - the results will be the same.


12 hours ago, tooth_dr said:

That almost looks like a 300p on the EQ6!  I'd recommend fitting an extract aluminum bar to the top of the rings, much easier to mount and carry.  Although I carried my 250px mounted with CWs on my EQ6 out of the back double doors into the back garden regularly before I built my observatory.  I had a 300PDS which I found impossible to mount safely on my own, and required help from the wife or a friend to tighten the dovetail bolts.  I sold that scope after a year.

I use my 300pds on my EQ6 - works fine 🙂 It is a bit of fun and games to mount, I admit, and it did take me a while to work out a technique for lifting it onto the EQ6, getting it into the dovetail and tightening it at the right balance by myself.

As another option, have you considered a C9.25? I have one as well as the 300pds. With the 6.3 reducer/flattener on the C9.25 you have 1300mm, and tbh I find the quality I get is pretty much the same as with the 300pds.

Sure, the PDS is a bit quicker at f/4.9 vs f/6.3 but, as you say, it (or even the tiny(!) 250pds) is a beast compared to an RC8 or C9.25.

And of course SCTs are a doddle to collimate too, so there's that.

stu


13 minutes ago, vlaiv said:

Yes, that will happen if sub optimum resampling algorithm is used.

Try using IrfanView and its resampling - choose Lanczos resampling to repeat your experiment.

Lanczos resampling is closest thing to Sinc resampling needed to fully restore properly sampled data. It is in fact windowed Sinc method (Sinc is just SinX/X function: https://en.wikipedia.org/wiki/Sinc_function - it is used to restore sampled data to actual function).

In fact - choice of interpolation algorithm is very important, and we should always use advanced algorithms if we don't want our data to suffer low pass filtering. I've written about this topic before here on SGL:

 

Thanks for the explanation. I will read up on all that you have linked me to, as I like to understand the science behind what I am doing.

I downloaded IrfanView and ran the image through Lanczos resampling, down and back up, three times, with similar results to what I had in GIMP. I then applied a further two resamples to confirm it. The PNG file is attached. The TIFF is very similar, so I'm not posting it.

Have I got something wrong?

Apologies if this appears to be doubting what you say. I understand and accept your assertion that the data should be completely retained, as the higher resolution was not necessary to begin with, but that just does not appear to be happening.

baseline_Resample_Irfanview.png


I just did a simple experiment with the linear stacked image of the quintet.

The original image has a median FWHM of 1.75".

If I resample to 50% and then back to 100% (using the Lanczos algorithm), the FWHM rises to 1.97".

 

16 minutes ago, vlaiv said:

but not as simply as "half that value".

If I take 1.8" as the ideal sampling rate, as you suggest, don't the stars look rather undersampled?

 

1617407600_Capturedcrandu2023-01-0412-59-27.thumb.png.e73aaa512ba37ad806a1d33d23f0d382.png

 


13 minutes ago, Mandy D said:

Have I got something wrong?

Your Lanczos-resampled image has slightly different dimensions from the other two - you probably used different scaling-down and scaling-up factors?

But it is in any case much sharper than the other example:

image.png.2076efdc7ff0494a72c60512da4d29f9.png

Left is the GIMP-resampled one and right the Lanczos-resampled one - the difference is obvious. Try applying Lanczos 3 times, but keep the dimensions the same instead of changing them.

16 minutes ago, Mandy D said:

Apologies if this appears to be doubting what you say. I understand and accept your assertion that the data should be completely retained as the higher resolution was not necessary to begin with, but it just does not appear to be happening.

No harm in doubting - that is how we arrive at truth and real understanding. Everything must have an explanation.


7 minutes ago, Dan_Paris said:

The original image has a median FWHM of 1.75".

If I resample at 50% and then back to 100% (using Lanczos algorithm) the FWHM raises to 1.97".

That is to be expected. For a FWHM of 1.75", the adequate sampling rate is about 1.1"/px (FWHM / 1.6 is close to the optimum sampling rate), so if you resize to 50% and enlarge back to 100% you will lose some data, since a sampling rate of 1.32"/px (twice as coarse as 0.66"/px) corresponds to a FWHM of 1.32 * ~1.6 = ~2.1" (you actually got 1.97", which is close enough, given that 1.6 is a rounded approximation rather than an exact number).
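The rule of thumb used here can be jotted down directly. A sketch (the 1.6 divisor is the rounded approximation mentioned, not an exact constant):

```python
# FWHM-based sampling rule of thumb from the discussion above:
# optimal sampling rate ~ FWHM / 1.6 (1.6 is a rounded approximation).
def optimal_sampling(fwhm_arcsec: float) -> float:
    """Coarsest sampling rate ("/px) that still records a star of this FWHM."""
    return fwhm_arcsec / 1.6

def recorded_fwhm(sampling_arcsec_px: float) -> float:
    """Finest FWHM (") that a given sampling rate can fully record."""
    return sampling_arcsec_px * 1.6

print(round(optimal_sampling(1.75), 2))  # ~1.09 "/px for 1.75" FWHM stars
print(round(recorded_fwhm(1.32), 2))     # ~2.11" FWHM at 1.32"/px (x2 of 0.66"/px)
```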

11 minutes ago, Dan_Paris said:

f I take 1.8" as an ideal sampling rate, as you suggest,  stars look rather undersampled ?

I can't say, because you used nearest-neighbour interpolation.

image.png.2bcea05f8dcd929182919cd3b7fc5705.png

This does not look like an undersampled star. It is at least 4-5 px in both width and height. I don't know how much the data is stretched or where the FWHM sits - but you only need ~1.6 px per FWHM to properly sample a star.

If you want to see how that star's signal really looks - use an appropriate interpolation.


3 minutes ago, vlaiv said:

I can't say because you used nearest neighbor interpolation.

 

This was with Lanczos.

 

3 minutes ago, vlaiv said:

This does not look like undersampled star. It has at least 5-4px in both width and height. I don't know how much data is stretched and where is FWHM - but you only need ~1.6 px per FWHM to properly sample the star.

FWHM measurement on this rescaled image gives 1.07 pix.

 

 


1 minute ago, Dan_Paris said:

This was with Lanczos.

I was referring to the enlarged image:

image.png.58146434301f9ecdc96895697516a69c.png

this was enlarged using nearest-neighbour interpolation - stars are not "pixelated" even when undersampled; that is an artifact of the interpolation algorithm.

3 minutes ago, Dan_Paris said:

FWHM measurement on this rescaled image gives 1.07 pix.

What do you use for FWHM measurement? Different software will give you different results.

Try measuring with AstroImageJ. I believe it gives the most accurate results.

In any case - if your image has a FWHM of ~1.6 px, it is properly sampled.

