
Optical resolution in DS imaging.


Recommended Posts

What are your thoughts? I've been comparing a stack of M101 images taken at 0.93 arcseconds per pixel (call it Stack 1) with another stack (Stack 2) taken at 0.92 arcseconds per pixel. A trivial difference, I think we'll agree. The resolution of the Stack 1 image is perceptibly inferior. But...

In the case of Stack 1 the optical resolution of the telescope was 1.09 arcseconds, so inferior to the resolution of the camera.

In the case of Stack 2 the optical resolution was 0.8 arcseconds, so slightly superior to the resolution of the camera.

None of this strikes me as surprising. Indeed it's exactly what I'd expect.

Agree? Disagree?

Olly


50 minutes ago, Demonperformer said:

From a position of total ignorance I would expect the limitation to be the worst figure. So 1.09 (1) vs 0.92 (2). As you say, no surprise that stack 1 is (15%?) inferior.

My thinking also, but I'm not well up on sampling theory.

Olly


37 minutes ago, ollypenrice said:

What are your thoughts? I've been comparing a stack of M101 images taken at 0.93 arcseconds per pixel (call it Stack 1) with another stack (Stack 2) taken at 0.92 arcseconds per pixel. A trivial difference, I think we'll agree. The resolution of the Stack 1 image is perceptibly inferior. But...

In the case of Stack 1 the optical resolution of the telescope was 1.09 arcseconds, so inferior to the resolution of the camera.

In the case of Stack 2 the optical resolution was 0.8 arcseconds, so slightly superior to the resolution of the camera.

None of this strikes me as surprising. Indeed it's exactly what I'd expect.

Agree? Disagree?

Olly

I think that the critical item of missing information here is the seeing conditions that produced the stack 1 and stack 2 images. I don't believe that the diffraction limits of the two scopes would be a significant effect, since you are not sampling at a sufficient rate with either of the two cameras.

To explain: there's a lot of expert disagreement on what the critical sampling rate is, with figures varying between x2.4 and x3.5. If you take the 1.09" optical resolution telescope, then you need to sample at somewhere between 1.09/2.4 and 1.09/3.5, i.e. between 0.45 and 0.31 arcseconds per pixel. Since you are nowhere near this, the optical resolution of the telescope would be an insignificant effect compared to the seeing conditions. So I'd expect seeing to dominate the results, rather than the diffraction limits of the telescopes. To me, it seems likely that stack 1 would have been gathered in slightly better seeing conditions than stack 2. The way to confirm this is to go to the individual sub-frames of the two stacks and measure the average FWHM of each frame.
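
A minimal sketch of that arithmetic, for anyone who wants to plug in their own numbers (the x2.4-x3.5 factors and the 1.09" figure come from the paragraph above; the function name is just for illustration):

def target_pixel_scale(optical_res_arcsec, factor_low=2.4, factor_high=3.5):
    # return the (finest, coarsest) pixel scale in arcsec/pixel needed to
    # critically sample a given optical resolution, for the range of
    # oversampling factors quoted above
    return optical_res_arcsec / factor_high, optical_res_arcsec / factor_low

finest, coarsest = target_pixel_scale(1.09)
print(f"sample between {finest:.2f} and {coarsest:.2f} arcsec/pixel")
# roughly 0.31 to 0.45 arcsec/pixel - far finer than the ~0.93"/pixel in use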

A second possibility is that the stacking algorithms or parameters used to generate the two stacks are different. Having experimented with the various data rejection options in CCDstack, where you can see which pixels are getting rejected in which sub-frames, I've found that the choice of algorithm and the associated parameters can noticeably impact the final stacked result.

My final thought is that perhaps the mount or mounts behaved slightly differently during the acquisition of stack 1 and stack 2.

Alan


To get the most out of the two optical set-ups, you need to sample at much higher sampling frequencies. Seeing and guiding errors will add to the effective point-spread function, so although the optics might make a very slight difference, it is hard to judge what, if anything, the effect of differences in resolving power is. Only if the stacks were obtained at the same time, from the same location, on the same mount (assuming no differences in flexure between the optical systems, i.e. guiding is identical), and in the same optical band (seeing is wavelength dependent), could we distinguish the effect of optical resolution from all the other factors.


As Demon says, the camera sensor can't collect data at a higher resolution than the optics, I'd have thought? And, yes, the atmosphere will be limiting anyway. I guess you could try and measure the actual resolution in each of the images by some means?

Louise


7 minutes ago, michael.h.f.wilkinson said:

To get the most out of the two optical set-ups, you need to sample at much higher sampling frequencies. Seeing and guiding errors will add to the effective point-spread function, so although the optics might make a very slight difference, it is hard to judge what, if anything, the effect of differences in resolving power is. Only if the stacks were obtained at the same time, from the same location, on the same mount (assuming no differences in flexure between the optical systems, i.e. guiding is identical), and in the same optical band (seeing is wavelength dependent), could we distinguish the effect of optical resolution from all the other factors.

Thanks Michael. It is impossible to know the effects of the seeing in comparing the stacks from different nights but I believe we can ignore guiding because both mounts (Mesus) were guiding with an RMS well below half of the imaging scale. Filters were luminance of the same make.

From a theoretical point of view, would you expect the optical resolution to impose a limit on the final image when it is poorer than the pixel scale resolution? I guess that's my question.

Olly


15 minutes ago, Thalestris24 said:

As Demon says, the camera sensor can't collect data at a higher resolution than the optics, I'd have thought? And, yes, the atmosphere will be limiting anyway. I guess you could try and measure the actual resolution in each of the images by some means?

Louise

Apparently one's sampling interval should be 2.5 - 3x finer than the smallest resolvable feature. In practice the latter will depend on the seeing, the optics, and the mechanics, so it won't necessarily be the same as the theoretical value. I don't know how noticeable a difference small variations from the ideal values actually make. It's not a problem I ever face here in Glasgow....


22 minutes ago, alan4908 said:

I think that the critical item of missing information here is the seeing conditions that produced the stack 1 and stack 2 images. I don't believe that the diffraction limits of the two scopes would be a significant effect, since you are not sampling at a sufficient rate with either of the two cameras.

To explain: there's a lot of expert disagreement on what the critical sampling rate is, with figures varying between x2.4 and x3.5. If you take the 1.09" optical resolution telescope, then you need to sample at somewhere between 1.09/2.4 and 1.09/3.5, i.e. between 0.45 and 0.31 arcseconds per pixel. Since you are nowhere near this, the optical resolution of the telescope would be an insignificant effect compared to the seeing conditions. So I'd expect seeing to dominate the results, rather than the diffraction limits of the telescopes. To me, it seems likely that stack 1 would have been gathered in slightly better seeing conditions than stack 2. The way to confirm this is to go to the individual sub-frames of the two stacks and measure the average FWHM of each frame.

A second possibility is that the stacking algorithms or parameters used to generate the two stacks are different. Having experimented with the various data rejection options in CCDstack, where you can see which pixels are getting rejected in which sub-frames, I've found that the choice of algorithm and the associated parameters can noticeably impact the final stacked result.

My final thought is that perhaps the mount or mounts behaved slightly differently during the acquisition of stack 1 and stack 2.

Alan

Yes, this is the heart of my question. Thanks for your succinct reply. We do have the ability to run parallel tests on the same night and should really do so at some point. As I said in my reply to Michael, I believe both mounts exceed the requirements of the test. Of course, another variable is the quality of the telescope itself but there is no way round that. It has to be a different telescope because that is the point of the test.

The trouble is, I'm just not convinced that the theory is working.

While FWHM is a good indicator of seeing quality, my 'end product' - and that which interests me in doing these tests - is perceptible detail in finally processed deep sky objects. This is not a scientifically quantifiable commodity but it is something which certainly exists as a perception and on which we would all probably agree. So far as I'm aware nobody is yet posting images taken with very small scopes and very small pixels which match those taken in much larger scopes and yet this should now be possible. That's assuming the seeing to be the limiting factor and the sampling theory correct. If we put a 2.4 micron pixel camera in an 80/480 refractor we have a 1.5" optical resolution which won't become the limiting factor until we are sampling between 0.63"PP and 0.42"PP. Since we are imaging at 1"PP the optical resolution should, therefore, be irrelevant. On a night of 1 arcsecond seeing this setup should, according to theory (or my interpretation of it!) match literally anything else? Say a 350mm with 0.33" resolution sampling at 0.64"PP. But does anybody really believe this?
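
For the record, here is a rough sketch of the numbers in that paragraph, using the usual small-angle formula for pixel scale and the empirical Dawes limit (whether Dawes or Rayleigh is the 'right' criterion is a separate argument, so treat the 1.5" figure as approximate; the function names are just for illustration):

ARCSEC_PER_RAD = 206265

def pixel_scale(pixel_um, focal_length_mm):
    # image scale in arcsec/pixel from pixel size and focal length
    return ARCSEC_PER_RAD * pixel_um * 1e-6 / (focal_length_mm * 1e-3)

def dawes_limit(aperture_mm):
    # empirical Dawes resolution limit in arcsec
    return 116.0 / aperture_mm

print(pixel_scale(2.4, 480))  # ~1.03"/pixel, i.e. roughly 1"PP
print(dawes_limit(80))        # ~1.45", i.e. roughly the 1.5" quoted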

In experiments I've been able to replicate the 'perceptible detail' of a 350mm image in one captured with 140mm of aperture but, in both cases, the optical resolution exceeded the pixel scale. So far, matching this with a smaller scope and smaller pixels trying to resolve below the optical resolution of the optics has eluded us. And, unless I've missed it, it has eluded everyone else as well. Mind you, I might easily have missed it...

Olly


If the term optical resolution means one of the commonly quoted resolving-power criteria for aperture size - the Dawes limit or the Rayleigh criterion - then its role in DS imaging goes like this:

This criterion is usually stated in terms of the separation of two equal-strength monochromatic point sources whose angular distance equals the Airy disk radius.

Here we can begin to relate this value to other parameters in DS imaging.

For any sort of imaging where the image is blurred by the acquisition system (and for astronomy I include the seeing contribution here), the optimum sampling rate is defined by the combined PSF of the acquisition system.

We can approximate the combined PSF of the image in terms of three PSFs: the seeing PSF, the Airy PSF of the optics, and the summary guide-error PSF. These three PSFs combine by the mathematical operation of convolution. To put it in more understandable terms: light from a star is very much like a mathematical dot - it has no radius / width (for our purposes). It first passes through the atmosphere and is distorted/blurred by the seeing - blurring is another term for convolution (and there is a precise mathematical definition of how convolution is performed). Then it enters the aperture and is blurred again by the Airy pattern for that aperture. Finally it falls on the sensor - but its position on the sensor is determined by the guide error, so it is blurred / convolved a third time, by the guide-error PSF.

So we see how the final PSF of a star gets its shape. It is important to notice that each of the three PSFs that contribute to the final PSF can make it larger (blurrier) by itself being larger. Poorer seeing - larger stars. Worse tracking - larger stars. And, most importantly for this discussion, smaller aperture - larger stars.
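
A rough 1-D numerical sketch of that convolution chain - a Gaussian for seeing, an Airy pattern for the optics, and a Gaussian for guide error - just to show how each term fattens the final star profile (the seeing and guiding figures here are made up for illustration; this is not a full model):

import numpy as np
from scipy.special import j1

arcsec = np.linspace(-5, 5, 2001)   # 1-D angular axis in arcseconds

def gaussian(theta, fwhm):
    sigma = fwhm / 2.355
    g = np.exp(-0.5 * (theta / sigma) ** 2)
    return g / g.sum()

def airy(theta, aperture_m, wavelength_m=550e-9):
    x = np.pi * aperture_m * np.radians(theta / 3600.0) / wavelength_m
    x[x == 0] = 1e-12                # avoid division by zero at the centre
    a = (2 * j1(x) / x) ** 2
    return a / a.sum()

def fwhm_of(profile):
    above = arcsec[profile >= profile.max() / 2]
    return above.max() - above.min()

seeing  = gaussian(arcsec, fwhm=1.2)           # assumed 1.2" seeing
guiding = gaussian(arcsec, fwhm=2.355 * 0.5)   # assumed 0.5" RMS guide error
optics  = airy(arcsec, aperture_m=0.14)        # 140 mm aperture

combined = np.convolve(np.convolve(seeing, optics, 'same'), guiding, 'same')
print(f"combined star FWHM ~ {fwhm_of(combined):.2f} arcsec")
# make any one of the three wider (worse seeing, worse guiding, smaller
# aperture) and the combined FWHM grows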

It is important to understand that optical resolution (aperture size) is just one of three components that determine the final resolution / resolvable detail. It is also the one component that imposes a hard limit on what can be resolved (although, as has been pointed out above, that limit lies quite a bit beyond the resolutions we are discussing here).

We can actually write down an approximate formula that gives the characteristics of the resulting PSF from certain parameters (seeing in arc seconds, guide error, aperture size). From this formula it can be seen that the small differences in aperture normally found among amateur setups are usually "swamped" by the other two parameters - making the difference in the resulting PSFs even smaller than the aperture difference would suggest.

So is optical resolution important? Yes it is, but to what extent? In good seeing and with a good mount it becomes more and more important in determining the final resolved detail.

By the way - determining the sampling resolution from the optical resolution alone is not correct (as we have seen above); determining it from the final PSF is a good approach, but people seem not to understand how to apply the Nyquist criterion properly. The Nyquist criterion is tied to the frequency domain, and most people try to apply it in the spatial domain (for example by using the Airy disk size). For well-defined functions (like the Airy pattern and the Gaussian PSF) there is a correspondence between spatial features (like the Airy disk radius, or the Gaussian sigma or FWHM) and certain frequencies / frequency cutoff points - but one should derive those analytically to apply the Nyquist criterion properly in terms of spatial features.
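
As an example of what deriving it analytically looks like for the Airy case: a diffraction-limited, incoherent system has a frequency cutoff of D/lambda cycles per radian, so the Nyquist sampling interval is lambda/(2D) - which works out to the Airy disk radius divided by 2.44, presumably where the ~x2.4 figure quoted earlier comes from. A quick check (550 nm assumed):

ARCSEC_PER_RAD = 206265

def nyquist_scale(aperture_m, wavelength_m=550e-9):
    # sampling interval lambda/(2D) implied by the D/lambda frequency cutoff
    return ARCSEC_PER_RAD * wavelength_m / (2 * aperture_m)

def airy_radius(aperture_m, wavelength_m=550e-9):
    # first-minimum (Airy disk) radius, 1.22 * lambda / D
    return ARCSEC_PER_RAD * 1.22 * wavelength_m / aperture_m

D = 0.2                                   # 200 mm aperture
print(airy_radius(D))                     # ~0.69" Airy radius
print(nyquist_scale(D))                   # ~0.28"/pixel
print(airy_radius(D) / nyquist_scale(D))  # 2.44 samples per Airy radius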

 

 


14 minutes ago, ollypenrice said:

Yes, this is the heart of my question. Thanks for your succinct reply. We do have the ability to run parallel tests on the same night and should really do so at some point. As I said in my reply to Michael, I believe both mounts exceed the requirements of the test. Of course, another variable is the quality of the telescope itself but there is no way round that. It has to be a different telescope because that is the point of the test.

The trouble is, I'm just not convinced that the theory is working.

While FWHM is a good indicator of seeing quality, my 'end product' - and that which interests me in doing these tests - is perceptible detail in finally processed deep sky objects. This is not a scientifically quantifiable commodity but it is something which certainly exists as a perception and on which we would all probably agree. So far as I'm aware nobody is yet posting images taken with very small scopes and very small pixels which match those taken in much larger scopes and yet this should now be possible. That's assuming the seeing to be the limiting factor and the sampling theory correct. If we put a 2.4 micron pixel camera in an 80/480 refractor we have a 1.5" optical resolution which won't become the limiting factor until we are sampling between 0.63"PP and 0.42"PP. Since we are imaging at 1"PP the optical resolution should, therefore, be irrelevant. On a night of 1 arcsecond seeing this setup should, according to theory (or my interpretation of it!) match literally anything else? Say a 350mm with 0.33" resolution sampling at 0.64"PP. But does anybody really believe this?

In experiments I've been able to replicate the 'perceptible detail' of a 350mm image in one captured with 140mm of aperture but, in both cases, the optical resolution exceeded the pixel scale. So far, matching this with a smaller scope and smaller pixels trying to resolve below the optical resolution of the optics has eluded us. And, unless I've missed it, it has eluded everyone else as well. Mind you, I might easily have missed it...

Olly 

I think that you are expecting too much of a visual difference in detail depending on resolved "detail".

What our eye perceives as detail is composed of a large number of frequencies. Cutting off or attenuating high frequencies will not make a feature/detail disappear. It will just make it more blurry - or it will lose contrast. By doing a non-linear stretch of the original linear data you are altering contrast. So you are already modifying the very thing that represents detail loss - a modification that can have either a positive or a negative impact on perceived sharpness (I honestly have no clue how it behaves; it is a very broad subject and I've not even touched it, so I can't offer any insight there).

I'll try to do some examples in a bit to show what detail is actually lost and under which circumstances it is perceivable to the human eye as degradation.


2 minutes ago, vlaiv said:

I think that you are expecting too much of a visual difference in detail depending on resolved "detail".

What our eye perceives as detail is composed of a large number of frequencies. Cutting off or attenuating high frequencies will not make a feature/detail disappear. It will just make it more blurry - or it will lose contrast. By doing a non-linear stretch of the original linear data you are altering contrast. So you are already modifying the very thing that represents detail loss - a modification that can have either a positive or a negative impact on perceived sharpness (I honestly have no clue how it behaves; it is a very broad subject and I've not even touched it, so I can't offer any insight there).

I'll try to do some examples in a bit to show what detail is actually lost and under which circumstances it is perceivable to the human eye as degradation.

And this goes to the heart of my doubts. Thanks. It is, above all, in the processing of the data that I feel unconvinced by all the theory I've read about resolution.

Olly


Maybe other characteristics of the sensor besides pixel size also come into the equation? In the case of (brighter) stars, 'star bloat' also? Processing too, indeed. And what does resolution practically mean in the case of distributed objects, e.g. nebulae? I'm going for a walk now... :)

Louise


A perfectly focused point source in a 'perfect image' should show an Airy disk for every star, regardless of its brightness.

Long-exposure images show each star as a round shape whose size is related to its brightness - nothing like an Airy disk, but rather a Gaussian-like profile. The brighter the star, the brighter the centre of the 'curve', but also the brighter its extremities (which is why ALL stars bloat when you stretch an image).

The same must apply to any small details.

To see an Airy disk, you either need to use short exposures (lucky imaging) or adaptive optics, or be above the atmosphere.

... just seen @vlaiv's post about point spread functions - he got there first.


One observation: my 'lucky imaging' planetary and lunar images taken with a 150mm scope resolve details at the theoretical resolution of the scope. Easily done with lunar images - see the comments at the bottom of this post:

(Sadly) comparing my 6" lunar and planetary images with those taken with 9.25", 11" and 14" scopes clearly shows how they can resolve a great deal more detail. Note that an image with twice the spatial resolution appears four times as detailed to the eye.


15 minutes ago, Stub Mandrel said:

Note that an image with twice the spatial resolution appears four times as detailed to the eye.

Yes, but certainly in DS imaging there's what I've heard called 'empty resolution.' In comparing 14 inch images at 0.64"PP with 140mm images at 0.9"PP I've seen a fair bit of it. If I critically examine the larger image at full size I find it contains no identifiable features absent from the smaller, so it's bigger without being more detailed.

Olly


1 hour ago, Thalestris24 said:

Maybe other characteristics of the sensor besides pixel size also come into the equation? In the case of (brighter) stars, 'star bloat' also? Processing too, indeed. And what does resolution practically mean in the case of distributed objects, e.g. nebulae? I'm going for a walk now... :)

Louise

Star bloat is not related to the sensor. It is in fact a combination of a processing artifact and the fact that a star represents the total / combined PSF of the optical system (including seeing) - which is very similar to a Gaussian PSF (a very good approximation).

What does processing have to do with it? Here is a simple explanation: it depends on SNR and the level of stretch. There are other artifacts that contribute to / look like "classical" star bloat, although they are different in nature - like reflection halos, or atmospheric vapor scattering (similar to the moon halos we sometimes get).

Look at these for explanation:

[attached images]

Now I'm going to take an image and stretch it linearly (with some clipping to emphasize the point) - and make the same stars look tight, bloated, and extremely bloated:

[attached image]

Why is it more visible in brighter stars? Because there is more light and so better SNR, and the Gaussian levels in the "wings" are higher (although the FWHM is roughly the same) - so when you stretch, you will easily clip into the "wings" of the Gaussian, making the star look big.
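
To make the stretching point concrete, here is a toy sketch: two Gaussian stars with identical FWHM but a 20x brightness difference, pushed through a linear stretch with progressively lower white points (i.e. harder clipping). The numbers are arbitrary - the point is only that the bright star's wings cross the visibility threshold much further out:

import numpy as np

r = np.linspace(0, 10, 1001)      # radius in pixels
sigma = 3.0 / 2.355               # both stars have FWHM = 3 pixels

def star_profile(peak):
    return peak * np.exp(-0.5 * (r / sigma) ** 2)

def stretch(profile, white_point):
    # linear stretch with clipping: everything above white_point saturates
    return np.clip(profile / white_point, 0, 1)

def apparent_radius(profile, threshold=0.05):
    # radius out to which the star is still 'visibly' above background
    visible = r[profile > threshold]
    return visible.max() if visible.size else 0.0

faint, bright = star_profile(0.05), star_profile(1.0)
for wp in (1.0, 0.1, 0.01):       # harder and harder stretch
    print(wp, apparent_radius(stretch(faint, wp)),
              apparent_radius(stretch(bright, wp)))
# at every stretch level the bright star 'looks' bigger, and both grow as the
# stretch clips deeper into the Gaussian wings - the FWHM never changed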


50 minutes ago, vlaiv said:

Star bloat is not related to the sensor. It is in fact a combination of a processing artifact and the fact that a star represents the total / combined PSF of the optical system (including seeing) - which is very similar to a Gaussian PSF (a very good approximation).

What does processing have to do with it? Here is a simple explanation: it depends on SNR and the level of stretch. There are other artifacts that contribute to / look like "classical" star bloat, although they are different in nature - like reflection halos, or atmospheric vapor scattering (similar to the moon halos we sometimes get).

Look at these for explanation:

[attached images]

Now I'm going to take an image and stretch it linearly (with some clipping to emphasize the point) - and make the same stars look tight, bloated, and extremely bloated:

[attached image]

Why is it more visible in brighter stars? Because there is more light and so better SNR, and the Gaussian levels in the "wings" are higher (although the FWHM is roughly the same) - so when you stretch, you will easily clip into the "wings" of the Gaussian, making the star look big.

I think I just meant that brighter stars look bigger - as above! (There must be a minimum FWHM which can't be zero?) :) I was thinking that if you looked at a magnified portion of a sub and found two very close stars that are just resolvable, their separation would give an indication of the true resolution. You could look up (in a planetarium program?) what their actual separation is supposed to be. You could repeat this across a particular image and average over several subs (without processing), then repeat on the stacked, processed image. I imagine the measured resolution would vary between targets, different parts of the sky, different nights/seeing etc. But if you did quite a few you could get some mean values and compare them to the theoretical ones.


33 minutes ago, ollypenrice said:

Yes, but certainly in DS imaging there's what I've heard called 'empty resolution.' In comparing 14 inch images at 0.64"PP with 140mm images at 0.9"PP I've seen a fair bit of it. If I critically examine the larger image at full size I find it contains no identifiable features absent from the smaller, so it's bigger without being more detailed.

Olly

Now, we are talking about a 14" vs a 5.5" scope. The critical planetary resolution will be x2.5 higher. But let's assume average guiding and average seeing (0.5" RMS and 1.2" FWHM) and see how much difference in "resolution" there is going to be (note that x2.5 higher resolution does not mean x2.5 more detail - I'll give examples of what loss of resolution looks like).

For these two scopes the FWHM of a star would be 2.7" (for the 14" scope) and 3.18" (for the 140mm scope), a ratio of only x1.18 - much less than the x2.5 of the critical resolution. This is because at these aperture sizes the Airy disk term is not the dominant one under the proposed guide error and seeing.

OK, now back to showing the difference in resolution. First I'm going to show that x2.4 per Airy disk radius is the critical sampling rate (not x2 or x3) and set up a framework for examining actual detail loss (instead of relying on visual inspection).

Let's create a detailed image and produce three versions - one sampled at x2 per Airy disk radius, a second at x2.4 and a third at x3.

This is a simulation for a 200mm scope - 1.28" Airy disk diameter, with sampling resolutions of 0.32"/pixel, 0.2667"/pixel, and 0.2133"/pixel.

First, a comparison of the three images scaled to the same size (that of the highest-resolution one at 0.2133"/pixel):

[attached image]

Click on the image for full resolution; from left to right: x2, x2.4, x3 sampling - even here it is obvious that the differences are minute when judged by eye.

So how do we actually compare the three? By using the actual difference in pixel values; here is an example:

[attached image]

On the left is the difference between the x2 and x3 sampling, and on the right the difference between x2.4 and x3. The differences are very small and heavily stretched, but notice that the level in the right image is much smaller. In fact the difference in the right image is mostly due to resizing artifacts (a non-integer scaling factor from x2.4 to x3 with cubic interpolation), which show up as a bigger difference away from the centre - in the centre there is essentially no difference beyond calculation error. With the x2 sampling, the error is present all over the frame (there are some resizing artifacts here too, and the real difference is smaller than the picture suggests - but unlike the second case it is present across the whole image).

Just to make clear the level of error between the x2.4 and x3 sampling:

[attached image]

This is the histogram of the central section (the one least affected by resizing artifacts). The standard deviation of the difference is 0.0009, while the signal level is close to 0.7 in the brightest parts - so the ratio is of the order of 1000:1.

This also shows one important thing: the difference in detail is not in missing letters - it is in the sharpness of the letters, or the "readability of the text".
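
For anyone wanting to try this kind of comparison on their own data, a rough sketch of the idea (not the exact procedure used above - the image names and the 1.5 scale factor are just placeholders, and scipy's cubic zoom stands in for whatever resampling you prefer):

import numpy as np
from scipy.ndimage import zoom

def compare_resampled(img_lo, img_hi, scale):
    # upsample the coarser image, crop both to a common size, and report the
    # residual standard deviation against the peak signal level
    resampled = zoom(img_lo, scale, order=3)   # cubic interpolation
    h = min(resampled.shape[0], img_hi.shape[0])
    w = min(resampled.shape[1], img_hi.shape[1])
    diff = resampled[:h, :w] - img_hi[:h, :w]
    return diff.std(), img_hi[:h, :w].max()

# e.g. img_x2 at 0.32"/px vs img_x3 at 0.2133"/px -> scale = 0.32 / 0.2133 = 1.5
# residual_std, signal_peak = compare_resampled(img_x2, img_x3, 1.5)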


I am in agreement with Vlaiv that the visual perception of resolution is only subjectively related to the sampling (unless it is grossly under-sampled).

The Nyquist sampling and reconstruction criteria are never met in practice (e.g. the detector is finite), and signal reconstruction is never even attempted in standard image-processing software.

We rely on the eye/brain to do the reconstruction by viewing the image at a distance to avoid us seeing the pixellation. 

It has been shown for spectra (https://arxiv.org/pdf/1707.06455.pdf) that you need to over-sample significantly to avoid, for example, 10-20% wavelength errors. It also depends on the S/N you want to achieve.

In DS imaging I would think that the stars, being point sources, are the objects which exhibit the highest resolution, and so prior to any processing they directly reflect the PSF of the imaging system. Potentially a high-contrast edge might be more visually obvious, but the varying size of the stars distracts from this even when they have the same FWHM.

I would personally err on the side of caution with sampling, given my experience with spectra, but I reluctantly go with Olly that for eye/brain-based viewing what you see is what you appreciate, regardless of scientific measurement.

Regards Andrew


23 minutes ago, vlaiv said:

This is a simulation for a 200mm scope - 1.28" Airy disk diameter, with sampling resolutions of 0.32"/pixel, 0.2667"/pixel, and 0.2133"/pixel.

 

24 minutes ago, vlaiv said:

from left to right: x2, x2.4, x3 sampling

Vlaiv, I must be being thick but I thought that sampling 1.28" at x2 = 0.64"/pixel etc. What am I doing wrong / misunderstanding?

Regards Andrew


1 minute ago, andrew s said:

 

Vlaiv, I must be being thick but I thought that sampling 1.28" at x2 = 0.64"/pixel etc. What am I doing wrong / misunderstanding?

Regards Andrew

Ah, yes, I often confuse myself with these :D (I need to constantly check what I'm working with) - Airy disk diameter vs Airy disk radius. 1.28" is the Airy disk diameter, and x2, x2.4 and x3 are in relation to the Airy disk radius - so in reality x4, x4.8, x6 of the Airy disk diameter (or 0.64 / 2 = 0.32).


Really interesting stuff and thank you to vlaiv for the detailed explanations, but I think I'm going to need to come back and read this again with a fresh brain :)

James


19 minutes ago, andrew s said:

The Nyquist sampling and reconstruction criteria are never met in practice (e.g. the detector is finite), and signal reconstruction is never even attempted in standard image-processing software.

Actually I believe there is reconstruction based on Nyquist sampling when it comes to enlarging images: Lanczos resampling.

With Nyquist point sampling, the sampled signal contains the original signal up to the critical frequency, but also higher harmonics of the sampled frequencies - in the frequency domain the same frequency response repeats past the critical frequency. To restore the "true sampled signal" without these higher harmonics (which actually represent the pixelation you see when using nearest-neighbour interpolation in the 2D case), one needs an appropriate filter - the sinc filter in this case ( https://en.wikipedia.org/wiki/Sinc_filter ), which acts as an ideal low-pass filter in the frequency domain. Lanczos resampling uses a sinc-kernel convolution when resampling, producing the "true sampled signal" with the fewest artifacts.
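
For reference, the Lanczos-a kernel is just the sinc function windowed by a wider sinc; a minimal sketch (the helper name is mine):

import numpy as np

def lanczos_kernel(x, a=3):
    # Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, zero outside.
    # np.sinc is the normalised sinc, sin(pi*x) / (pi*x)
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# resampling with this kernel (e.g. Pillow's Image.resize with
# Image.Resampling.LANCZOS) approximates the ideal sinc reconstruction
# while keeping the kernel finite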

There is a slight problem (I'm not even sure it is a problem - it might turn out that the maths holds regardless and everything is the same): imaging is not point sampling, it is surface-integral sampling over each pixel. So I'm not sure how much difference, if any, that makes to the theoretical approach - I need to research that one.

