

Imaging a pulsar. Is it possible with amateur equipment?


dokeeffe


I started getting curious about this and might try it next winter when M1 is high in the sky again.

The M1 pulsar's wiki page https://en.wikipedia.org/wiki/Crab_Pulsar has an image showing the pulse.

The pulsar has a V mag of about 16 and a frequency of about 30Hz. 

I think it could be possible for an amateur to image the pulse directly. I'm thinking that with a good planetary camera binned 4x4, taking unfiltered ~10ms exposures and grabbing lots of them, it may be possible to reconstruct its 30Hz 'light curve'.



This is actually a very interesting idea!

Provided that pulsars do pulse in the visible (I'm not sure about that; I thought it was predominantly an X-ray thing), you could use short exposures. You don't need to go down to 10ms - 60Hz would be enough to reconstruct the curve (twice the highest frequency, per Nyquist), so ~16ms is ok.

The only problem that I can see is stacking. You would need to make 60 stacks, with each frame going into its "designated" stack: the 1st frame into the 1st stack, the 2nd into the 2nd, the 61st back into the 1st, the 62nd into the 2nd, the 121st into the 1st, and so on (frame number modulo 60 -> stack number). This is a bit harder than it seems, since frames can be dropped and there can be small delays between frames (the sampling rate is not ideal). So I would record a timestamp with each frame and stack based on time rather than frame number. A frame may also be shifted from its ideal time, so I would interpolate: take the middle of the exposure as the reference time and let the actual stacking time determine a linear interpolation between the two neighbouring stacks (this has the benefit of lowering the noise as well). A minimal sketch of the time-based part is below.
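A minimal sketch of that time-based stacking in Python (illustrative names; it assumes each frame is a 2D numpy array with a capture timestamp in seconds, and that the repeat period and number of stacks are chosen up front - exactly which period to use is refined later in the thread):

```python
import numpy as np

def fold_frames(frames, timestamps, period, n_stacks=60):
    """Accumulate each frame into a phase bin ("stack") based on its timestamp."""
    stacks = np.zeros((n_stacks,) + frames[0].shape)
    counts = np.zeros(n_stacks)
    for frame, t in zip(frames, timestamps):
        phase = (t % period) / period            # position within the period, 0..1
        idx = int(phase * n_stacks) % n_stacks   # which stack this frame belongs to
        stacks[idx] += frame
        counts[idx] += 1
    # average each stack (assumes every bin received at least one frame)
    return stacks / counts[:, None, None]
```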

There is also the issue of aligning the frames in each stack. At V mag 16 you will get hardly any photons per exposure from the star, so an align routine would fail - it would have nothing to align on (no centroid or anything like that). The approach you might use is to guide really tightly so there is no need to align frames at all - they are already aligned, so just stack them up as they are. For this to have a chance of working you also need to consider your sampling resolution versus the seeing disk size: ideally you want the whole seeing disk + tracking error to land on a single pixel. That gives the best SNR per frame - all the signal goes into one pixel, with no light spread lowering the signal per pixel. In reality it is unlikely that you will align your optics so that the light hits a single pixel unless you really undersample, so given a 2-3" seeing disk I would aim for a sampling resolution of >4-5"/pixel. If you do not have a suitable camera - one with large pixels or hardware binning - just bin the frames in the processing phase.
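For reference, the usual pixel-scale relation (a textbook formula, not specific to any setup in this thread) is

$$\text{scale}\,[''/\text{px}] = 206.265 \times \frac{\text{pixel size}\,[\mu\text{m}]}{\text{focal length}\,[\text{mm}]}$$

so, for example, 5.86 µm pixels at 1000 mm focal length give ~1.2"/px, and 4x4 binning lands at ~4.8"/px - inside the suggested range.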

I don't think there is software that can do all of the above, so you would probably need either to write it yourself (if you have programming skills) or to make it a joint effort - I could write the software, since I like the idea, if you are interested in collaborating, of course. But the first thing we would need is a kind of "feasibility study". My main concerns at the moment are:

1. Do pulsars pulse in the visible at all?

2. How much data do we need in order to get acceptable SNR - to actually see the variation in brightness?


Looks like it should be possible, from the WP article:

Quote

Jocelyn Bell Burnell, who co-discovered the first pulsar PSR B1919+21 in 1967, relates that in the late 1950s a woman viewed the Crab Nebula source at the University of Chicago's telescope, then open to the public, and noted that it appeared to be flashing. The astronomer she spoke to, Elliot Moore, disregarded the effect as scintillation, despite the woman's protestation that as a qualified pilot she understood scintillation and this was something else. Bell Burnell notes that the 30 Hz frequency of the Crab Nebula optical pulsar is difficult for many people to see.[17][18]

I think the difficult bit will be that its magnitude is 16.5, which is about the limit of what I can image from home - so a real challenge for short exposures with a relatively small scope.


Here's a video of the Crab Nebula pulsar:

Crab_Lucky_video2.gif

Taken from here, it's about half-way down the page. However, this was taken with a professional telescope (from the description it's not quite clear which one). I'm not sure how much aperture would be required to catch it in a short exposure.

However, I think it's possible to listen to pulsars using amateur radio telescopes.


1 minute ago, Stub Mandrel said:

Looks like it should be possible, from the WP article:

 

Ok, that solves issue number one :D

If it can be identified visually, that means the variation in brightness is enough for the eye to notice (well, some eyes, I'm sure :D ). I'm not sure we can guesstimate the difference in intensity from this, but it would help in setting the SNR required to produce a reasonable curve.

A mag 16.5 star produces ~0.22 photons per second per cm2 in the V band.
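For anyone checking that number, it follows from the commonly quoted V-band zero point of about $8.8\times10^{5}$ photons s$^{-1}$ cm$^{-2}$ for a mag 0 star (a textbook value, not from this thread):

$$N \approx 8.8\times10^{5} \times 10^{-16.5/2.5} \approx 0.22\ \text{photons s}^{-1}\,\text{cm}^{-2}$$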

So let's do the math for an 8" scope (that's the one I have; a bigger scope would be better, of course): 255 cm2 of effective aperture, say 0.4 combined optical QE (camera QE, mirror reflectivity / light loss), and a 16ms exposure time. That gives us ~0.375e from the star per exposure.

Given a camera with ~1.5e read noise, stacking for example 10000 frames into one stack (one out of 60) gives a signal of 3750e and read noise of 150e. Shot noise would be 61e, so the total SNR would be ~23.

Now, if we assert that to see the flicker the brightness must change by more than 10%, the SNR of the brightness difference ((signal bright - signal dark) / noise) is in this case 2.3, which I would call a lower bound (ideally we should aim for >5) - so we need ~4x as many frames per stack.

Ok, so that's 40000 frames per stack, 60 stacks, at 60 frames per second - 40000 seconds to record, about 11 and a half hours with an 8" scope - doable in two or three nights. (We really need to be careful with the time calculations if we record over multiple sessions, in order to assign frames to stacks properly.)

Now let's do a bit more math. A 128x128 pixel sub-frame at 16 bits per pixel, times 60 x 40000 frames, comes to about 75GB - so storage is not such a huge issue if we work with a really small FOV, just capturing the star.
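The arithmetic above as a quick sanity-check script (the inputs are just the assumptions stated in this post):

```python
import math

flux = 0.22        # photons/s/cm^2 from a V=16.5 star
area = 255.0       # cm^2 effective aperture of the 8" scope
qe = 0.4           # combined optical + camera efficiency
t_exp = 0.016      # 16 ms per frame
read_noise = 1.5   # e- per frame
n = 10000          # frames per stack

signal = flux * area * qe * t_exp * n            # ~3600 e (the post rounds 0.375e/frame -> 3750e)
noise = math.sqrt(signal + (read_noise * math.sqrt(n)) ** 2)
print(f"SNR per stack: {signal / noise:.1f}")    # ~22-23

# storage: 128x128 px, 16 bit, 60 stacks x 40000 frames
print(f"storage: {60 * 40000 * 128 * 128 * 2 / 2**30:.0f} GiB")  # ~73 GiB, i.e. 'about 75GB'
```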

I think this is feasible if my math is correct.


8 minutes ago, Knight of Clear Skies said:

Here's a video of the Crab Nebula pulsar:

Crab_Lucky_video2.gif

Taken from here, it's about half-way down the page. However, this was taken with a professional telescope (from the description it's not quite clear which one). I'm not sure how much aperture would be required to catch it in a short exposure.

However, I think it's possible to listen to pulsars using amateur radio telescopes.

That looks great, and it answers another question: from the animation we can guess that the brightness change is more than 50%, so I think we are good with just 10000 frames per stack.


Here's a thread on Cloudy Nights. It sounds like some people have managed it by putting a rotating wheel with a gap in front of the camera, synced to the pulsar's rotation, so the camera only sees one part of the cycle for the duration of a long exposure. Using this technique it might be easier to image a pulsar with a slower rotation rate, depending on how bright it is.


2 hours ago, vlaiv said:

need either to write it yourself (if you have programming skills), or it could be joint effort - I could write a software, since I like the idea, if you are interested in collaboration of course

I'm a software engineer by trade, and yes, I would be interested in collaborating. I use astropy a fair bit for data reduction of photometric image data, but this sounds much more challenging :-)


Just had a walk and did some thinking about all of this - my math is just wrong.

The math itself is ok, but the reasoning is not. We need to think this through again.

For some reason I was using 1s as the period for stacking - that just doesn't make any sense :D

There are two approaches to this. The animation Knight of Clear Skies posted got me thinking.

The first approach is just "measuring" the curve, or rather fitting it. In theory we only need twice the sample rate to achieve this - 16ms exposures and two stacks - but that holds only if the function is periodic (mathematically speaking, it repeats itself precisely every period) and its highest harmonic is the base frequency - no higher harmonics. And it is not a very exciting thing to do - we would be fitting a sine function through two points.

The other option is to admit we don't know the harmonics of the curve, or whether it is a simple sine at all, and try to establish the actual curve by interpolating it. Then we need more points - all within the ~33ms period.

In this case we should choose how many samples we want within that interval: more samples means better temporal precision, but of course shorter exposures and a harder job getting good SNR. If we opt for 10 samples, for example, we are talking about a ~3.3ms exposure (instead of the original 16ms), and in this particular case 10 stacks.

Because the per-frame signal is below the read noise, there is no longer a straightforward dependence of SNR on the number of frames, total duration and number of stacks - shorter exposures mean more frames, and therefore more read noise, per unit of total time.

So while the math above still holds (in terms of SNR), the basic assumptions about the number of frames, their duration, and the number of stacks were wrong. I will create a small spreadsheet with the variables (scope size, wanted temporal resolution, total imaging time) that calculates SNR and storage requirements, so we can play around and get a better idea of whether it will work and under which conditions.

One more thing is crucial for this to succeed. (Well, there are two approaches here as well, but the second one is really complicated - it can be done, but it requires much more computing time and clever programming.)

For this approach to work we really need good data on the pulsar's period. "30 ms or thereabouts" is just not good enough :D

Let me explain. Say we want the temporal alignment of frames to be within a margin of 10% - meaning no frame in the sequence may be shifted from its true position in time by more than 10% of its duration. For the above example of 10 samples, each frame must stay within ~0.3ms of its true position. Now, we start at a point in time and keep adding periods, so any error in the assumed period accumulates. If we record 10000 periods x 10 frames, the accumulated error over 10000 periods must stay under 0.3ms, so the assumed period must not be more than 0.3 / 10000 ms off the true one. We need the true period to be precise to something like 6 decimal places (in ms)!
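Written out (my notation, same numbers as above):

$$\Delta P < \frac{0.1 \cdot P / N_{\text{samples}}}{N_{\text{periods}}} = \frac{0.1 \times 33\,\text{ms} / 10}{10000} \approx 3.3\times10^{-5}\,\text{ms} \approx 33\,\text{ns}$$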

So "30ms-ish" won't do - we need something like 30.323141ms for it to work. Now, we don't strictly need to know the precise value up front, but in that case we need a computer program that searches for the precise period.

In its basic form it would try stacking at different period lengths and pick the one that gives the highest peak values (if the period is off, at some point we start stacking peaks and troughs together because of the accumulating phase shift, and that lowers the peak value). A sketch of such a search is below.
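A minimal sketch of that brute-force search (hypothetical names; it assumes a per-frame flux - e.g. the pixel sum in a small aperture - has already been extracted from each frame, with `times` and `flux` as numpy arrays):

```python
import numpy as np

def search_period(times, flux, p0, half_width=1e-6, n_trials=2001, n_bins=10):
    """Fold per-frame fluxes at trial periods around p0 (all in seconds) and
    return the period whose folded curve has the highest peak-to-trough contrast."""
    best_p, best_score = p0, -np.inf
    for p in np.linspace(p0 - half_width, p0 + half_width, n_trials):
        bins = ((times % p) / p * n_bins).astype(int) % n_bins
        hits = np.maximum(np.bincount(bins, minlength=n_bins), 1)
        curve = np.bincount(bins, weights=flux, minlength=n_bins) / hits
        score = curve.max() - curve.min()   # a wrong period smears out the pulse
        if score > best_score:
            best_p, best_score = p, score
    return best_p
```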

 


Just checked - it looks like we have good data on the period. Wikipedia states that the period is:

33.5028583ms

It does not, however, state the measurement error, so I would guess we are good for a total of about 1 million frames (split into stacks - depending on the wanted resolution, either 10 x 100000 or 20 x 50000).

Hm, actually that is not quite true. It holds only if the frames sit precisely next to each other with no gaps in between. In reality there will be a small gap between frames, and if we need to shoot on different nights that must be taken into account too: time keeps running whether we are shooting or not, so any difference between the assumed and true period keeps adding up with every period that passes, recorded or not.

I think that because of this we will be forced to do it in one single run without large gaps, unless we come up with a way to somehow resync the periods. If we had decent SNR per image this would not be necessary - we could do frequency analysis and simply measure the correct period (that is what programs for measuring a mount's Periodic Error do, for example - given enough good samples they produce a frequency spectrum).


Hm, got interesting results :D

For my scope, the best camera would be the ASI290 mono. I ran the numbers for the ASI174, ASI1600, ASI178 and ASI290.

For all but the ASI178 I used a QE of 0.5; for the ASI178 I used 0.6 - simply because it is more sensitive, being the only one with a back-illuminated chip.

Calculations are done for the V band, but in reality more signal would be collected with a reflector and no filtering (full spectrum). I ran the calculations for 10, 15, 20 and 30 sample points.

10 sample points is feasible - it gives ~10 SNR for 6 hours of recording time. I capped the frame rate to be realistic - for example, 10 samples gives an exposure time of 3.3ms, i.e. 300 fps, and some cameras cannot deliver that even on a small ROI.

The spreadsheet is attached - please check my math, and my logic too :D

There are a couple of things not included in this calculation (though they could be, with appropriate values), like atmospheric extinction over the 6-hour recording period (it can be modeled with an average magnitude). I also included a percentage of dropped frames - that can be used to model wasted time between frames - but I applied it only to the natural fps, not the camera-limited one (my reasoning being that the camera-limited rate will be achieved if the hardware supports it - SSD disk, good CPU/RAM).

I also used a naive SNR calculation: I assumed that measuring the signal is just adding the signal from all frames and all pixels for a single sample (so stacking, then summing the pixels under the seeing disk - the ones that received light). Changing the seeing disk size does alter the result - better seeing spreads the light over fewer pixels. Tracking error / Airy disk size is folded into the seeing disk (they are not separate parameters).

Conclusion: yes, 10 or even 15 samples is feasible in 6 hours, but a higher sampling rate would require a longer recording time.

pulsar_snr.ods


2 hours ago, vlaiv said:

temporal alignment of frames to be within margin of 10%

 

Yes, as you mentioned in the other post, there will be gaps between the frames due to the overhead of readout etc. I think FireCapture can timestamp the frames, but I'm not sure how accurate it is. A Fourier transform of the flux over time would give us the period of the pulsar.

I think any software would have to be able to reduce the raw data by 'folding' it based on the frequency of the pulsar.
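Since astropy was mentioned earlier, here is a hedged sketch of that periodogram idea using astropy's Lomb-Scargle implementation (shown on synthetic data; whether a real per-frame flux series carries enough signal for this is exactly the concern raised later in the thread):

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 60, 50000))   # 60 s of unevenly spaced frame timestamps
flux = 1 + 0.5 * np.sin(2 * np.pi * 29.85 * times) + rng.normal(0, 2, times.size)

freq, power = LombScargle(times, flux).autopower(minimum_frequency=25,
                                                 maximum_frequency=35)
print("best frequency:", freq[np.argmax(power)], "Hz")  # ~29.85 if the signal is strong enough
```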

PS I have a Celestron C11 + ASI120MM (also a LodestarX2 and an Atik383L+, but these would be useless for this, except maybe for guiding).

Derek

 


1 hour ago, vlaiv said:

Hm, got interesting results :D

Here is spreadsheet in attachment, please check my math, and logic also :D

Just checked the spreadsheet - it looks good. An SNR of 10 with a 200mm scope means it's well within reach. I put in my scope details and got an SNR of 12; then I reduced the mag to 15 (a pure guess for unfiltered) and got 48!

There is software for the analysis of time-series variable star data (http://www.peranso.com/), but it looks like it's a Windows application. We might get some ideas from it though. See this post, where Dave Smith used it to build up a light curve of an eclipsing variable.

 


1 hour ago, dokeeffe said:

Yes, as you mentioned in the other post, there will be gaps between the frames due to the overhead of readout etc. I think FireCapture can timestamp the frames, but I'm not sure how accurate it is. A Fourier transform of the flux over time would give us the period of the pulsar.

I think any software would have to be able to reduce the raw data by 'folding' it based on the frequency of the pulsar.

PS I have a Celestron C11 + ASI120MM (also a LodestarX2 and an Atik383L+, but these would be useless for this, except maybe for guiding).

Derek

 

It is not the gaps that are the problem - it is the precision of each sample's location within the period. Each frame needs a timestamp, and yes, SER has that if I'm not mistaken (it should be appended as a table at the end of the file).

What I was implying is that if we miss the period by a bit, we might end up putting frames onto the wrong stack - lowering precision or even completely blurring out the curve.

Ok, it's probably best to explain what I mean with an example.

Let's say we have a function with a period of 99ms, consisting of zeros and a single 1 at position 55ms, and that we take 10ms samples. We also assume (wrongly) that the period is 100ms - missing the true period by just 1%.

In the first run we will have something like:

5ms (middle of the frame) - 0

15 ms - 0

....

55 ms - 1

...

...

95 ms - 0

So we stack the frame that holds the 1 onto the sixth stack.

Now we move on to the frame at position 5005ms (99x50 + 55) - it will also have the value 1, but we will stack it onto:

5005 mod 100 = 5ms - i.e. the first stack.

So instead of ending up on the sixth stack, that frame ends up on the first.

This is because we assume the period is 100ms and that all frames align on that boundary, so the periods go 100+100+100+100..., but in reality they go 99+99+99+99 - each period we are off by the same small amount, and with every period that passes that small amount adds to the offset. So we must ensure that over the whole summation the error stays small enough that it never adds up to a significant frame offset - it must not push a frame over into the next slot (or the previous one, if the error is negative). If we stack something like 100000 periods and we don't want a frame to move by more than, say, 10% of its duration - 0.3ms - then the difference between the true and assumed period must be less than 0.3ms / 100000 (each error is that small, so 100000 of them added together stay under 0.3ms).
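In other words, after k periods the accumulated offset is

$$\delta_k = k\,(P_{\text{assumed}} - P_{\text{true}}),$$

which in the toy example above reaches 50 x 1ms = 50ms by the 50th period - half a period, i.e. a completely wrong stack.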

So yes, we must be careful about the correct period. Also, as I mentioned, I don't think we can use frequency analysis to pinpoint the correct period and fit the data - simply because each individual frame will have very low signal, well below the noise, and I don't think we can pull anything meaningful out of a single frame for frequency analysis. Only when stacked (and a lot of them) will real signal start to emerge from the noise - and to stack we must already have the period, so we can put each frame onto the proper sample/slot; otherwise we just smear the data.

But anyway, I don't think we need to concern ourselves with that - we can just use the known period. It is given to 7 decimal digits (on the wiki; we might find an even more precise value), and the wiki article mentions a very small drift (on the order of nanoseconds), so the period must be known to very high precision.

Back to equipment - I did look at the ASI120: it has a good pixel size but rather poor read noise, and there is a frame-rate cap (according to the ZWO website, even on USB 3.0 it's 320x240@254fps). This does not matter for stacking - there can be time between the frames as long as we have a good timestamp (to a fraction of a ms); we might miss certain samples in a single period, but given enough periods we will end up with roughly the same number of frames per sample. The only impact of a lower fps is fewer collected samples and lower SNR (than otherwise possible).

Mind you, we don't need to shoot at the telescope's native focal length - a focal reducer would be beneficial: it boosts SNR, since the light is spread over fewer pixels and less read noise goes into the mix. To see the effect, just plug the reduced focal length into the spreadsheet instead of the native one.

As for capture software, we have a couple of options: FireCapture, SharpCap, or a custom-written capture application.

The latter might be interesting if we opt to stack "on the fly" instead of creating a massive capture file. That may not be a good idea, though - it is better to keep the raw data in case we later decide to change the reduction process.

With FireCapture and SharpCap we need to do some tests. One thing I'm concerned about is the stability and level of the bias signal. When working in "video" mode, as opposed to long exposure, chips tend to self-calibrate, and I've also seen exposures where the histogram is clipped on the left - something we want to avoid, since we need all of the data (both signal and noise) for a proper stack. I mention the bias level because we have to make sure that, for example, after a meridian flip while we are not recording, we end up with the same calibration for the data. We will also need to capture bias frames and flat frames. Flats are very important here, I think - not only because of dust and the like, but also because of something I've noticed: since CMOS chips have per-pixel amplifiers, there is a certain level of difference between pixels. This shows clearly in flats I took with a small sensor - no vignetting is present, but there is significant variation across the surface (and it repeats between flats).

16-bit vs 8-bit is an issue as well. I know how a 12-bit / 14-bit ADC behaves when written to 16 bits, but the 8-bit output uses a 10-bit ADC, and I think it might be clipping the lower 2 bits. Again, we have to check whether this has an impact, depending on gain - if the gain is such that all the data sits in the top 8 bits of the 10-bit ADC, then cutting off the two lowest bits is not a problem. But as I've mentioned, I would like to avoid quantization noise as much as possible - ideally we would get the electron count directly: 1e per ADU.
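For reference, the standard quantization-noise figure (a textbook result, not from this thread) is

$$\sigma_q = \frac{q}{\sqrt{12}}$$

where $q$ is the ADU step expressed in electrons. With the gain set so that 1 ADU ≈ 1e, that is ~0.29e - negligible next to 1.5e read noise - but dropping the two lowest bits makes $q$ four times larger, pushing $\sigma_q$ to ~1.2e, which is no longer negligible.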

Btw, I have an 8" RC and an ASI185 (color), an ASI178 (color/cooled) and an ASI1600 (mono/cooled). I also have a QHY5IILc - the same sensor as the ASI120, but in color.

I think a color sensor can be used as well, without any special treatment (no debayering or anything like that) - it will just have less sensitivity. The ASI185 could be a good choice due to its pixel size and the fact that it is also sensitive in NIR.

 


I doubt any camera operated by a PC will keep in sync accurately enough; better to use a rotating shutter, ideally one controlled by a PLL arrangement to lock on to the pulsar frequency.

OR

Just grab a huge number of frames, align them, use software to locate the pulsar, and write a routine to score each frame according to the pulsar's brightness (if the frames are aligned and cropped in a similar way to PIPP, this should be feasible by summing the pixels within a defined area of interest around the pulsar). Then, depending on whether the previous frame was brighter or darker, and knowing the approximate frame rate, it should be possible to classify frames according to where they are in the pulsar cycle (you will need some trickery to deal with the double pulse).

Group the frames into ~1/30-second 'buckets', stack each bucket to give 30 frames, then generate an AVI or animated GIF.


26 minutes ago, Stub Mandrel said:

I doubt any camera operated by a PC will keep in sync accurately enough; better to use a rotating shutter, ideally one controlled by a PLL arrangement to lock on to the pulsar frequency.

OR

Just grab a huge number of frames, align them, use software to locate the pulsar, and write a routine to score each frame according to the pulsar's brightness (if the frames are aligned and cropped in a similar way to PIPP, this should be feasible by summing the pixels within a defined area of interest around the pulsar). Then, depending on whether the previous frame was brighter or darker, and knowing the approximate frame rate, it should be possible to classify frames according to where they are in the pulsar cycle (you will need some trickery to deal with the double pulse).

Group the frames into ~1/30-second 'buckets', stack each bucket to give 30 frames, then generate an AVI or animated GIF.

There is no need to keep sync in terms of an exact fps. We just need a good timestamp - it can be the start of the frame, for example. I'm pretty sure that can be achieved with <100us accuracy, which is good enough to align a 3.3ms frame. Also, the frame itself does not need to fall exactly on a stack position - it can fall anywhere between two stacks and we just use linear interpolation, weighting by proximity: for example, if the frame center (frame start + 1.65ms) falls one third of the way between two stacks, we add 2/3 of the frame values to the first stack and 1/3 to the second, and add 2/3 and 1/3 to the respective stack divisors. We simply sum each stack and keep the divisor to divide by at the end (not strictly necessary if we calibrate each frame, but if we calibrate the complete stack then it should be stacked as an average rather than a sum). A sketch is below.
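A minimal sketch of that interpolated stacking (hypothetical names, same conventions as the earlier fold sketch):

```python
import numpy as np

def fold_interpolated(frames, timestamps, t_exp, period, n_stacks):
    """Split each frame between the two nearest phase bins, weighted by
    how close its mid-exposure time falls to each bin."""
    stacks = np.zeros((n_stacks,) + frames[0].shape)
    divisor = np.zeros(n_stacks)
    for frame, t in zip(frames, timestamps):
        pos = ((t + t_exp / 2) % period) / period * n_stacks
        lo = int(pos) % n_stacks
        hi = (lo + 1) % n_stacks
        w = pos - int(pos)                # 1/3 of the way -> 2/3 to lo, 1/3 to hi
        stacks[lo] += (1 - w) * frame
        stacks[hi] += w * frame
        divisor[lo] += 1 - w
        divisor[hi] += w
    return stacks / divisor[:, None, None]
```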

Also, aligning the frames will not be possible, since a single frame will not contain enough signal to locate the pulsar at all. We just have to grab frames and stack them without alignment, and hope the tracking error is small enough (<1" or so). I think this is achievable with an OAG; we just need to check the possible field rotation over 6h.

This is currently my main concern - keeping something that is not visible on the frame centered on the frame for 6 hours :D. I'm also not sure how to approach centering in the first place: goto is simply not precise enough to put the pulsar near the center of a field on the order of 1' x 1' - that would take something like <20" pointing precision.

I guess we need precise coordinates of the object plus plate solving, and some way of saying "I want this spot on the large image to be right in the center" - I'm not sure there is software that will do that. I just started using SGP; there might be an option for centering on a selected spot or something like that - I need to check.

Also, the OAG is in the post, so I'm expecting it to arrive over the next week - hopefully I'll get a few clear nights to do PEC and assess the performance of the OAG (I'm hoping for the best).

This is the sort of thing I'm trying to achieve:

[Screenshot_2.png: a single frame (left) next to the unaligned average stack (right)]

On the left is a single frame out of 1000; on the right is the average stack without any centering of the frames. These frames were taken so I could try to recover the PSF for deconvolving a Jupiter image I was capturing the same night. My idea was to use lucky imaging (rejecting bad-seeing frames) plus alignment on the centroid of the top X percent of pixel values - something similar to what MetaGuide does when it shows a star profile - but unfortunately I did not set the exposure time right and ended up with saturated pixels on the good frames.

But it shows that if tracking is ok - in this case 1 minute unguided, and I'm over the moon with that, since it means I've managed to control the 13.6s gear-mesh oscillation of +/-1" on my HEQ5 - just stacking without aligning the frames produces an image similar to a long exposure. Seeing and tracking errors will still be present, which is why we have to assume at least a 2" seeing disk even under good conditions.

Btw, the image scale here is around 0.14"/pixel, since the aim was planetary imaging and good resolution on the Airy disk / optics PSF. We will use a much coarser scale for pulsar imaging, so fewer pixels receive light, to minimize the impact of read noise.

 

So the next step, as I see it, is to make sure the technical / mechanical side is up to the task. We should probably run tests like this: find a relatively faint star (on the order of 12-14 mag; we can use the spreadsheet as a guide, given the total recording time and achievable frame rate), start with, say, half an hour, then increase the imaging time to a couple of hours (including a meridian flip), and just shoot the movie - testing both the 16-bit and 8-bit variants - and do a simple average stack to see if we get a star profile like the one in the image above. If we can do that, the next step is to write the stacking software - which in this case (an unaligned stack with a known period) will be a relatively easy thing to do.


Btw, which one are we imaging?

[Screenshot_3.png: crop of the Hubble M1 image showing two candidate stars]

The red one or the blue one?

I just took a look at that Hubble M1 image and spotted two triangles of stars looking just like the animation posted earlier. I'm guessing the red one, but I'm not sure...


Hmmm, I'm not sure either. I was thinking of just punching in the RA & Dec and getting my system to center it using plate solving.

Guiding - yes, I still need some work in that area. I'm using an OAG, but sometimes my error is like 5", which is very bad. I need to figure this out; the CPC1100 fork mount is not ideal...

Regarding the timestamping - yes, I think it should be possible to timestamp the frames with enough precision to slot them into the phase bins and stack them.


  • 2 months later...

Hi,

I'm not sure whether to contribute here or to start a new topic and link this one there. Anyway...

I'm resurrecting this topic and calling on all interested parties to participate.

My friend Lovro and I have started an attempt to image the M1 pulsar (or is it? More on this later) here in Croatia. Both of us have very limited funds for our projects, so we are starting a collaboration between our astronomy club Beskraj and Team Stellar (who will provide an AZEQ6 mount), and we hope to get help from you here with processing the data.

I have an OrionUK 10" f/4.8 Newt and an ASI178 color, which I got by selling my 16" homemade primary; Lovro has an ASI120 and experience in astrophotography. We are still missing an OAG. We could also ask an acquaintance to lend us his massive Astro-Physics mount if the AZEQ6 proves too shaky. I hope to get decent data from the 10" - I have regular access to fairly dark skies, so that should help. Capturing the data shouldn't be an issue, but timing will be: I get around 331 fps out of a possible 333 fps at 3ms exposures when testing here at home...

Hope to hear from you soon,

Marko


Sounds like a great project, Marko - keep us informed of progress. You may have to do some very clever stacking to get this to work; accurate frame timestamps will be critical.

33 minutes ago, m a r k o said:

M1 pulsar (or is it? More on this later) here in Croatia.

I had no idea there were any pulsars in Europe! :icon_biggrin:

