
Wide-field upgrade path or narrow-field target selection?



Hi all,

So I've now "finished" my 200PDS rig - it's the best-corrected I can make it with a Paracorr, collimated nicely (though I do want to get a Catseye autocollimator to "check"), and the imaging train works well with good guiding. The 183MM-PRO is a nice camera that's worked well for me for a long while now. I'm only interested in AP for this rig - I'm treating visual as a separate set of requirements involving a large Dob at some point.

Trouble is, especially with the mild Barlow effect of the Paracorr, this is now a pretty narrow-field scope. The field of view is about 40x26 arcminutes, so targets really have to be quite small to frame nicely.

I've dabbled with mosaics for larger targets, which naturally produces really nice high-resolution images but is a lot of work - and with the cloud we've been having, that gets frustrating quickly. See the WIP image below:

[image: M81 mosaic, work in progress]

But other targets, even "small" ones like the Iris Nebula, can be hard to frame without losing a lot of context (in the case of the Iris, dust clouds):

[image: NGC 7023 (Iris Nebula), February 2021]

So now I'm thinking - should I modify the rig (somehow) to get a larger field of view, or do I just need to start picking a different class of targets than my usual galaxies? Smaller objects like M82 rapidly fall into the "seeing-limited" category for detail, and I'm already imaging at 0.42"/px, which is definitely oversampled.

I could swap the 200P for something faster with a shorter focal length, or I could go for a larger sensor - though I'd probably have to upgrade or replace a lot of my optical train north of the Paracorr, and that gets expensive quickly (plus full-frame is expensive anyway). Lots of options.

Interested in what people would suggest!


I would suggest that you keep the current setup, start splitting your subs, and carry on with mosaics.

The fact that you are sampling at 0.42"/px simply means that you are wasting time, without benefit, on a highly oversampled image.

[image: 1:1 crop of the work-in-progress mosaic]

Above is a 1:1 crop of your work in progress. Below is the same thing at 30% of the original size:

[image: the same crop at 30% scale]

This is close to the proper resolution for the level of detail you are achieving.

What I'm suggesting is to "split" each of your subs into 9 new subs - each containing every third pixel in X and Y. This means you'll get 9 properly sampled subs from each of your current subs.

This in turn means that you can spend 1/9 of the time on each panel - and do a 3x3 mosaic in the same time it takes to do a single panel.

This technique can be used to get the equivalent of a shorter focal length, faster F-ratio scope if you know how to do mosaics. You clearly know how to do those, so there is not much point in getting another setup.
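If you want to experiment with this right away, here is a minimal Python/numpy sketch of that split, assuming your calibrated subs are FITS files (the filenames are just placeholders):

```python
import numpy as np
from astropy.io import fits  # assumption: calibrated subs are stored as FITS

def split_sub(data, factor=3):
    """Split one frame into factor*factor smaller frames, each made of
    every factor-th pixel in X and Y with a different starting offset."""
    h, w = data.shape
    data = data[:h - h % factor, :w - w % factor]  # trim so dimensions divide evenly
    return [data[dy::factor, dx::factor]
            for dy in range(factor) for dx in range(factor)]

# hypothetical filenames - run after calibration, before registration/stacking
sub = fits.getdata("calibrated_sub.fits").astype(np.float32)
for i, piece in enumerate(split_sub(sub)):
    fits.writeto(f"calibrated_sub_split{i}.fits", piece, overwrite=True)
```

Each of the 9 outputs can then be registered and stacked like any ordinary sub.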

 

 


54 minutes ago, vlaiv said:

This is close to the proper resolution for the level of detail you are achieving. What I'm suggesting is to "split" each of your subs into 9 new subs - each containing every third pixel in X and Y. [...]

That makes some sense practically (though I'm still trying to get my head around the maths - basically the same logic as binning, right?), and I agree I'm needlessly oversampled. I'm not sure how I'd actually generate those subsampled subframes from my current data, though. I'm predominantly using PixInsight for my processing at the moment, but I could probably hack something together with astropy.


41 minutes ago, discardedastro said:

That makes some sense practically (though I'm still trying to get my head around the maths - basically the same logic as binning, right?), and I agree I'm needlessly oversampled. [...]

Exactly like binning, with the added bonus of avoiding some of the pixel blur associated with larger pixels (which is very small anyway, and perhaps only a concern in planetary imaging).

You don't have to do the splitting - you can just do software binning 3x3.

If you want, I can share an ImageJ plugin I wrote that will split the image for you and make the new subs. You would use it after calibration, before you align and stack.


2 minutes ago, vlaiv said:

Exactly like binning, with the added bonus of avoiding some of the pixel blur associated with larger pixels (which is very small anyway, and perhaps only a concern in planetary imaging). [...]

Right - but binning the same image 3x3, creating copies, and then stacking those doesn't actually put the different offset pixel data into each of the created subframes, right?

The ImageJ plugin would certainly be good to see, if for no other reason than as a reference.

18 minutes ago, michael.h.f.wilkinson said:

The ASI183MM has a nice FOV and resolution for a small wide-field rig, like my APM 80 mm F/6 with 0.8x reducer

There is definitely scope for reuse if I ever decide to swap the 200P setup for something a bit more considered - it's a good camera, it just needs to be paired with something better suited to it, and an 80mm frac would certainly fit the bill. That's a lovely shot.


2 minutes ago, discardedastro said:

There is definitely scope for reuse if I ever decide to swap the 200P setup for something a bit more considered. [...]

Or just use a telephoto lens. I will be testing my ASI183MM-Pro with a 200 mm lens tonight:

[image: ASI183MM-Pro attached to a 200 mm telephoto lens]


Just now, discardedastro said:

Right - but binning the same image 3x3, creating copies, and then stacking those doesn't actually put the different offset pixel data into each of the created subframes, right?

There is no point in making copies of the binned image. Binning does both the SNR improvement and the image reduction - you either do that, or you split the images and then stack them. Either way, the SNR improvement is the same.

Bin x3 results in an x3 SNR improvement, and so does stacking x9 more subs - the SNR improvement is equal to the square root of the number of stacked subs: stack x9 more, and SNR improves by sqrt(9) = x3.

The only difference is that you end up aligning those 9 subs when stacking them, whereas you don't align them when binning. If you are oversampled, there is minimal difference between the two, and it also depends on the resampling used for aligning.
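The sqrt(N) figure is easy to verify numerically if you want - a quick sketch with synthetic noise (size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
subs = rng.normal(0.0, 1.0, size=(9, 512, 512))  # 9 subs of pure sigma-1 noise

print(subs[0].std())            # ~1.0  : noise of a single sub
print(subs.mean(axis=0).std())  # ~0.33 : stack of 9 -> noise / sqrt(9)
```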

6 minutes ago, discardedastro said:

The ImageJ plugin would certainly be good to see, if for no other reason than as a reference.

Sift_Bin_V2.class

(place this file in the plugins folder of your ImageJ installation).

The plugin works on 32-bit images, and in your case you would use the following parameters:

[image: Sift_Bin_V2 parameter dialog]

(The binning option will try to do some Lanczos alignment on slices, but I think it is a bit buggy - so use it just for splitting slices. "Subslices in separate stacks" can be used for separating a Bayer matrix or similar, but in this case leave it unchecked.)


@vlaiv is there any difference between software binning and just choosing to reduce the scale of one's stack?

Case in point - when I get my RC6 up and running with my 268M at some point, the image scale will be about 0.58"/px, so obviously oversampled. My plan is to capture as normal (bin x1), but then in APP, when the stacking process gets to the final stage (integration), choose a Scale Factor of 0.5 (it uses Lanczos-3 by default) to do essentially the same thing as bin x2 and bring the image scale up to 1.16"/px.

Do you think this method would actually be better than capturing binned x2 data at source? Apart from smaller file sizes, there shouldn't be much, if any, difference, right? Although what about full-well capacity (FWC)? Wouldn't that be much higher if binning at source?


31 minutes ago, Xiga said:

@vlaiv is there any difference between software binning and just choosing to reduce the scale of one's stack? [...]

Yes, there is a difference - and in fact Lanczos-3 is one of the worst options for this (if you want SNR improvement).

You don't need to capture binned data - capture at native resolution and then bin your data after calibration.

Binning and scaling down both reduce the sampling rate, and that is fine, but binning does a better job of improving SNR: it gives a predictable, and higher, SNR improvement than any other scaling method. In fact, binning is one of the ways you can scale down an image (by an integer fraction), and you can "extend" binning to rational scale factors - at which point it becomes the most basic interpolation algorithm, linear interpolation. If you think about it, the value halfway between two values is just the average of those two values. Mathematically, there is no difference between bin x2 and linear resample + 0.5 pixel shift + x0.5 scale down (bin x3 or higher is not so simple).

Other resampling methods introduce pixel-to-pixel correlation and actually try to preserve the fine-grained noise. For this reason they don't improve SNR as much - in fact, the best resampling method would not change SNR at all.

Here is an example:

[image: stddev measurements for the original, binned x2 and Quintic B-Spline resampled images]

I generated a simple 128x128 image of pure Gaussian noise with sigma 1. Then I binned that image x2, and separately resampled it to 0.5 size with Quintic B-Spline (better than Lanczos-3, though probably not Lanczos-4).

The original image shows that the stddev is indeed 1. The binned image shows exactly what we would expect - an improvement of x2 because it was binned x2 (noise reduced to half) - but the scaled-down image has its noise reduced to only 83% instead of 50%. You can see that the binned image is smoother.
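If anyone wants to reproduce this, here is a rough Python equivalent of the test - note that scipy's spline zoom is only a stand-in for the Quintic B-Spline resample above, so the exact percentage will differ a little:

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, size=(128, 128))  # pure Gaussian noise, sigma 1

# bin x2: average non-overlapping 2x2 blocks
binned = img.reshape(64, 2, 64, 2).mean(axis=(1, 3))

# resample to 0.5 size with a cubic spline (stand-in for Quintic B-Spline)
resampled = zoom(img, 0.5, order=3)

print(img.std())        # ~1.00
print(binned.std())     # ~0.50 - noise exactly halved by bin x2
print(resampled.std())  # between 0.5 and 1.0 - less SNR gain than binning
```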

 


Thanks Vlaiv. 

What do you make of the 2nd post from the thread below:

https://www.astropixelprocessor.com/community/main-forum/binning/

Mabula (the creator of APP) seems to suggest that it's possible to bin x2 in APP by using the nearest neighbour debayering algorithm and a scale factor of 0.5, but that he actually recommends using Cubic B-Spline instead. 

I take it in your example above, the binned image is using nearest neighbour? What would the result look like if it were to use Cubic B-Spline?

I'd like to do some testing in APP, trying out various approaches, to see what produces the best results for oversampled images that one wants to bin after the fact. I would need some oversampled data though. @discardedastro if you would be willing to share some of your calibrated Lum subs with me for this, drop me a PM. It would be greatly appreciated. 


9 minutes ago, Xiga said:

Mabula (the creator of APP) seems to suggest that it's possible to bin x2 in APP by using the nearest neighbour debayering algorithm and a scale factor of 0.5, but that he actually recommends using Cubic B-Spline instead. 

I take it in your example above, the binned image is using nearest neighbour? What would the result look like if it were to use Cubic B-Spline?

Not sure if that is correct. Binning simply averages (or sums) 2x2 (or 3x3, 4x4) groups of adjacent pixels.

Nearest neighbor copies the pixel value from the nearest sample point - it does not interpolate at all. It is the exact opposite of binning: the resampling method with the lowest SNR improvement - none.

Here are the results with a few more interpolation methods added:

[image: stddev measurements for all the interpolation methods]

We now have, as before, the original, binned x2 and Quintic B-Spline in the first row; nearest neighbor (stddev unchanged, ~1.0) and bilinear interpolation (same result as binning) in the second row; and Cubic B-Spline and Cubic O-MOMS in the third row (stddev of ~0.75 and ~0.8).

As you can see, binning in this particular example is the same as bilinear interpolation. This only holds for the 2x2 bin / x0.5 reduction case; things get more complicated in the general case, although the math is based on a similar premise.

Nearest neighbor leaves things unchanged - it does not improve SNR at all, because it does not manipulate pixel values - it just copies the closest value over. In general interpolation this leads to terrible results - aliasing effects and distortion.

In general, the more sophisticated the interpolation method, the less pixel-to-pixel correlation there is, and the less blur and fewer artifacts it introduces into the image - but noise is also kept at the same level.

When choosing a method for aligning frames, choose the best interpolation algorithm - the one that preserves noise best. This might sound like a counter-intuitive proposition, but it keeps the noise statistics intact and lets stacking do its job. Using bilinear or bicubic interpolation when aligning frames leads to coarse-grained instead of fine-grained noise in the image - it looks artificial.

Binning is different: each resulting pixel depends only on one group of original pixels, and that group contributes to no other destination pixel, so there is no pixel-to-pixel correlation and none of that blurring - and yet SNR is improved by a precisely known factor.

There is a different way (we could argue a better way) of binning - splitting the image into "sub fields" and then stacking those. That is the method I recommended above.

It just splits the pixels of the original image into groups and forms smaller images out of those - no pixel values are changed - and then you stack them. If you "bin" 2x2 this way, you get 4 new subs from each original sub (each with x2 smaller dimensions, or x4 smaller total pixel count). Since stacking N subs improves SNR by a factor of sqrt(N), we now have 4 subs and improve SNR by a factor of x2 - exactly the same as binning - but without doing any math on the pixels; we did not change them in any way, just split them into different groups.
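For the 2x2 case, the equivalence of "split then stack" and plain binning can be checked in a few lines (pure noise here, so no alignment step is needed):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, size=(128, 128))

# four "sub fields": every second pixel, one for each 2x2 offset
fields = [img[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]

stacked = np.mean(fields, axis=0)                     # average-stack the 4 fields
binned = img.reshape(64, 2, 64, 2).mean(axis=(1, 3))  # ordinary 2x2 binning

print(np.allclose(stacked, binned))  # True - identical pixel values
```

On real subs, the 4 fields would each go through registration before stacking - which, as noted above, is the only place the two methods differ.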

 


17 minutes ago, vlaiv said:

Not sure if that is correct. Binning simply averages (or sums) 2x2 (or 3x3, 4x4) groups of adjacent pixels. [...]

 

Thanks for this Vlaiv, I think this clarifies things for me now. Your approach of creating sub-groups of smaller files does sound good, but APP is my tool of choice for calibrating and stacking, so I just need to find the best workaround using it - and by the sounds of things, choosing Bilinear and a Scale Factor of 0.5 at the Integration stage is the way to go. I'd still like to run some tests on real-world data though, just to see the improvement with my own eyes.

@discardedastro sorry for derailing your thread! 🙏


16 minutes ago, Xiga said:

Thanks for this Vlaiv, I think this clarifies things for me now. [...]

Why don't you try the following:

- Calibrate your data in APP without aligning/stacking it (if that is an option)

- Use ImageJ to bin it to the wanted size (2x2 or 3x3)

- Stack in APP without calibration - just align and stack

Would that be possible?


Yes I believe that's all possible. I've no idea how to use ImageJ though. Is there a quick way to just point it to a folder and have it bin all the files in it?

It would still be my preference to keep everything within APP, but it would be interesting to see if there's a noticeable difference between APP's version of binning (Bilinear and 0.5 scale factor) and feeding APP actual binned subs.


7 minutes ago, Xiga said:

Is there a quick way to just point it to a folder and have it bin all the files in it?

Don't think so - but you can open multiple files at once (Open Image Sequence; no need to select them individually, as you can specify which files to open by "name contains" text, plus limit and skip), bin all of them with a single click, and then save them again as an image sequence.

I usually open them in batches, as subs tend to be large, and only open as many subs in a single go as will fit in my RAM (to avoid swapping and such). I specify part of the capture name, use the limit, and skip the number of files I have already binned.
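If clicking through ImageJ gets tedious, the same batch job is only a few lines of Python - a hypothetical sketch assuming FITS subs sitting in a calibrated/ folder:

```python
import glob
import numpy as np
from astropy.io import fits  # assumption: subs are stored as FITS

def bin2x2(data):
    """Average non-overlapping 2x2 pixel blocks."""
    h, w = data.shape[0] // 2 * 2, data.shape[1] // 2 * 2  # trim odd edges
    return data[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

for path in glob.glob("calibrated/*.fits"):  # hypothetical folder layout
    data = fits.getdata(path).astype(np.float32)
    fits.writeto(path.replace(".fits", "_bin2.fits"), bin2x2(data), overwrite=True)
```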


11 hours ago, vlaiv said:

Don't think so - but you can open multiple files at once (Open Image Sequence; no need to select them individually, as you can specify which files to open by "name contains" text, plus limit and skip), bin all of them with a single click, and then save them again as an image sequence. [...]

Vlaiv, I went ahead and did a few test stacks using the only mono data I have, which was shot at 2.13"/px - so more under-sampled than over-sampled, but hopefully still of some use for comparison's sake.

The data set was just 6 subs (20 mins each) in SII on the Pelican. I thought about using Ha, but decided on SII instead, as it was much fainter and so would accentuate the noise levels more.

Firstly, I used ImageJ to bin each calibrated sub x2, and then stacked them in APP on default settings (which uses Lanczos-3 at the integration stage). Note, I would hope never to actually go down this route, as it would mean throwing away useful information before the registration phase of stacking, which is definitely not something I'd want to do. Plus, there's the actual conversion of all the files!

Next, I simply loaded all the calibrated subs into APP and let it stack them with exactly the same settings as above, except I chose Bilinear and a scale factor of 0.5 at the final integration stage. I'm no expert at analysing files, but as far as I can tell, the APP stack had less noise but also noticeably less detail (even in a stack now effectively at 4.26"/px). Basically it looked a lot blurrier, and not as good as the ImageJ stack - so to my eye, Bilinear with a scale factor of 0.5 in APP does not appear to be the same as binning x2. I do note that the level of stretch is not the same in the jpgs below - the Bilinear one is stretched more (these are just DDP stretches straight from APP) - but when I compared them visually with much closer stretches, the Bilinear one still looked blurrier to me.

Finally, I changed the interpolation back to Lanczos-3 and let APP do another stack at a scale of 0.5. To me, this looks very close to the ImageJ stack. I'd be interested to hear your analysis, Vlaiv, but from what I'm seeing, I think I'd be happy to go with this method when I shoot over-sampled data.

ImageJ Binned Stack:

[image: ImageJ binned stack, DDP stretch]

ImageJ_Binned_Stack-fy-0degCW-1.0x-LZ3-NS.fits

 

APP Bilinear Scale 0.5: 

[image: APP Bilinear scale 0.5 stack, DDP stretch]

APP_BilinearAtIntegrationStage.fits

 

APP Lanczos-3 Scale 0.5:

[image: APP Lanczos-3 scale 0.5 stack, DDP stretch]

APP_Lanczos3AtIntegrationStage.fits


7 hours ago, Xiga said:

Note, I would hope never to actually go down this route, as it would mean throwing away useful information before the registration phase of stacking, which is definitely not something I'd want to do.

What do you mean by throwing away information? You don't throw away any information with this process.

7 hours ago, Xiga said:

Finally, I changed the interpolation back to Lanczos-3 and let APP do another stack at a scale of 0.5. To me, this looks very close to the ImageJ stack. I'd be interested to hear your analysis, Vlaiv, but from what I'm seeing, I think I'd be happy to go with this method when I shoot over-sampled data.

I blinked the two and indeed, the difference is minimal. I think the binned version is very slightly better in terms of SNR - but honestly, without a direct comparison I don't think anyone could tell.

I did a comparison once - 10-20% more noise is very hard to perceive visually. It is there, but really on the threshold of detection, and it depends on the stretch applied.

You can do it by resampling down, as long as you are happy with the SNR gains you are getting.

I've found the best way to compare things is the "split screen" approach.

Register both stacks to the same reference frame, and after stacking, prior to any other processing, copy/paste half of one stack over the other. Then process that combined image. Any difference between the two should be obvious after processing.


45 minutes ago, vlaiv said:

What do you mean by throwing away information? You don't throw away any information with this process. [...]

Regarding throwing away information, what I meant to say was: wouldn't binning x2 early in the workflow (i.e. right after calibration) have a detrimental effect on star analysis and registration? Surely it would be better to keep the full resolution for this part of the workflow, and reduce the resolution afterwards?

So it looks like for any APP users out there with oversampled CMOS data, it's a simple operation to effectively bin the data by just changing the scale factor at the integration stage, which is good news 😀


1 minute ago, Xiga said:

Regarding throwing away information, what I meant to say was: wouldn't binning x2 early in the workflow (i.e. right after calibration) have a detrimental effect on star analysis and registration? [...]

Not really. Since you are oversampled, you have more information than you need in that sense. If binning gets you back close to the proper sampling rate, you won't lose any precision.

Star centroid algorithms work very well. Even with a single star you can get precision down to 1/16th of a pixel - as in guiding. In an average astro image, the software will align on a hundred or so stars, which improves precision by roughly another factor of 10. Binning "messes up" by a factor of 2 (even when properly sampled), if at all.

In any case, we are talking about fractions of a pixel that one simply can't perceive in the final image (we would be hard pressed to notice a shift of half a pixel, let alone 1/10th of it).

Is scaling down as good as binning? Let's do some more tests. What if I need to scale down by 0.75 to get to the proper sampling rate, or by 0.33 (the equivalent of bin x3)?

Here is a little experiment:

[image: stddev measurements after scaling to 0.9, 0.8, 0.7, 0.6 and 0.5 of original size]

Again, I've created a test image with noise of 1.0 and then scaled it to 0.9, 0.8, 0.7, 0.6 and 0.5 of its original size. Scaling down does not produce a nice, continuous improvement in SNR - in fact, scaling by 0.6 is worse than scaling by 0.9.

Now let's try 0.33 and see if it works like bin x3:

[image: stddev measurement after scaling to 0.33 of original size]

Oh no - it's worse than bin x2 in terms of SNR improvement.

While the scale 0.5 method produces OK results, we should not assume it is the proper solution that will work in general. By the way, there is a way to do fractional binning as well - but that is a complex topic.
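For anyone who wants to repeat these scaling tests, a short sketch along the same lines (scipy's spline zoom rather than the exact resampler I used, so the numbers will differ somewhat, but the non-monotonic behaviour should show up):

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, size=(300, 300))  # sigma-1 Gaussian noise

for f in (0.9, 0.8, 0.7, 0.6, 0.5, 0.33):
    print(f, round(zoom(img, f, order=3).std(), 3))  # noise left after scaling

# reference: bin x3 should leave stddev ~0.33
binned3 = img[:297, :297].reshape(99, 3, 99, 3).mean(axis=(1, 3))
print("bin x3", round(binned3.std(), 3))
```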
