
Posts posted by symmetal

  1. The centre dot does look to be the best candidate to go for. The AR window and filter likely show reflections from both surfaces, which blurs the result a little, causing the flares.

    The pixel array does seem to present unexpected results. Some cameras show a pixel dot close by the bright dots and some don't. You would think that moving the laser slightly would make the pixel reflections move by the same amount, and that at some point a pixel would sit directly under the bright dots, but that doesn't appear to happen in reality. 🤔

    Alan

    • Like 1
  2. The pixel array reflections, which are dimmer, don't tell much about tilt as they have curved microlenses in front of the pixels. If the tilt currently shown in your images is small, I would start with the dot that doesn't move much.

    On some cameras the cover slip reflection is much brighter than the other dots so is easy to spot, but on other cameras like you've found, the dots have a similar brightness.

    Alan

    • Like 1
  3. Glad to see you've got it working. It's difficult to say which is the right dot. The other one is very likely from the AR window as you say. If you deliberately tilt the camera assembly slightly, both dots will move, and the dot reflected off the cover slip should move further as its total light path is longer. If you still can't tell, you'll have to choose one dot, centre that, and take a test image. If your image looks good with no tilt effects then you chose the right dot.

    It depends on how well the camera is put together as to whether both dots are stationary when the tilt is corrected. If the AR glass is accurately parallel to the sensor then yes, they will both be stationary. I've found in some cameras that's the case, while in others it isn't. If they are both stationary then you're good to go but if not you'll have to choose one and do a test image.

    When doing the tilt adjustment, to check how much the dot moves, mark a cross on the piece of paper, place it under the dot you've chosen, and see how far the dot moves from the cross while you rotate the camera. If you can make a tilt adjustment while still on the rig, adjust it to bring the dot half way back to the cross, recentre the cross and repeat the process. If you have to remove the assembly to make an adjustment, stick a label by each tilt adjuster so you know which one you're adjusting each time. You'll soon get the hang of which to adjust by visually assessing the position of the dot relative to the tilt adjustment positions.
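    The halve-and-recheck procedure converges geometrically. A minimal sketch with made-up numbers (the halving-per-adjustment assumption is idealised, and the function name is mine):

```python
def iterations_to_converge(offset_mm, tolerance_mm):
    """Count adjustments until the dot sits within tolerance of the cross,
    assuming each adjustment brings the dot half way back (idealised)."""
    steps = 0
    while offset_mm > tolerance_mm:
        offset_mm /= 2.0
        steps += 1
    return steps

# A 20 mm dot movement shrinks below 1 mm after 5 halving adjustments
print(iterations_to_converge(20.0, 1.0))  # -> 5
```

    So even a coarse starting error only needs a handful of passes.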

    Yes, the thickness of a piece of paper can be the amount of adjustment needed to fix the tilt.

    Alan

    • Like 1
  4. Hi @wimvb, That's a pity Wim. There are some Hubble Pictures of NGC147, but they only show detail around the core which doesn't help show any debris that is indicated on the Local Group poster. I wonder where they got the information that there is debris as indicated.

    This other NASA image of NGC147 and NGC185, shown below, has far too much jpeg compression and noise reduction to show anything interesting. It seems the Cassiopeia II dwarf is also part of this stable binary galaxy system though. Not sure if that's the one you imaged. 🙂

    [attached image: NGC147 and NGC185 with their satellite galaxies]

    Alan

     

  5. 2 hours ago, Stuart1971 said:

    Thanks very much, just a couple of things that I can’t get my head around, I have the QHY268c with the IMX571 sensor, surely if I aim the laser certain parts of this, when I rotate the camera, the  beam could go off the edge of the sensor and then come back on..? so does that mean ideally it needs to be aimed near the centre…? 

    I am trying to visualise the sensor being tilted, say long edge to long edge, but if the beam was pointed bang in the middle, then would the dot not then move when rotated as the centre would be pretty much still….or am I missing something….??

    I have  taken the laser out of a Baader collimator I had, as I don’t use it, it’s a 1mw but it’s a class 2 red beam, I assume this will be fine as otherwise Baader would not have sold it for this purpose…?

    sorry to be dim…

     

    Yes, if aimed near the sensor edge the beam could go off the edge as the camera rotates, so it needs to stay inside a circle that fits within the sensor; aiming near the sensor centre avoids this. My point, in response to your initial question, was that aiming at the centre does still show the tilt error. 😉

    If the sensor is tilted as you say then, as the camera is rotated, the centre of the sensor stays in the same place vertically, but the angle the cover slip surface presents to the laser doesn't, which is all we're concerned with, so the reflected dot will rotate about a centre point. If you move the laser a little so it's pointing at a different part of the sensor, the change in cover slip angle presented to the laser as the camera rotates is the same, though the cover slip will move up and down a little, so the reflected dot will describe a slightly larger circle; this only becomes apparent at large tilt angles. As we're only correcting tilt errors of a small fraction of a degree, the change in reflected circle diameter isn't really noticeable. Once the tilt is eliminated, the cover slip presents a constant angle to the laser as the camera is rotated, no matter which point on the sensor the laser is pointed at, so the reflected dot doesn't move. Hope that explains it better. 🙂
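    The geometry can be sketched numerically: a surface tilted by θ deviates the reflected beam by 2θ, so the dot traces a circle whose radius grows with the light path. This is only an illustrative small-angle model; the 500 mm throw and the tilt value are made-up numbers, not measurements from any particular rig:

```python
import math

def dot_circle_radius_mm(throw_mm, tilt_deg):
    """Radius of the circle the reflected dot traces as the camera rotates.
    A surface tilted by theta deviates the reflected beam by 2*theta."""
    return throw_mm * math.tan(math.radians(2.0 * tilt_deg))

# Even a twentieth of a degree of tilt is visible over a 500 mm light path
print(round(dot_circle_radius_mm(500.0, 0.05), 2))  # -> 0.87
```

    This also shows why the cover slip dot, with its longer light path, moves further than the AR window dot for the same tilt.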

    Sorry, the 1mW laser I use is labelled Class 2 and not Class 1 as I stated earlier. Class 1 lasers are rated at below 0.39mW and Class 2 at below 1mW, so you will be fine with your Baader 1mW laser. Class 3R covers 1 to 5mW, and Class 3B actually allows power up to 500mW. eBay banned some laser pen sellers for selling '1mW Class 2' lasers which were actually Class 3 10mW lasers. The eBay lasers advertised as 1mW Class 1 are probably marked Class 2, but that's OK. Laser classes. It's all a bit confusing.
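    The class boundaries above can be summarised in a quick lookup. A sketch using the visible continuous-wave power figures quoted in the post (for a real pen, trust the label and spec sheet, not this arithmetic):

```python
def laser_class(power_mw):
    """Classify a visible CW laser by output power (mW), using the
    boundary figures quoted above."""
    if power_mw < 0.39:
        return "Class 1"
    if power_mw <= 1.0:
        return "Class 2"
    if power_mw <= 5.0:
        return "Class 3R"
    if power_mw <= 500.0:
        return "Class 3B"
    return "Class 4"

print(laser_class(1.0))   # the Baader 1mW pen -> Class 2
print(laser_class(10.0))  # the mislabelled 10mW eBay pens -> Class 3B
```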

    Alan

    • Thanks 1
  6. It doesn't make any difference where on the sensor the laser is pointed as long as you get a visible reflection off it. The sensor and cover slip glass are optically flat, one would hope, 😉 and the angle of reflection is the same from any part of the sensor.

    Laser pens like this are commonly used as they are cheap and plentiful on eBay. Just check it's a 1mW one and has Class 1 on the label. Red or green is fine, though red seems most commonly used. I made a 3d printed holder to keep the pen at a 10 degree angle; it can then be easily moved around for the clearest reflection if you have a filter wheel, tilt adjuster etc. also fitted to the jig, which reduces the sensor area visible to the laser.

    It's the flat cover slip reflection which you want to adjust on. It's generally the brightest reflection spot, though there may be two or three reflection spots close together. Rotating the camera and seeing which group of bright dots moves the least points you to where to start. Moving the laser pen a little should make the pixel reflections move, so they can be eliminated. If you can't tell which is the right spot, choose one, adjust the tilt so it doesn't move when the camera assembly is rotated, and try it on the scope for a test image. If it looks good you're OK. On my ASI6200 it took two tries, and I had to choose the second spot before the tilt was gone. 🙂

    Alan

    • Thanks 1
  7. Ah! Didn't notice that your Toupsky and FC had different debayer patterns selected. There isn't any actual rule that specifies how bayer pixel patterns should be represented; the TL, TR, BL, BR ordering seems to have been adopted by the majority, but not all.

    A vertical image flip is done by some capture programs, Astroart being one again, presumably to make fits files more 'standard', as FITS was primarily designed with coordinate 0,0 at the bottom left. Raster image files have evolved with pixel 0,0 at the top left, so vertically flipping the image puts 0,0 at the same place in the fits file. I assume Toupsky can store single fits images, and they just used the same data orientation in their video files. 🙂

    The data from the camera has no information as to what the bayer pattern is; the camera itself doesn't 'know' or 'care' if there is a bayer filter mask or not, it just reads out the brightness value of consecutive pixels. That's why it's helpful to specify the bayer matrix in FC via the SER file header, even though it's recorded raw, so that, as you say, subsequent programs don't have to guess or be told each time. If you specify the wrong one though, as you've found, it can cause havoc. 😊

    Yes, align RGB in FC does register the RGB channels on top of each other, if you don't have an ADC to do that optically.

    To adjust colours in Photoshop, select the menu option 'Window / Histogram' to show all three colour channels above each other so you can set the colour gains more easily. Setting the colour of the black end of the histogram is easy if the data isn't clipped to black initially: hover the cursor over the dark background and use levels or curves to make the background pixel value in each channel the same value just above zero.

    To avoid the image data being black clipped set the camera 'offset' in FC 'camera settings' high enough that the hump at the left of the histogram has its peak away from the left hand edge.

    Your final image, although nice colour, doesn't have much detail as you say. How did you focus? I use a bahtinov mask on a bright star beforehand. Also keep the camera exposure around 5 ms to hopefully freeze the 'seeing', and don't worry about using a high camera gain to fill about 70% of the histogram on the target. It may look very noisy in the preview but stacking the frames in AS3 will remove the noise. Get your frame rate as high as possible by using a short exposure as mentioned, recording in 8 bit, and selecting 'high speed' in the FC camera settings if available. Don't use gamma during capture; leave it at 50, which is the same as turning it off, or just turn it off, as applying gamma adjustment to each frame takes significant processor time which slows your fps. You should then hopefully be getting around 200 fps, which gives 24,000 frames in a 2 minute video, and you can select a 10% or so stack in AS3 to use only the best 2,400 frames in your final image. That's plenty to allow lots of sharpening in Registax wavelets before it gets too noisy. 😀
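    The frame-budget arithmetic in that last step is just this (a quick sketch of the numbers above):

```python
def stacked_frames(fps, video_s, stack_pct):
    """Number of frames kept when stacking the best stack_pct percent
    of a video_s second capture at fps frames per second."""
    total = fps * video_s
    return int(total * stack_pct / 100)

# 200 fps for a 2 minute video, keeping the best 10%
print(stacked_frames(200, 120, 10))  # -> 2400
```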

    That should help you get sharper results, Tom. Good luck.

    Alan

  8. Glad it's working now.  If you still need help with using it with Stellarium this video should tell you everything you need. 🙂

    You're using COM11 for EQMod, which is fine, but this implies you have lower numbered COM ports which have likely been grabbed by previous failed attempts to get EQMod to work and haven't been released properly by Windows.

    To free up those blocked COM ports run Device Manager and expand the 'Ports (COM & LPT)' section. From the menu at the top select 'View / Show Hidden Devices'. A number of greyed out entries will appear, each with a numbered COM port. Deleting these greyed out entries frees up those COM ports: right-click on each one, select 'Uninstall device' and confirm. You can also say yes to deleting the driver software if it asks. If you don't have the EQMod cable plugged in then the COM11 entry will also be greyed out, so don't delete that one. You should now have plenty of free lower numbered COM ports.

    If you want to use a lower numbered COM port like 3 or 4 for EQMod, ensure the EQMod cable is plugged in; you can then reassign its COM port by right-clicking on the entry as before and selecting 'Properties'. On the box that appears select the 'Port Settings' tab, then 'Advanced'. Another box opens where you can select a new COM port number from the drop-down list, choosing one without '(in use)' next to it. When done hit OK, and OK again to get back to the Device Manager screen, and the new COM port number should be assigned to your FTDI EQMod cable. Just remember, next time you have EQMod running, to select the new COM port in the EQMod settings rather than COM11. That should be it. 😀

    Windows itself can allocate COM ports up to COM256 but most programs don't work with COM ports above a certain number. As you've found out EQMod can only use up to 16, while some older programs only recognize COM 1 to 8 so using numbers up to 8 covers all eventualities. COM1 and COM2 are reserved for your PC's motherboard to use so COM 3 is the first one usually available for programs to use.
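    Picking a replacement port by hand follows the same rule of thumb. A tiny sketch (the in-use port numbers are examples only):

```python
def first_free_com(in_use):
    """Lowest free COM port in the COM3-COM8 range that even old
    programs understand; COM1/COM2 are left for the motherboard."""
    for n in range(3, 9):
        if n not in in_use:
            return n
    return None  # all of COM3-COM8 taken

# With COM3, COM4 and COM11 already assigned, COM5 is the first free choice
print(first_free_com({3, 4, 11}))  # -> 5
```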

    Alan

  9. Hi Tom,

    When you view the debayered preview image in FC at 100% size, I assume the vertical stripe pattern disappears and you just see a fine pattern of coloured dots. This is what a wrongly selected debayer pattern will likely look like.

    I don't know why the Toupsky undebayered image looks different to the FC undebayered image. Look at them both at 100% size, not the reduced size where the stripes appear. Both should show a fine dot pattern of different brightnesses corresponding to the coloured filters in front of the pixels. If the Toupsky one doesn't show this dot pattern then it's possible it's averaging each group of four pixels for the preview image.

    It's likely that RG is your correct debayer pattern. I found that in Astroart, when you choose a debayer pattern, the patterns are displayed rotated 90 degrees, so you end up choosing the wrong one. An easy way to test this once and for all is to look at a scene where you know what the colours are. ZWO supply a wide angle lens with many of their cameras so you can use them like a normal camera: point it out the window and choose the debayer pattern where the grass is green and the sky is blue. 🙂 Then you know it's right.

    If you don't have a lens to fit on the camera, then in daylight just place something with one strong colour, like a book cover or piece of coloured card, fairly close to the camera sensor and see the colour of the blurred image you get. Choose the debayer pattern where the colour is closest. Confirm you have the right selection using another book or card with a different colour.

    If you have the camera on the scope just place the coloured object fairly close in front of the scope to check the colour.

    Your last two Toupsky and FC images look to be the same debayer pattern. The FC one is just brighter as a different gain has possibly been used. 

    The image looking yellow is quite normal as usually a lot of blue gain needs to be used in FC or any capture program to get a more balanced colour image.

    Alan

  10. If the colour camera image isn't debayered it will always show the 'grid' pattern you've seen, as the R, G and B colour filters in front of the pixels let through different amounts of light, which appears as a repeating pattern of different brightness dots over the image. Depending on how much the image is zoomed to fit on screen, this appears as a grid of lines of varying spacing, but that is just a result of resampling the undebayered image to fit the screen, so don't worry about the lines. At full 100% zoom they won't be there, nor once the image is debayered later in processing and then resized on screen.

    As shown in Craig's picture your Bayer pattern is GBRG, which is different to the RGGB present on many cameras and which is the default chosen by FC. Switching debayer on/off in FireCapture only affects the preview image and not the recorded image, unless you click the down arrow next to the debayer icon and tick the 'Force record in colour, not recommended' box. You should in reality never record debayered, as the debayer algorithm in FC is rather crude and it consumes a lot of processor time debayering each frame. You also have the option to not debayer the preview during capture, which may help improve fps if your PC isn't powerful enough, though the preview is updated slowly during capture so debayering it doesn't take much processor time.

    Selecting the wrong debayer pattern may still give a visible 'grid' pattern as the different brightness dots are now assigned to the wrong colours on screen. Looking at a 'normal' subject, green pixels are usually the brightest, then red pixels, with the blue pixels being darkest.

    You need to select GB as your debayer pattern in FC. You only need to specify the first two pixels of the group of four as the other two are inferred by definition. Green pixels are diagonally opposite on cameras used nowadays, so selecting GB means the second pair of the four will be RG by default. FC will store the selected bayer pattern in the SER file header as Craig says, so if it's selected wrongly in FC it will be wrong in AS3, though it can be overridden in the AS3 menu.

    Calling the pixels Gb or Gr in the bayer pattern just means that Gb pixels are recorded on the same line as the blue pixels, while Gr pixels are on the same line as the red pixels. This may help debayer algorithms in selecting pixel patterns, but you can ignore the distinction when specifying the bayer pattern, as it's again inferred from what the neighbouring pixel is. Camera data is read out line by line, not in square groups of 4 pixels, so the data sent out by the camera will be a line of alternating G and B pixels followed by a line of alternating R and G pixels, and so on, repeated until the end of the frame.
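    The 'first two pixels imply the rest' rule can be written out directly. A small sketch (the function name is mine, not an FC or AS3 API):

```python
def full_bayer_pattern(first_two):
    """Expand the first two Bayer pixels (e.g. 'GB') to the full 2x2 cell.
    Greens sit diagonally opposite, so the second row is forced."""
    a, b = first_two[0], first_two[1]
    missing = ({"R", "G", "B"} - {a, b}).pop()
    # If the cell starts with green, the second row is <missing>G, else G<missing>
    return first_two + (missing + "G" if a == "G" else "G" + missing)

print(full_bayer_pattern("GB"))  # -> GBRG
print(full_bayer_pattern("RG"))  # -> RGGB
```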

    Hope that helps Tom. 🙂

  11. That's looking great Olly particularly as it's RGB and mainly Photoshop. 🤗

    My third RASA 8 arrived this afternoon after my second flarey one was returned last week. Celestron told FLO that they have personally checked the latest one and it doesn't have the flare issue so fingers crossed. Celestron were initially of the opinion that the flare issue was limited to US delivered scopes, having checked a sample of UK bound scopes and found them all good. After FLO told them that at least two were bad, (mine) they are being more thorough in testing.

    Don't know when I can test it out though as the future weather outlook isn't good. 😟

    Alan

    • Like 1
  12. Your ASI462 has 2.8μm pixels and being a colour camera by the calculation above your optimum focal ratio is f17.4

    vlaiv suggests pixel size x 5 for colour or mono cameras, which means your focal ratio should be f14

    This suggests a 1.5x barlow to be theoretically closer to what you need. With your 2.5x barlow you're oversampling so you should resize the final image to 60% size to give the best looking image with the detail available. Your bigger image will be softer but show no more actual detail than the 60% one. The drawback with oversampling is that your recorded image will be dimmer compared to using optimal sampling, so to compensate you either increase the exposure which reduces your chance of freezing the seeing as well as reducing the framerate, or you increase the camera gain and so need a longer video to help mitigate the increased noise.

    The ASI678 has 2.0μm pixels so your native f10 is exactly 5x pixel size so no barlow required. The image size will be similar to your 60% reduced one from above with similar potential detail. Your recorded image will be brighter though which avoids the oversampling pitfalls mentioned above.

    Jupiter is bright so you will likely get a reasonable exposure duration, and therefore framerate with your ASI462 and 2.5 barlow, so when 60% resized, the result may not look any different compared to the ASI678 without a barlow.

    Saturn and Mars are dimmer objects though so the drawbacks of oversampling will be more apparent and these will likely look better with the ASI678.
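    The sampling arithmetic above, as a quick sketch using vlaiv's x5 multiplier (the resize factor is just the optimal focal ratio over the actual one; function names are mine):

```python
def optimal_focal_ratio(pixel_um, factor=5.0):
    """Optimal focal ratio ~ factor x pixel size in microns."""
    return pixel_um * factor

def resize_percent(native_f, barlow, pixel_um):
    """Suggested final resize for an oversampled capture."""
    actual_f = native_f * barlow
    return round(100 * optimal_focal_ratio(pixel_um) / actual_f)

# ASI462 (2.8um pixels) on an f10 scope with a 2.5x barlow
print(optimal_focal_ratio(2.8))      # -> 14.0
print(resize_percent(10, 2.5, 2.8))  # -> 56, close to the 60% suggested above
```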

    3 hours ago, newbie alert said:

    So the op is using a F10 SCT, so in theory he should be using a pixel size no smaller than 3.333333 to image at its native FL and any larger he should be using a focal reducer?

    Doesn't seem to stack up between theory and practice to me

    iwols is using a colour camera so optimal focal ratio is x5 or x6 depending on whose reasoning you take, so 2μm pixels is pretty optimal at native FL. Any larger pixel size would then require a barlow to increase the focal ratio rather than a focal reducer. 🙂

    Alan

    • Like 1
  13. 1 hour ago, wimvb said:

    I’ve imaged a few dwarf galaxies before, among others the Ursa Major dwarf UMa1. Atm, I’m working on an image showing three dwarf galaxies belonging to ngc 7331.

    Yes, I remember you posting them here. 😀

    I've currently done the Ursa Minor Dwarf, Draco Dwarf, Regulus Dwarf and Leo II Dwarf. 🙂

    I found this great image on Wikipedia showing the Local Group with all the dwarf galaxies, how close the Milky Way and Andromeda are, and their future collision location. Your Cassiopeia Dwarf location is shown too. Would be nice as a wall poster but would use a lot of ink. :D

    The Canis Major one I mentioned before is listed here as unidentified. This image and the table on the Wiki page is as good as any to use. 😀

    [attached image: map of the Local Group]

    Alan

     

    • Like 1
  14. Thanks for posting Wim. Impressive. I've imaged a few dwarfs before as they don't get the attention compared to the usual objects, and an image you create can easily be one of the best amateur ones around. 😊

    I've been searching for any reference information on dwarf galaxies and there aren't many. Two I've found are

    The observed properties of dwarf galaxies in and around the Local Group, McConnachie, A.W. 2012, with an updated 2019 table a bit further down.

    Dwarf galaxies of the Local Group, Whiting A. B. 2007

    The Whiting paper has two tables, one of Local Group candidates and one of Local Group non-candidates, and most of the commonly named ones seem to be in the non-candidates list. I haven't checked what is meant by non-candidate though.

    There doesn't seem to be any 'official' convention in their naming so it's hard to integrate the lists. A nice neat table of various names, RA, Dec, apparent size and apparent brightness would be handy. Maybe I'll try and make one. 😛

    The closest dwarf galaxy to us is what's left of the Canis Major Dwarf, also called the Canis Major Overdensity, which is 12 degrees across 😮 and is now located within the Milky Way. It has been torn apart by tidal disruption such that it forms stellar streams which wrap around the Milky Way three times, called the Monoceros Ring. The core of the Dwarf is apparently hard to see as it's obscured by dust, and I couldn't find any wide field images of that area that show it. It's also disputed whether the Monoceros Ring is actually derived from the Canis Major Dwarf though.

    Alan

     

    • Like 2
  15. @Laurieast Thanks Laurence. It does give good images on the planets, though for the Moon I find that a refractor and Powermate gives better images. 🙂 I do have to check the collimation each time I wheel it out of the shed as it's likely to have gone out. It's possible the Bob's Knobs are a little looser in their threads compared to the originals, or the large temperature changes, particularly in the summer, make them shift slightly.

    Your image looks good. You just need to set the background colour though. 😉

    @Kon Thanks. In poor seeing I think Saturn gives a better image as you don't need such detail to make it look good. The rings do that by themselves. 😀

    Alan

  16. Had a go at Jupiter, Saturn and Neptune tonight. Seeing nowhere near as good as on the 19th, so Jupiter's not looking as good as the image I posted then, even though it's 2 mins worth rather than 30 secs.

    Both images 5 ms exposures, 120 sec videos at 200 fps, 10" LX200GPS, 2x Powermate, ZWO ADC, ASI224MC.

    Jupiter gain 350 and 15% stack, so 3500 frames, Saturn gain 490 and 20% stack, so 4700 frames, Processed in AS3, Registax wavelets and Photoshop.

    If anyone wants a go themselves, I've included the raw stacks. 🙂 

    [attached image and stack] Jupiter, 2022-09-25-0002, 5mS, 200 fps, 15pc of 23000 frames.tif

    [attached image and stack] Saturn, 2022-09-24-2204, 5mS, 200 fps, 20pc of 24000 frames.tif

    I need to increase the offset for Jupiter as Registax creates a wide black clipped ring around the edge so I had to clip all the background. Saturn doesn't for some reason despite similar processing.

    Europa is also featured. If I'd been a few minutes earlier I'd have had Io and Europa just off the right edge.

    Having trouble stacking Neptune as AS3 seems to be locking onto the noise; I used a very high gain with a 20 ms exposure and the image is too dim to separate from the noise. PIPP was able to stabilize and centre the planet OK but AS3 still makes the stacking jump about.

    Alan

    • Like 7
  17. Device Manager is showing your EQMod cable in 'Other Devices' with an exclamation mark, which means Windows hasn't found a driver for it. Odd, as FTDI drivers have been included with Windows for around 10 years or so. Right-click on the FT232R USB UART entry in 'Other devices' and select 'Uninstall device'. Select Yes if it asks whether you want to uninstall the software as well. Then unplug and replug your EQMod cable and see if Windows can now find the correct driver; if so it will be installed and listed in 'Ports (COM & LPT)' with an assigned port number for EQMod. If this doesn't work, you can download the drivers from here

    https://ftdichip.com/drivers/vcp-drivers/

    Choose the Windows Desktop 32 or 64 bit version as appropriate, or if you're not familiar with extracting drivers and pointing Device Manager to where it can find the driver to use, choose the Setup Executable on the right which should do it all for you. Unplug and replug your EQMod cable and it should find the driver, install it, and then list it in the 'Ports (COM & LPT)' section of Device Manager, where it will be assigned a COM port number which you can then select in the EQMod setup dialog box. The Auto Detect option in EQMod is a bit flakey so it's best to assign the port number directly.

    EQMod is showing PORT 16 as it did an auto-detect from ports 1 to 16 and couldn't find anything so just stopped at 16.

    Alan

    • Thanks 1
  18. 4 hours ago, sorrimen said:

    Bit late to the party, but I’ve often found that with more frames it may look softer pre sharpening. When you take it into registax however, you can push it much more without having to denoise in response, ending up with the better image. I’m not super experienced, so Neil and co.‘s advice is more important, but this may at least explain why you were seeing the difference pre registax. 

    @sorrimen Hi. I agree that pre-sharpening it's difficult to spot any difference between the stacks, but I applied the same wavelet enhancement to each stack and compared the results, and the difference between them was much more apparent then. 🙂

    Alan

  19. 52 minutes ago, CraigT82 said:

    I've had a go at your 300 and 3000 frame stacks, just to see the difference that ten times the frames gives you.  Same processing in Astrosurface (I applied the processes to the 3000 frame image first then applied that same process to the 300 frame). No other post processing or touching up applied, not even any noise reduction.

    That's quite impressive Craig. You don't actually need a very large number of frames to get a good image. The 300 stack is of course noisier, which gives the initial impression of a sharper image, but more detail is certainly revealed in the lower contrast areas of the 3000 stack, even though the initial image was softer. I'll try 1 min and 2 min videos next time as well, trying to ensure the focus is good. 😀 Tomorrow is currently showing green.

    Alan

  20. The image at f15 will be just over half the size of the f25 image.  For planetary (lucky imaging) you want a focal ratio that optimises your likelihood of getting the most detail, that is, not under or over sampling. This depends on the pixel size of your camera and also whether it's colour or mono.

    For a mono camera the optimal focal ratio is equal to the camera pixel size in microns x 3.

    For a colour camera it's the pixel size in microns x 6.

    The theory behind these figures is well explained in this article, which is in Dutch but Google translate works perfectly. 🙂
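    Putting those two rules and the image-scale comparison into numbers (the 3.75μm figure is just an example camera, not one from this thread, and the function names are mine):

```python
def optimal_f_ratio(pixel_um, colour):
    """Pixel size in microns x 3 for mono, x 6 for colour, as above."""
    return pixel_um * (6 if colour else 3)

def relative_image_size(f_a, f_b):
    """Planetary image scale is proportional to focal ratio."""
    return f_a / f_b

print(relative_image_size(15, 25))  # -> 0.6, 'just over half' the f25 size
print(optimal_f_ratio(3.75, True))  # -> 22.5 for a hypothetical 3.75um colour cam
```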

    Alan

     

    • Like 1
  21. 2 hours ago, CraigT82 said:

    As it's cloudy I've had another go, this time I made use of the Wiener deconvolution function in Astrosurface. Tried to define the detail whilst maintaining smoothness.

    First attempt left, latest right.

    Thanks @CraigT82 for your latest attempts, certainly smooth with good detail. 🙂

    I was wondering if you or the others, as an exercise, would like to try with different stack percentages to see if more stacked frames allows you to extract more detail despite it being slightly softer initially. I've zipped up a collection of 5%, 10%, 20%, 25%, 30%, 40% and 50% stacks ranging from 300 to 3000 stacked frames. The seeing was good as the AS3 graph indicates so even 50% stack has no frames below average. The one you've already processed was 15%.

    [attached image: AS3 frame quality graph]

    2022-09-19, Jupiter.zip

    Alan

    • Thanks 2
