Everything posted by vlaiv

  1. With a CCD sensor you want to bin at capture - in hardware. With a CMOS sensor like the one in the ASI1600 I would go for software binning. It should not really matter whether it is done by the driver or by software afterwards, except for two differences:

1. I'm not sure if there is loss of precision when binning is done in the driver (there should not be, since the ADC is 12-bit, so there are a couple of bits to spare for binning). This part can easily be checked: take one sub without binning and examine the values - they should all be divisible by 16 (only the top 12 bits are used by the ASI1600, so the lower 4 bits are set to 0). Now take one sub binned 2x2 by the driver and examine the file. If you still get all pixel values divisible by 16, that is not good - do it in software instead; but if you get values only divisible by 4, then it is fine to bin at capture time.

2. By binning in the driver (at capture time) you end up with less disk usage - each sub is 4 times smaller than an unbinned one (true for both CCD and CMOS).

The binning benefit is really simple. A CCD sensor uses one read per 4 pixels, so you get an SNR increase per pixel (or in this case per bin). This comes from the way CCD readout works - all electrons are "marshalled" off the chip column by column (or two columns at a time when binning) to the off-chip amp and ADC. So the amp and ADC operate on the electrons of 4 pixels (in a 2x2 bin) together and inject only one "dose" of read noise, while the signal is x4 (4 pixels added) - so SNR is increased. With CMOS sensors this cannot happen, since each pixel has its own amp and ADC unit, and read noise is injected prior to binning. So a CCD benefits more from binning than a CMOS (in terms of lowering read noise). Binning in itself also increases SNR with respect to the other noise sources (dark current, LP and shot noise), but at the expense of resolution / image scale.
If you're still confused about how binning increases SNR, think of it this way: when you stack 4 subs you increase SNR - a similar thing happens when you "stack" 4 adjacent pixels to form a single one (binning is a form of stacking - adding values). I personally bin the stack at the end. For some reason (purely my own notion) I believe that aligning unbinned subs yields better sub-pixel precision for star alignment than measuring star positions on binned subs (the justification being that centroid algorithms can find a position to a fraction of a pixel - but if you "increase the pixel size", the fraction also gets bigger, hence a loss of precision). This might have a slight effect on final SNR (I'm really not sure about this, since aligning smooths the noise in a sub a bit and introduces correlation between pixels, so it depends on whether "correlating binned" pixels is the same as "binning correlated" pixels - but such math is beyond me).
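The divisibility check and the software 2x2 bin described above can be sketched in a few lines of NumPy (a minimal illustration with synthetic data, not actual capture code):

```python
import numpy as np

# Simulate a 16-bit frame from a 12-bit ADC: only the top 12 bits carry data,
# so every value is a multiple of 16 (the lower 4 bits are zero).
rng = np.random.default_rng(0)
frame = rng.integers(0, 4096, size=(4, 6), dtype=np.uint16) * 16

# The check from the post: if ALL values are divisible by 16, the lower bits
# are unused - a driver-binned sub that still shows this has lost precision.
print(np.all(frame % 16 == 0))  # True for an unbinned sub

# Software 2x2 bin: sum each 2x2 block, in a wider dtype to avoid overflow.
h, w = frame.shape
binned = frame.astype(np.uint32).reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
print(binned.shape)  # (2, 3)
```

Summing (rather than averaging) the four pixels keeps all the precision, which is why binning at the stacking stage can be done losslessly.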
  2. Well, I would be - it is a very nice image. Just a hint: try processing it a bit differently. First, gradient removal - it looks like the image was taken while Orion was rising in the east (which is rather obvious when we consider the time of year - it is still too early for it to be fully south during the night), and there is quite a bit of light pollution in that direction, so you have a strong gradient from bottom right to top left. Next, I would suggest bringing out the dust patches seen in the image (places where the stars seem less dense - they are there, but obscured by interstellar dust). Make them a bit darker to emphasize that they are there. It will add another layer of "dynamics" to the image.
  3. If I'm not mistaken, the QHY163m has a 2" barrel that also has a female T2 thread bored into its inside diameter - so you can choose to mount it via 2" or T2. I guess your adapter to the lens is the T2 kind, with a male thread on the camera side? If that is the case, there is a simple solution for mounting filters: something like this: https://www.teleskop-express.de/shop/product_info.php/info/p7514_Baader-continuous-male-T2-thread-with-1-25--Filterthread.html which fits in the female T2 thread of the camera, and you probably need https://www.teleskop-express.de/shop/product_info.php/info/p561_Baader-Adapter--Continuous-male-T2-thread.html to go back to a female thread from the male thread of the filter adapter - this combination will add 10mm of optical path. The other option, which probably does not require additional optical path, is: https://www.teleskop-express.de/shop/product_info.php/info/p9003_ZWO-T2-Filter-Holder-for-1-25--filters.html Just look at the picture to get an idea of how it is used - there is enough female T2 thread left to screw on your lens adapter as you normally would. Each of these options lets you place the filter close to the sensor so there will not be any vignetting, but you won't be able to change filters without dismantling everything.
  4. Yes, the other way around is much simpler and better - just add another dovetail bar on the top side of the rings, then bolt your lens-to-T2 camera adapter to it and you are all set. Here is an image I found on the net depicting what it would look like - just replace the DSLR with the QHY163 ....
  5. Honestly, I don't know much about lenses and their performance for astrophotography. I did however hear that a good lens for astro work is the Samyang 135mm F/2. There are more than a couple of threads on this forum with images taken using this lens, so check it out. As for mounting, probably the easiest thing is to mount the lens & camera piggyback on the ST80 and use the ST80 for guiding.
  6. APOs are fast because they can be made fast. To control CA in doublets and ED doublets you usually need a long to medium focal length - like F10 and above for regular doublets and F7-F9 for ED doublets. APOs, on the other hand, don't have a problem with CA if properly executed, so since they are aimed at photographers and the most demanding observers, they tend to be short - a faster instrument (photography-wise) and a handier form factor, easier to mount and carry. Optical quality, on the other hand, is reflected in the price of APOs, so yes, they are optically really good. It would not make sense to make an expensive telescope that is not optically good - who would buy it? Here are examples of what you might get with an achromat when imaging. Mind you, these two were taken with a small-sensor OSC chip - so they are mosaics - and because the telescope was stopped down, SNR is not that good and they lack depth, but they do show the sharpness you can achieve with this kind of scope (I believe the first one is a 3x3 mosaic and the second a 2x2):
  7. I agree, that would be the best approach - just give it a go and practice your skills. In general an F5 achro will be at a disadvantage to a slower scope in terms of optical quality: field curvature will be stronger, CA will be worse (not an issue for you), and optical correction will be worse. It is simply harder to figure a fast lens to good optical quality than a slow one.
  8. Just to add: my ST102, while optically OK for visual, has an aberration that might impact NB imaging - I noticed a bit of astigmatism in the red part of the spectrum - so also check whether the scope is diffraction limited in all the bands you plan to image. Well, you don't really need to do that, but it might be another thing you notice in your images - slightly misshapen stars - and it might be down to poor optics.
  9. Yes, the StarTravel 102 (or 120) - big brother of the ST80. I have one and have done imaging with it with some success using a small OSC sensor, so field curvature was not an issue, but CA was - I used a Wratten #8 stacked with a UV/IR cut filter and stopped the aperture down, and got decent results. If you search this forum there is also a thread dedicated to CA control using this technique - different diameter aperture masks and results with and without the yellow filter. That is all for OSC; RGB filters should control CA a bit better (because you will be focusing on a specific part of the spectrum with each filter), and for NB filters CA is not an issue. The focuser on the 102 (and I guess the 120, but I don't know for sure) will need some work / tweaking to get it into shape for imaging, and even tweaked it will not cope with heavy gear. You will need to adjust it so there is no play in it (otherwise it wobbles a bit when focusing - maybe not noticeable visually, but you will see image shift on the sensor). I did not do anything special to mine: just disassembled it, cleaned it, re-greased it and tightened it properly.
  10. This might be an interesting read for you: http://www.astrophotography-tonight.com/astrophotography-comparing-three-80mm-refractors/ My advice: don't bother with changing the ST80 focuser or trying to find a suitable flattener - just use it as-is and crop the frames at the point where you find the stars unacceptably distorted. This will give you the opportunity to improve your imaging skills and learn a lot about processing until you gather the money to step up and get yourself a proper imaging scope plus a suitable corrector for it. If you struggle with the 1.25" focuser and want an upgrade, I would suggest, instead of getting a 2" focuser for the ST80, actually selling your ST80 (I don't know how developed the used scope market is where you live, but in general an ST80 should not be hard to sell - it is a very versatile item) and buying yourself an ST102 OTA - it comes with a 2" focuser (also a crappy one, but just live with it for now), and it will be a bit more suitable for your camera.
  11. List of resources that I use:
Forecast (clouds and wind): Clear Outside, Yr.no, Weather2umbrella (used to work better, it's kind of slow nowadays), Meteoblue
Live cloud cover: Sat24
Seeing forecast: Meteoblue (astronomical seeing), Netweather.tv (jetstream forecast)
Aerosol optical depth (transparency): Copernicus Atmosphere EU
  12. It will certainly not flatten the field, and since it is a reducer it will "widen" the field of view - which can only mean more curvature in the corners; how much, I can't tell until I try it. I'll certainly post results once I use it, so you can see them and decide whether you want to invest in buying it. There is however a flattener for the RC without focal reduction - you might want to take a look at that. For me, ideally there would be a combined focal reducer and field flattener - I can live with the field curvature on my sensor without focal reduction - but no one makes such a thing yet.
  13. Many people image with the EQ5, so I believe it is good for imaging at short focal lengths (in telescope terms, meaning <= 600-700mm). If you use a laptop you don't really need the GoTo kit - EQMOD plus Stellarium or some other planetarium software will serve as GoTo. If you don't want to use a laptop, then the GoTo add-on is an option, but it's not really necessary for imaging - it only helps find targets, and that can be done manually.
  14. My priority shopping list would be:
1. Mount (get yourself acquainted with EQ mounts, EQMOD, GoTo, tracking and such - and of course with stacking images)
2. Samyang 135 F2 lens (get yourself a bit more FL and see how it works for you)
3. SW 130PDS (move on to the "real" stuff)
  15. Yes, sorted some time ago - it was a collimation issue. The primary was out of collimation; once I fixed that, edge stars improved considerably, and now there is almost no distortion up to the edge of the field with the ASI1600 sensor. I have not yet tried the CCD47; I hope to give it a go in the next couple of weeks (I'm now expecting a shipment of a full set of filters, both RGB and narrowband, so I'll probably play with the NB filters while the moon is up - which will give me a chance to test the reducer). I think I'll probably still have issues with the CCD47 at the edges of the image. Careful focusing (at 2/3 - 3/4 of the way out from the center of the image) might help somewhat, but the field curvature is such that a sensor the size of the ASI1600's starts showing it in the furthest corners unless collimation is perfect. I suspect that even with perfect collimation and the focal reducer there will still be some noticeable FC in the corners.
  16. I'm not sure you are totally right. In my view you need to match the Gaussian PSF FWHM (the star FWHM on the actual image, expressed in arcseconds - a combination of seeing, the aperture's Airy disk, and guide error, which all add up to form the star image) to x3.3 pixel resolution (Nyquist for 2D). So while an 80mm objective gives a ~1.7" Airy disk (in green), the actual optimal resolution for recording it is ~0.53"/pixel - and that is the case when there is no seeing / guiding error (as in some planetary imaging). If you look at my previous post, the calculation for specific parameters (650nm, 2" seeing and 0.6" guide error) gives an optimum resolution of ~1.54"/pixel. While at 1.27"/pixel there is a bit of oversampling, I would not consider it a deal breaker in choosing the 490EX.
  17. Yes, I've run the numbers, and Adam J is correct about this one: the 490EX in >2" seeing will be oversampling with the ED80. Given the following - 650nm, 80mm aperture, seeing of 2" and a guide error of 0.6" (seeing-independent, so the measured error in PHD can and will be larger than this) - the ideal resolution is ~1.54"/pixel.
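A sketch of the kind of calculation involved (an assumption on my part - the exact recipe behind the ~1.54"/pixel figure isn't shown in the post; this version just adds the Gaussian-like blur sources in quadrature and applies a rough FWHM/1.6 sampling rule, so the final number will differ from the one quoted above):

```python
import math

# Inputs from the post above.
wavelength_nm = 650.0
aperture_mm = 80.0
seeing_fwhm = 2.0               # arcsec
guide_rms = 0.6                 # arcsec RMS
guide_fwhm = 2.355 * guide_rms  # RMS sigma -> FWHM for a Gaussian

# Airy disk FWHM approximation: ~1.02 * lambda / D, converted to arcseconds.
airy_fwhm = 1.02 * (wavelength_nm * 1e-9 / (aperture_mm * 1e-3)) * 206265

# Combine the blur sources in quadrature (treating each as roughly Gaussian).
total_fwhm = math.sqrt(airy_fwhm**2 + seeing_fwhm**2 + guide_fwhm**2)
print(round(total_fwhm, 2))        # combined star FWHM in arcseconds
print(round(total_fwhm / 1.6, 2))  # rough "optimum" pixel scale, arcsec/pixel
```

The ~1.7" Airy FWHM for an 80mm aperture in red light drops straight out of the second formula, matching the figure in the earlier post.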
  18. Good old trial and error? Although the distance is a real number with infinitely many possible values, in the end there is a finite set of spacer combinations and resulting distances - just go through a set of spacer combinations; for each one, do a really precise focus on a star in the center of the field and examine the star shapes in the corners (and calculate the FWHM value). Choose the distance that gives the best-looking stars / the lowest FWHM for corner stars. You can go a step further and plot your FWHM values against distance - interpolate the samples and see what distance would give the minimum of that curve.
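The plot-and-interpolate idea in the last sentence can be sketched like this (the FWHM measurements are made-up numbers, and a simple quadratic fit stands in for the interpolation):

```python
import numpy as np

# Hypothetical corner-star FWHM (pixels) measured at each tried spacer distance (mm).
distances = np.array([53.0, 55.0, 57.0, 59.0, 61.0])
fwhm = np.array([3.1, 2.6, 2.4, 2.7, 3.3])

# Fit a parabola to the samples and take its vertex as the estimated best
# spacing - interpolating between the distances actually tried.
a, b, c = np.polyfit(distances, fwhm, 2)
best = -b / (2 * a)
print(round(float(best), 1))  # estimated distance giving minimal corner FWHM
```

With real data you would want the measurements to bracket the minimum, as above, so the parabola's vertex falls inside the sampled range.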
  19. I would personally go with the 490EX. Given that they both have the same diagonal (16mm), they will both provide the same FOV with the respective lenses, only at different resolution (and I'm a fan of high-res imaging). I'm assuming here that you will use the following: 200mm lens, ED80, and ED80 with the x0.85 reducer/corrector. Here is a quick breakdown of resolutions:

490ex: 200mm - 3.81"/pixel; 510mm (ED80 + x0.85 reducer) - 1.49"/pixel; 600mm (ED80) - 1.27"/pixel
460ex: 200mm - 4.68"/pixel; 510mm - 1.84"/pixel; 600mm - 1.56"/pixel

Now there are a couple of things to consider, given that both cameras have the same read noise and pretty much the same dark current. You will need better guiding for the 490EX + ED80 combination given the resolution - in the ballpark of 0.6"-0.7" combined RMS to fully exploit it; the HEQ5 is capable of this, but usually requires mods (like tuning / replacing bearings and the belt mod). The 490EX also has less full-well capacity, meaning you will need somewhat shorter exposures in order not to saturate stars. With the 460EX you will be able to do longer exposures thanks to the well depth, but again it will put a strain on your guiding - combined RMS should be in the range of 0.7"-0.8" (just a bit looser than with the 490EX). In general you should not be concerned with seeing limitations, as both cameras at such focal lengths can be used in all but the worst seeing. From what I can find on the internet, both chips have about the same QE, so that can be left out of the decision. So in general you can expect more data to store and process from the 490EX than the 460EX (more pixels per sub, and possibly more, slightly shorter subs) - if this is a concern, take it into account. The only other thing to consider is whether one of the chips has some strange behavior you would like to avoid (strange artifacts, doesn't calibrate well, or whatever - I honestly have no idea whether that is the case, but I doubt either of these chips has such issues).
And then, of course, there is the price difference, but I'll leave that to you ... HTH
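The resolution figures above follow from the standard pixel-scale formula; a quick sketch (the 3.69 µm and 4.54 µm pixel sizes are the commonly quoted values for these two cameras, stated here as an assumption):

```python
# Pixel scale in arcsec/pixel: 206.265 * pixel_size_in_um / focal_length_in_mm.
def pixel_scale(pixel_um: float, fl_mm: float) -> float:
    return 206.265 * pixel_um / fl_mm

# Reproduce the breakdown from the post above.
for name, pix in (("490ex", 3.69), ("460ex", 4.54)):
    for fl in (200, 510, 600):
        print(f"{name} @ {fl} mm: {pixel_scale(pix, fl):.2f}\"/pixel")
```

The 206.265 constant is just the number of arcseconds per radian divided by 1000, which makes the µm/mm unit mix work out.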
  20. Not really sure what you are asking, but let me try to help. If you are set on making a mosaic (multiple "panels" stitched into one image), first make sure you have proper tiles for it. This means the framing of each tile needs a slight overlap with the adjacent tiles. If you decide on a 4-tile mosaic, you need top-left, top-right, bottom-left and bottom-right tiles; the top-left tile needs to overlap with the top-right and bottom-left ones. A good overlap is somewhere around 10% of the frame - it really needs to contain enough features to properly align and stitch the tiles together (enough stars - similar to stacking, where you can only stack if there are enough stars to align the subs). Also make sure there are no holes in the area covered by the tiles. Depending on the equipment and software you are using, there may be tools to help you frame / position your scope to take the tiles. Here is an example image from the EQMosaic software (part of the EQMOD suite) showing how to position tiles:

Now, when you have captured, calibrated and stacked your subs for each tile, you need software to do the stitching. I don't know of any software that will do it out of the box (I'm sure there must be some, I just haven't found it). I use ImageJ and the MosaicJ plugin. It involves one more step prior to stitching, and that is background equalization (making sure the background levels are the same - I do my imaging under quite heavy LP and background levels vary with the part of the sky, so it is more than likely that tiles will have different background levels depending on when they were recorded). Once you have your image stitched, it will be almost 4 times the size of the original (2x2 minus overlap) - now you can choose whether to keep the larger image size or reduce it. If you opt to reduce the image size, it is best to use binning rather than a plain resize - this will improve your SNR.
Depending on your choice, the final image will either have the original resolution (if you do not bin the mosaic) or half the resolution your setup provides if you opt for a 2x2 bin (if you shoot, for example, at 1.5"/pixel with a 4000x3000 pixel sensor, the mosaic will either be 8000x6000 at 1.5"/pixel if you leave it as is, or 4000x3000 at 3"/pixel if you bin it 2x2). You can bin even higher if you have a large sensor and would bin / reduce size anyway when processing a single stack. In a mosaic, binning will of course have the benefit of an SNR increase and, thanks to the lower resolution, it will also provide "tighter" stars in pixel units (they will still have the same FWHM in arcseconds, but since you are using more arcseconds per pixel, in pixel terms the stars will be smaller). At the end, just process the mosaic as you would normally process a single stack. HTH
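The background-equalization step mentioned above can be sketched with synthetic tiles (a toy example - real tools estimate the background more carefully than a plain per-tile median):

```python
import numpy as np

# Four synthetic tiles with different background levels (different sky
# brightness at capture time), plus some noise.
rng = np.random.default_rng(1)
tiles = [rng.normal(loc, 5.0, size=(100, 100)) for loc in (100.0, 130.0, 90.0, 115.0)]

# Bring every tile to a common background by subtracting the offset between
# its median background estimate and a common target level.
target = np.median([np.median(t) for t in tiles])
equalized = [t - (np.median(t) - target) for t in tiles]

# After equalization the tiles share one background level.
print([round(float(np.median(t)), 1) for t in equalized])
```

In practice you would estimate the background from star-free regions rather than the whole tile, but the idea - match levels before stitching - is the same.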
  21. I've been away for some time, but right before I left I was entertaining the idea of using a cheap Plossl eyepiece as a source of collimation lenses, similar to @robin_astro's junk-box arrangement, but with only two simple doublets from a disassembled Plossl - one acting as the eyepiece part and the other as the regular lens before the camera. This is of course based only on looking at Plossl design diagrams - which show two doublets - I have no idea of the actual focal lengths of those lenses, whether they are the same, and if not, whether they can still be used in this arrangement. The idea is basically: take a cheap Plossl, saw it in half, insert the SA between the first and second lens groups and adjust the distances of the groups so that the beam between them is collimated. There is always the option of buying actual lenses (even really cheaply, from places like Surplus Shed), but I'm wondering how difficult it would be to mount everything together - ideally there would be some sort of lens mounts with T2 threads on both ends for that, but I don't think I've found anything like that cheap online.
  22. If using SharpCap - look at the control panel on the right; just above resolution there is a "colour space" (or similarly named) option - you should use RAW8 for high-speed capture (planetary and such). RAW16 is the higher bit-depth setting (so a smaller max frame rate, but useful for flats and the like). Actually, depending on your setup and what you plan to shoot - how bright it is - you might benefit from RAW16 mode, but you need to check whether the signal in a single frame at your selected exposure exceeds a value of around 500 after dividing by 16 - so if you inspect your 16-bit frame in some software and get signal values higher than 8000 at unity gain, then it is worth using RAW16 mode. Somewhere down after the gain setting there is a "Turbo USB" setting - try increasing it as long as the capture runs smoothly and frames are not blank (if you get empty frames or the capture stalls, you have pushed it too far for your computer, but I suspect that with an i7 you should be able to push it all the way up). There is also a "High speed mode" (at least I think so, it's been a while since I last used it for planetary) - set that to 1 / On. Make sure you haven't turned the "Frame rate limit" option on - it should be set to maximum possible. HTH
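The RAW8-vs-RAW16 rule of thumb above boils down to a single comparison (the function name is mine, just for illustration):

```python
# RAW16 is worth it when the 16-bit pixel readings, divided by 16 (only the
# top 12 bits carry data), exceed ~500 - i.e. raw 16-bit values above ~8000
# at unity gain, per the rule of thumb in the post above.
def raw16_worthwhile(peak_16bit_value: int) -> bool:
    return peak_16bit_value / 16 > 500

print(raw16_worthwhile(9000))  # True  - bright target, RAW16 pays off
print(raw16_worthwhile(3000))  # False - stick with RAW8 for frame rate
```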
  23. Hm, looks like you are either using USB 2.0, 12bit mode, or have USB speed setting wrong. According to specs for ASI174 (128fps in 12bit mode, 164fps in 10bit mode) I would expect at least 100fps on i7, that would be of course with usb 3.0 and appropriate usb speed setting.
  24. Yes, it should be as close to a "whole" number as possible; a bit bigger is OK if rounding down (I suppose that is the rounding method used). So 1.001 is also fine, and it will introduce error only when the signal is 1000 or larger if rounding down (1000 x 1.001 = 1001, but 999 x 1.001 = 999.999 = 999 rounded down) - and then again the signal is so large there that SNR is not affected much. If we take gain 139 to be 1 e/ADU (or a bit larger - if I'm not mistaken, SGP writes a value a bit larger than 1 in the FITS header; let me check - yes, here it is: EGAIN = 1.0011096841452 / Electrons Per ADU), then because a log scale is used for gain, multiplication is done by addition, and we calculated that x2 is +60, so close to a "whole number" should be: 139, 139+60 = 199, 139+120 = 259, 139+180 = 319, ... and also the other way, 139-60 = 79.

Now, there is one important thing to note about gain: raising gain lowers dynamic range because of the fixed bit-depth output (12-bit ADC). You have roughly 4000 "units" or "slots" to map the electron count onto (12 bits is 4096, but some of the range is "lost" to offset). If we use gain 199, with e/ADU of 0.5, then the range 0-2000e maps to 0-4000 - effectively letting you capture at most 2000e per pixel. The same goes for gain 259 - there you cut your "full well" capacity to only 1000e. Going the other way, if you lower the gain you get a bigger dynamic range / increased pixel capacity, so at gain 79 you will be able to record a 0-8000e range, but at an expense: mapping a 0-8000e range onto a 0-4000 numeric range means you lose a bit of precision - you are cutting off the least significant bit of data (in this case rounding your data to even numbers, i.e. numbers divisible by 2). This introduces quantization noise into the subs, but it is not as bad as it may seem at first.
Here is what happens: by using gain 79 you are increasing dynamic range and adding a bit of quantization noise (a bad thing), but it enables longer exposures because of the increased dynamic range. So a single sub will have higher SNR to start with if you exploit the additional dynamic range, and the quantization noise won't do as much damage. The other important point is that quantization noise smaller than the other noise sources gets "masked" by them, and to a certain degree the combined noise still behaves like Gaussian noise (remember that read noise is around 1.6e, and quantization noise from losing the LSB is 1e - actually less than that on average; it is 0.5e, since half the values - those already even - get no noise added, while odd values get 1e added). As the wiki states (a good page on quantization noise): https://en.wikipedia.org/wiki/Quantization_(signal_processing)#The_additive_noise_model_for_quantization_error "One way to ensure effective independence of the quantization error from the source signal is to perform dithered quantization (sometimes with noise shaping), which involves adding random (or pseudo-random) noise to the signal prior to quantization. This can sometimes be beneficial for such purposes as improving the subjective quality of the result, however it can increase the total quantity of error introduced by the quantization process." So read noise can be seen as random noise added to the signal prior to quantization. So while using a gain of 79 can be OK, it would need to be used with longer subs in order to be "OK" - which of course means a bigger probability of a ruined sub due to wind, poor tracking, an airplane / satellite passing by, etc. That is why I personally would not use it. The SNR gained by a longer exposure also depends on LP; I do my imaging under heavy LP (mag 18 skies) at the moment, so there is no great benefit in long subs for me.
It might be different for you though.
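The gain ladder above (79, 139, 199, 259, 319) comes straight from the 0.1 dB units: a doubling of amplification is 20·log10(2) ≈ 6 dB ≈ 60 gain units. A quick check:

```python
import math

# Gain is set in units of 0.1 dB, so a x2 change in amplification is
# 20*log10(2) dB = ~6 dB = ~60 gain units, as derived in the post above.
step = round(200 * math.log10(2))  # 60
unity = 139                        # ~1 e/ADU on the ASI1600 (from the post)

for n in (-1, 0, 1, 2, 3):
    print(f"gain {unity + n * step}: ~{2 ** -n:g} e/ADU")
```

Each +60 step halves e/ADU (and the effective full-well), while -60 doubles them, which is exactly the trade-off discussed above.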
  25. Regarding PNG: I think it should be able to do single-channel / greyscale images; it might be that the software loading the PNG files interprets their contents in a certain way. It might also be that the SharpCap settings were such that it actually recorded the PNG as colour or something. FITS is just simpler to work with (though it does have drawbacks, like non-standardized multi-channel support). Gain settings are not as straightforward as multiplying by 2 (and no, it is not binary OCD at this point - there is a real explanation for why certain gain levels are better). Let me quickly explain gain with ZWO cameras and how it works. It uses a log scale / decibel system to set the conversion factor. If you look at the ZWO website, there are graphs of conversion factor versus gain setting. You will see that gain is expressed in units of 0.1dB - which means a gain of 139 is 13.9dB of amplification. The graph shows that e/ADU at gain 0 is around 5. If we take the log formula for decibels and amplitude ratios: P = 10^(Gain / 20) * P0 (10 to the power of gain in decibels - in this case 13.9, because gain is 139 in 0.1dB units - divided by 20, times the base value) - that gives a factor of ~4.95, so e/ADU differs by roughly a factor of 5 between gain 0 and gain 139 (which is what the graph shows: from 5 to 1 for gains 0 and 139 respectively). So if you want a conversion factor of 2 (e/ADU of 0.5), it should be at a gain 20*log(2) decibels higher - in 0.1dB units, 139 + 60 - which gives roughly gain 199 for 0.5 e/ADU. You can't really tell from the graph, only approximately, but at gain 200 e/ADU looks like 0.5, which is just what we calculated. Why bother with all of this? Because of quantization noise, which is a bad kind of noise.
It is bad because it follows neither Poisson nor Gaussian statistics, and while stacking will still increase SNR, it will do so in a way you cannot calculate (you can't tell what the SNR increase will be for a given number of stacked frames). And of course any additional noise is bad. I will quickly demonstrate what it is all about. When you record your frames, they are stored in integer format - only whole numbers are recorded. It is also interesting to note that photons are discrete by their quantum nature - they don't come in fractions; a whole number of photons hits each pixel. So, for example, if 15 photons hit a pixel during an exposure and your e/ADU is set to 1, this produces a count of 15 ADU for that pixel - fine, you have the exact number of photons (electrons). But if you use a gain setting with a conversion factor of 1.5, it will interpret 15 photons (or rather electrons) as having a numerical value of 22.5 - and .5 can't be written as an integer, so it has to be rounded (up or down, it does not matter as long as you are consistent) - so the camera outputs 22. You now want to know how many photons (electrons) it was to start with, so you divide 22 by 1.5 - and end up with 14.6666..., not the original 15. So in this particular example we added 0.3333e worth of noise to the original signal. By simply choosing the right gain we can avoid this - so why not?

Now, on to DSS and precision. I stopped using DSS for two reasons. First, it would randomly crash on large files because of memory issues (try a sigma-clip stack on many large files - as you would with 1-minute exposures on the ASI1600 at 32MB per file - to see it happen). Second was my suspicion that it uses 16-bit math for frame calibration (although it produces a 32-bit stack). Now here is the important bit.
Although the frames we record with the ASI1600 have 12-bit precision (a 0-4095 range of values), we get good results by calibrating them with a large number of darks for the master dark and a large number of flats for the master flat. Whenever you add or average a number of frames, you increase the bit precision needed in order not to introduce more noise. For example, if you have 12-bit darks and take 16 of them, you are fine with 16-bit precision; but if you use 64 of them, the total precision needed for the resulting master is the 12 bits you started with plus 6 bits for 64 frames added together - you are already operating on 18-bit data. Write it to 16 bits and the same thing happens as with gain above - you lose the 2 lowest bits and hence introduce noise into the mix. Can we help it? Yes, with a rather simple solution: use 32-bit precision for all calculations. If it is so easy, why not do it? DSS was fine for a number of years because of CCD sensors - one simply uses far fewer frames with a CCD than with a CMOS (smaller memory requirements and less error from not using high-precision math). You might even notice that DSS recommends median stacking for large numbers of frames (median works the same in both low- and high-precision math - it yields the same result). To address your last sentence: no - if you use the average stacking method (the usual one, not median or something else), each time you add another frame you increase the precision required to fully store the result. Think of it this way: 2 bits of data represent the integer range 0-3. Add two numbers in the range 0-3 together and in the general case you get a result in 0-6; you need 3 bits to store that precisely, because 3 bits covers the range 0-7. So each addition needs another bit of precision. Another way to think about it is with ordinary numbers: add two single-digit numbers together and you may need two digits to write down the result (7 + 6 = 13).
The same thing happens when you use an average instead of a sum (this is where my binary OCD kicks in: a power-of-2 number of frames gives exact values when averaging - much like dividing by 10 in decimal, where the digits stay the same and only the decimal point moves). Hope this explains things a bit better (it might look overwhelming, but it is really simple; if you don't fully get it, it is probably because of the way I explained it, not because it is complicated).
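The two worked examples above - the 15-electron / factor-1.5 quantization error, and the bit growth when summing frames - in code form:

```python
import math

# Quantization example from the post: with a conversion factor of 1.5 (ADU per
# electron), 15 electrons become 22.5 ADU, stored as the integer 22; dividing
# back by 1.5 recovers 14.67e instead of 15 - the difference is the error.
factor = 1.5
electrons = 15
adu = math.floor(electrons * factor)  # 22 (integer storage, rounding down)
recovered = adu / factor              # 14.666...
print(adu, round(recovered, 2), round(electrons - recovered, 2))

# Bit growth when summing frames: summing N frames of b-bit data needs
# b + ceil(log2(N)) bits, so 64 twelve-bit darks need 18 bits - and 16-bit
# intermediate math would drop the two lowest bits.
n_frames, bits = 64, 12
needed = bits + math.ceil(math.log2(n_frames))
print(needed)  # 18
```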