Everything posted by vlaiv

  1. Both yes and no. Let me explain. In principle you should aim for the histogram being as far left as possible without clipping - but this is important only if you are taking a single exposure, where it maximizes the dynamic range of that single sub. Since we are stacking multiple exposures, that in itself increases dynamic range, and we can also use the trick above to deal with any saturation. For that reason, putting offset a little higher than the optimum value does not really matter that much. I use offset 64 with my camera, while most people use offset 50. Not much difference really. In fact, there is much more difference in using 14bit over 12bit or 16bit over 14bit (x4 in histogram space to the right, whereas here you only give up something like 1% by putting the histogram a bit further right) - but again, people are happily using 12bit cameras to produce excellent images.
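    A toy demonstration of the stacking point (made-up numbers, not real data): averaging integer-quantized subs yields fractional values, so the stack resolves levels finer than one ADU of a single sub:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 100 simulated subs of a faint, flat patch: true level 100.3 ADU,
    # 3 ADU of noise, quantized to whole ADU like a real camera.
    subs = np.round(rng.normal(100.3, 3.0, size=(100, 10000)))

    stack = subs.mean(axis=0)
    print(subs[0].std())  # ~3 ADU - single sub noise floor
    print(stack.std())    # ~0.3 ADU - the stack resolves sub-ADU levels
    ```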
  2. Clipping to the right is not the problem - or at least, not a problem that can't be solved easily. If you have clipping to the right, which means that some parts of the image are saturating (star cores almost always saturate, and sometimes even very bright nebula / galaxy parts do), then take just a few "filler" short exposures at the end. Something like 10-15s will do - short enough that even bright stars don't saturate. Just a few is enough, because you'll only be using the very high signal parts of these short subs - which means SNR will be good as is. Stack each set of subs into its own stack, then replace the saturated parts of the regular stack with the short one (make sure you do the scaling right if you replace while still linear).
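    For anyone wanting to script that last step, here is a minimal sketch, assuming both stacks are still linear, already aligned numpy arrays, and that scaling by the exposure ratio is appropriate (all names here are hypothetical):

    ```python
    import numpy as np

    def fill_saturated(long_stack, short_stack, long_exp, short_exp, sat_level):
        # Scale the short stack up to the long stack's flux level
        # (valid only while both are still linear).
        scaled_short = short_stack * (long_exp / short_exp)
        # Replace pixels at or above the saturation threshold.
        result = long_stack.copy()
        mask = long_stack >= sat_level
        result[mask] = scaled_short[mask]
        return result

    # e.g. 300s stack patched with 15s fillers, ~90% of full range as threshold:
    # fixed = fill_saturated(long_stack, short_stack, 300, 15, 0.9 * 65535)
    ```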
  3. Yes, I see - I did not say the "gain value" was ok, but rather that the offset value was good. A good offset value for a given gain is one where no pixels are clipped to the left - no pixels sit at the minimum value the camera produces (because if a pixel is at the minimum value, there is no way of telling whether it really is that value or was just clipped to it). If you read from the beginning of the thread you'll find instructions on how to check this. Don't use unity gain with this camera either - there it has very large read noise for a CMOS camera. Look at this diagram: Unity gain, being gain 117, is still in the large read noise mode. Read noise drops to nice levels at around gain 120 - which is why I recommend gain 120 for this camera. This could be what produced the noisy result if you used unity gain - 117. Other possible causes would be low transparency / higher than usual light pollution. Lack of astronomical darkness maybe? Was the Moon out?
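    The check itself can be as simple as loading a bias frame shot at the gain/offset under test and looking for pixels stuck at the minimum value - a rough sketch (the filename is hypothetical):

    ```python
    import numpy as np
    from astropy.io import fits

    # Load a bias (or short dark) taken at the gain/offset being tested.
    data = fits.getdata("bias_gain120_offset64.fit").astype(np.int32)

    print("min ADU:", data.min())
    print("pixels at zero:", np.count_nonzero(data == 0))
    # Any pixels at the camera's minimum value (0 here) mean the
    # histogram is clipped - raise the offset until none are.
    ```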
  4. It should create less noise, or rather - the noise should be better looking. It should not affect the level of noise - just the "shape" of it. You'll have to provide a bit more context. I have no idea when I said that, or in relation to what. Can you point me to the exact sentence?
  5. Here is another example of full EQ type mount: https://www.thingiverse.com/thing:2636470
  6. I'm glad you decided to process your data after all. Nice image!
  7. I understand. Yes, OAG is a good option for deep sky imaging / long exposure. The ASI178 is going to be a good camera for planetary imaging but not for DSO imaging, as the FOV is going to be very small. With such a scope you want as large a sensor as possible, so the Canon 80D is a sensible option. You will also want to look at binning as part of your processing workflow when you start DSO imaging (2700mm FL with almost any pixel size is going to lead to oversampling, and you'll want to bin your pixels - in software if needed - to circumvent that).
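    A rough sketch of the sampling arithmetic and a simple software bin, assuming ~3.7um pixels for the 80D (check your camera's spec):

    ```python
    import numpy as np

    def sampling_rate(pixel_um, focal_mm, bin_factor=1):
        # arcsec per pixel = 206.265 * pixel size (um) / focal length (mm)
        return 206.265 * pixel_um * bin_factor / focal_mm

    print(sampling_rate(3.7, 2700))     # ~0.28"/px - heavily oversampled
    print(sampling_rate(3.7, 2700, 4))  # ~1.13"/px after a x4 software bin

    def software_bin(img, n):
        # Average n x n pixel blocks; do this on linear data.
        h, w = (img.shape[0] // n) * n, (img.shape[1] // n) * n
        return img[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))
    ```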
  8. I'm confused here. Are we talking about planetary type imaging with the Skymax? No OAG is needed for that - you don't need guiding. The mount is capable of holding a planet in the camera FOV even at a very small ROI (320x200), and that is all you need.
  9. I was thinking about a 3D printed star tracker as well - as a project. I still don't own a 3D printer, but it's on my shopping list (hence all the thinking about possible projects). Estimating the error of such a tracker is not an easy task. Its size, and how many parts you print as opposed to using pre-made metallic parts (such as shafts and bearings), contribute to precision. Then there is the matter of the type of drive used. Are you going for a worm arrangement or a belt drive? Parts for a belt drive are much easier to print on a 3D printer.

    Let's see what sort of tracking precision you want in the first place. We start by setting some basic constraints - image sampling rate, max exposure length and acceptable star eccentricity. You mention 200mm FL, and let's take a very common pixel size - 4.5um. This gives ~4.6"/px as the sampling rate, so let's put our constraint at 4"/px. We want to be able to do a 5min exposure and have our eccentricity less than, say, 30%. Well, if our FWHM is about 6", then we want our error in RA over 5 minutes to be less than 6" * 0.3 = 1.8".

    Now we hit our first obstacle - we need to calculate what sort of error in the mechanical design will give us such periodic error. We also need to know the precision of our timing electronics - we need to time the motor steps such that the total error over 5 minutes is less than 1.8" (or rather, combined with periodic error).

    Let's say we want 5 steps per second. What sort of reduction do we need between the motor and the RA shaft? If we use a 1.8 degree per step motor (200 steps per revolution) with 16 microsteps and 5 steps per second timing, then for the 15"/s sidereal rate we have 3"/step, or 0.1875"/microstep. That is quite good resolution - the HEQ5 for example has 0.143617"/microstep. However, that requires quite a large reduction. There are 360 * 60 * 60 = 1296000 arc seconds in a full RA revolution, and 200 * 16 = 3200 microsteps per motor revolution. This means one motor revolution corresponds to 3200 * 0.1875 = 600 arc seconds. The reduction ratio is therefore 1296000 : 600 = 2160 : 1. This can easily be achieved with one 60:1 reduction followed by one 36:1 reduction. A belt system does not seem too far fetched for this, I would say.
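    The same numbers, worked as a quick script (sidereal rate rounded to 15"/s, as in the text above):

    ```python
    steps_per_rev = 200      # 1.8 degrees per full step
    microsteps = 16
    step_rate = 5            # full steps per second
    sidereal = 15.0          # arcsec/s, rounded

    arcsec_per_step = sidereal / step_rate               # 3.0"/step
    arcsec_per_microstep = arcsec_per_step / microsteps  # 0.1875"/microstep

    full_circle = 360 * 60 * 60                          # 1296000 arcsec
    arcsec_per_motor_rev = steps_per_rev * microsteps * arcsec_per_microstep  # 600
    print(full_circle / arcsec_per_motor_rev)            # 2160.0 : 1 reduction
    print(60 * 36)                                       # 2160 - two belt stages
    ```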
  10. The best you can do is with modern Canon cameras that have WiFi access and mounts that have WiFi as well - such as the AZGti. There is no all-in-one solution that works out of the box. Most people here use a small form-factor PC mounted next to the scope and some sort of remote desktop / VNC solution to work remotely from the comfort of their living space (this covers both wireless and wired connections). The alternative is to build an observatory with a warm room.
  11. My only concern is that your claim can be misinterpreted as: "10h of integrated stack is worse than a single 30min sub". You may not have intended to say that, but it can easily be concluded from your posts, especially when you claim: and also: When in fact, what you are saying is: "To my eyes, the STF of the stack looks much worse than the STF of a single sub, and for that reason I'm going to create a split image to show what both look like under the same level of stretch, in order not to mislead people into thinking that a single sub can be better than the whole stack and that the science is rubbish and all that ...."
  12. I think the main problem is that you expect STF to always perform the way you are used to. That may not be the case with automatic tools. You can also do a manual screen stretch - one that does not alter the data but shows what is there. With auto stretching you get what the algorithm "thinks" is a good stretch, but you will do a better job yourself, especially if you have experience in stretching data.
  13. I agree with all that you've said - not much point in going back and forth saying the same thing. This is why I proposed the "split image" approach. I'm trying to explain to you that the level of contrast, and how much dark structures stand out, depends on the level of stretch. STF did not apply the same level of stretch to both images - it is an algorithm that examines noise and some other features and then calculates a level of stretch. That does not mean it is the best stretch to show what is in the image. Given that the stack and the single sub have different levels of noise, STF will stretch them differently. In order to compare them, one needs the same level of stretch - and possibly the simplest way to do that is to make a single image out of the two halves while still linear and then stretch that image. It is guaranteed to have the same level of stretch and will show the differences easily. Alternatively - post the stack and the single sub and I'll gladly combine them for you and upload the still-linear result, for you to stretch to your liking, as well as a demonstration stretch done by me. In the meantime, what do you think about this: Which version do you prefer now?
  14. Not really, or rather - it depends. We have to be pragmatic about this and understand what is going on. I'll try a simple analogy and then connect it to a real world example in imaging.

    Consider this (I realize this may involve more mental gymnastics than I initially envisaged, since we are used to different temperature units - but try to convert what I say in C to F). The difference between half an hour and 10 hours is x20. That is sqrt(20) = ~x4.5 improvement in SNR. Now consider the difference between water at body temperature and water that is x4.5 colder - about 8.2C. There will be a vast difference in sensation between the two. Splash 37C water on someone and they won't get too excited - maybe a bit annoyed because they are wet now. Do that with 8.2C water and they will likely jump from their seat - it will feel rather cold. Drink a glass of water at one temperature and then the other and you will notice a very big difference. Swim in water at those temperatures and the difference will be huge. Now consider 8.2C versus 1.82C water, or maybe even 1.82C versus 0.4C. Both are very cold, both are near freezing, and we don't ascribe a big difference to those temperatures - yet they are still in a x4.5 ratio.

    You know that SNR improves like the square root of time spent on target. If you want to double SNR, you need to spend x4 the imaging time. At some point you enter the region of diminishing returns, since we can't tell the difference between SNR of 50 and 100, but we can tell the difference between SNR of 5 and 10. Notice that both are x2 improvements.

    If I do this: Everyone will see the obvious differences between the two:
    - Stack is lacking hot pixels and cosmic ray artifacts
    - Stack is much, much smoother - all the grain is gone (SNR is indeed improved adequately for x20 longer exposure)
    - Images are stretched differently and the stack lacks contrast - it is much "flatter"

    Everything is as one would expect for a high signal target and half an hour vs 10 hours of exposure (except the difference in stretch).
  15. Why don't you just try what I suggested, to see if it makes any difference to your perception of the quality of each image?
  16. Welcome to SGL. This has been discussed a number of times before, so check out these threads: There is also a calendar entry: (you can access it from the home page or via the Browse / Calendar menu item).
  17. Not weird at all. You are relying on STF to provide the same level of stretch - and that is not happening. The stacked data lacks contrast due to a different level of stretch, and that does not look nice. If you really want to see the difference, you need the "split screen" approach: copy half of the still-linear single sub data and paste it over the still-linear stack (have them aligned, of course, so they form a proper image after the copy/paste operation). Then proceed with any stretching / processing. This will make the difference obvious for a given level of processing, as both sides of the image will undergo the same processing.
  18. It won't be a problem even at 50ms if you use a low guide speed. It is high guide speed that will cause issues even at 20ms. You can't issue a pulse lasting less than that minimum, but even at 2"/px guide resolution with MinMo of 0.1 you would need to go lower than 20ms - to 14.8ms - to get a good correction and not overshoot; with 20ms you'll overshoot by 25%. I'll let you do the math for 0.5"/px guide resolution.
  19. I can see that the pdf guide is outdated and that it is indeed possible to set this value lower - but the default is still 50ms and it is still doing the same thing. It is a rather obscure setting and most people have not heard of it - hence it will be set to 50ms for most if not all people anyway.

    I'm sorry to say, but the logic the PHD2 developer applied is flawed - let's just see what happens if we use that line of reasoning. If we use a 6"/px guide scale, which according to the developer is too coarse anyway, and MinMo of 0.1, we get a 0.6" movement threshold (if you want to guide at 0.5-0.7" RMS this is not acceptable, but let's go with it). Indeed, at 13.5"/s it will take 44.444ms for the correction to happen - and it won't be possible, as the minimum correction is 50ms. We lower our minimum correction to 20ms and all is fine - no more error. However, as the developer mentioned, most people don't use such coarse guide resolution. Let's keep 20ms as the minimum pulse width and use a better guide resolution - say 2"/px, still with MinMo at 0.1. Now we have 0.2" as our minimum correction, and at a guide speed of 13.5"/s that is 14.8ms - again below even the lowest possible setting of 20ms. Using finer guide resolution just makes things worse with respect to this issue. If you have a minimum pulse duration and want to do precision guiding with EQMOD, you need to lower the correction speed. That is the only sensible way to do it.

    The manual further states: It's talking about setting the guide rate "too low". Is this true? Let's look at an example. If we set our correction speed fairly low - x0.3 sidereal, or about 4.5"/s - and need to make a very large correction, 4" in a single cycle, can we do it? It turns out that we can. Even with a very short guide cycle of 1s (I advocate using at least 2-3s as a guide cycle, to smooth out seeing influence and stabilize the mount), there is still enough time to apply the correction, as it takes less than 1s to correct 4" at a speed of 4.5"/s. (In reality it can take even longer than this, because the camera exposure is held until the correction is finished - it does not "fire" every second like a clock. I just wanted to point out that the correction is shorter than a guide cycle even for a very large correction, and one should not worry about chasing the error rather than correcting it.)
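    The pulse-length arithmetic used throughout this argument, as a small script (pulse length = correction / guide speed):

    ```python
    def pulse_ms(correction_arcsec, guide_speed_arcsec_per_s):
        # Pulse length needed for a given correction at a given guide speed.
        return 1000 * correction_arcsec / guide_speed_arcsec_per_s

    print(pulse_ms(0.6, 13.5))  # ~44.4ms - 6"/px, MinMo 0.1: below a 50ms minimum
    print(pulse_ms(0.2, 13.5))  # ~14.8ms - 2"/px, MinMo 0.1: below even 20ms
    print(pulse_ms(4.0, 4.5))   # ~889ms - a big 4" correction at x0.3 sidereal
                                #          still fits inside a 1s guide cycle
    ```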
  20. Here it is from their PDF documentation: I'm quite certain that one can accurately measure much shorter periods of time, but this is what they say. (Indeed, long ago on Win95 and similar operating systems, using a simple timer component, one was limited to that sort of timer resolution.) Things have changed, however, and one can now measure in nanoseconds (processor tick count).
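    For illustration only - modern APIs do expose far finer counters; in Python, for example:

    ```python
    import time

    t0 = time.perf_counter_ns()   # nanosecond-resolution monotonic counter
    time.sleep(0.020)             # a nominal 20ms interval
    dt_ms = (time.perf_counter_ns() - t0) / 1e6
    print(f"{dt_ms:.3f} ms")      # measured far finer than old ~50ms timers
    ```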
  21. My recommendation for a lower guide speed is based on the following:

    - There is a setting in EQMod kept primarily for legacy reasons - minimum pulse duration. It is there because, at the time, systems could not properly time intervals shorter than that. It is set at 50ms. This limits the minimum correction one can make, depending on correction speed. With a 13.5"/s guide rate, that equates to 13.5"/s * 50ms = 0.675". That is the minimum correction you can make, and it alone gives you something like 1.3" of error in each guide cycle. If you are off target by a small amount, you will overshoot by 0.675" minus the error - in all likelihood at least 0.4-0.5" with OAG, as it can measure error down to 0.1". In most cases that small error will be due to seeing. MinMo is set to 0.51 for both axes. This parameter is still set as a fraction of a pixel, so that works out to about 0.2". Any error in that range will trigger a correction - and a likely overshoot - and that will most often happen due to seeing.

    - Heavy scopes on a lighter mount have quite a bit of inertia. Using a high guide speed will make them want to keep going further than the correction intended. For this reason one must use a slow correction speed, so that the energy in the system is low.
  22. Yes, my recommendation was due to the fact that I was experiencing a rather fast ripple in tracking performance - a 13.8s one, related to the period of a single tooth on the motor gear. It was caused by improper meshing between the motor gear and the belt. I'm not seeing that in your guide logs - not yet anyway. It could turn out to be an issue after you get good guide results, but for now, if it's there, it is masked by much larger issues. Here are some recommendations I suggest you try:

    1. Bin your camera pixels at least x3 or even x4. There are plenty of them and you don't need to go as low as 0.43"/px for guiding. My camera gives around 0.48"/px and I bin it at least x2 to get close to 1"/px (OAG and 1600mm FL). With x4 bin you'll be at 1.72"/px, and that is good enough for the best performance the HEQ5 can offer (which is around 0.5" RMS).
    2. Use the ASCOM driver for your 290 camera instead of the native drivers - this will enable 16 bit readout mode and improve centroid precision and star SNR.
    3. You are using quite fast correction speeds. I think you would benefit from lowering those. You have it set at x0.9 sidereal (13.5"/s). Consider lowering it to something like x0.3-x0.4. An HEQ5 type mount responds much better to slower/longer corrections than fast short ones.
    4. Consider using a dark library / dark calibration with PHD2.
    5. If you think seeing is a problem - maybe experiment with guide exposures up to 4s long.
  23. And they always seem to be out of stock, with time to delivery running 1+ months.
  24. What FF/FR is usually recommended for Esprit 100 and what is resulting focal ratio?