Everything posted by vlaiv

  1. The diameter of the Airy disk for a 180mm scope is 1.43". The optimum sampling rate would therefore be ~0.3"/px. With the ASI178 you will be at 0.18"/px, which is oversampling by almost double. That is perfect if your ASI178 is the color model (the Bayer matrix effectively halves the sampling rate), but if it is mono I would recommend that you bin x2.
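
     A rough back-of-envelope sketch of those numbers in Python. The wavelength (~510nm, which reproduces the 1.43" figure), the ~2700mm focal length and the ASI178's 2.4um pixels are my assumptions, and the "optimum" rule used here is Nyquist sampling of the Airy pattern's approximate FWHM - roughly what the figures above imply, not necessarily the exact derivation:

        import math

        RAD_TO_ARCSEC = 206265.0

        wavelength = 510e-9   # m - assumed; gives the quoted 1.43"
        aperture   = 0.180    # m
        focal_len  = 2.700    # m - assumed (e.g. a 180mm Mak)
        pixel      = 2.4e-6   # m - ASI178 pixel size

        airy_diameter = 2.44 * wavelength / aperture * RAD_TO_ARCSEC  # ~1.43"
        pixel_scale   = pixel / focal_len * RAD_TO_ARCSEC             # ~0.18"/px
        optimum_scale = airy_diameter * (1.02 / 2.44) / 2             # ~0.3"/px (Nyquist of Airy FWHM)

        print(f'Airy: {airy_diameter:.2f}", scale: {pixel_scale:.2f}"/px, optimum: {optimum_scale:.2f}"/px')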
  2. Calibrate at around 0 DEC.
  3. I like the one where scale is glued on the bottom part and top part has part of it cutout and hand is made from simple piece of wire. Let me see if I can find a picture of it. Something like this: This one does not appear to have moving hand - but there is plenty of room to accommodate one. That should help with initial setup - base needs to be pointed in rough direction of north and then hand can be adjusted on a star or similar.
  4. Precision of setting circles was always a questionable thing. Suppose that you can accurately read off half a degree of that scale. This is probably the most precision you can get, as the diameter of the base is probably around 50cm. That means the circumference is about 157cm and a single degree is about 4mm. With a 32mm plossl you'll get x37.5 magnification, or about 1.33 degrees TFOV, so in principle you should be able to put a target somewhere in the FOV of your finder eyepiece. Similarly, a regular 7x50 finder has something like 5 degrees TFOV, and if you put Polaris in the center you'll in principle be within one degree of the NCP. That should still provide enough precision to land a target in the FOV of the finder eyepiece. BTW - you can make a slight tweak to the above design that will let you roughly maneuver the mount into position and then adjust the setting circle arm to "zero it in". Just attach it in such a way that it can move/rotate slightly to give you a couple of degrees of adjustment (remember, ~4mm is one degree, so you really only need a few centimeters of motion).
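
     To make the arithmetic explicit - a small sketch, assuming a 50cm base, a 1200mm focal length scope and a 50 degree AFOV plossl (all of these are illustrative assumptions):

        import math

        base_diameter_cm = 50.0
        circumference_cm = math.pi * base_diameter_cm     # ~157 cm
        mm_per_degree    = circumference_cm / 360.0 * 10  # ~4.4 mm of scale per degree

        scope_fl_mm   = 1200.0   # assumed focal length
        eyepiece_mm   = 32.0
        afov_deg      = 50.0     # assumed apparent field of the plossl
        magnification = scope_fl_mm / eyepiece_mm         # x37.5
        tfov_deg      = afov_deg / magnification          # ~1.33 degrees

        print(f"{mm_per_degree:.1f} mm per degree on the circle, {tfov_deg:.2f} deg TFOV at x{magnification:.1f}")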
  5. First azimuth: the scale is on the top circle but the hand is fixed to the bottom one, so the scale rotates and the hand stays put. You set up your scope so that it is pointing north and the hand is pointing to 0 degrees on the scale. Same for altitude. You will need some sort of application on your phone / tablet or something similar to convert RA/DEC coordinates of the object to Alt/Az coordinates for your location at a given time (the calculation is not trivial - so either pre-print some tables of times for each object or use an electronic aid).
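
     The conversion itself is only a few lines once you have local sidereal time (getting LST accurately is the genuinely fiddly part, so it is left as an input here). A minimal Python sketch using the standard spherical trigonometry formulas; the function name and conventions are mine:

        import math

        def radec_to_altaz(ra_deg, dec_deg, lat_deg, lst_deg):
            """RA/DEC to Alt/Az for a given observer latitude and local sidereal time.
            All angles in degrees; azimuth is measured from north through east."""
            ha  = math.radians((lst_deg - ra_deg) % 360.0)   # hour angle
            dec = math.radians(dec_deg)
            lat = math.radians(lat_deg)

            sin_alt = math.sin(dec) * math.sin(lat) + math.cos(dec) * math.cos(lat) * math.cos(ha)
            alt = math.asin(sin_alt)

            cos_az = (math.sin(dec) - sin_alt * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
            az = math.acos(max(-1.0, min(1.0, cos_az)))
            if math.sin(ha) > 0:          # object west of the meridian
                az = 2 * math.pi - az

            return math.degrees(alt), math.degrees(az)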
  6. No, not needed at all. It is doing a small amount of "damage" to your imaging, but not much. Such filters usually have something like >95% transmission, so you are losing about 5% of light or so (some have 98% transmission while others are 90(ish)% - depends on the actual filter). Any optical surface will introduce some aberrations, and how much depends on the quality of the surfaces. Some filters cause star bloat or not perfectly round stars - but most are ok and have minimal impact. Depends - if you are happy to accept the small light loss and don't have any significant image degradation (and I'm guessing not, since you use it with LRGB anyway), then that can outweigh the hassle of removing the filter from the imaging train every time you switch between LRGB and NB. If it's not too much work to remove it - then why not, as it is not needed.
  7. AstroImageJ will do FWHM - not really small but free for sure.
  8. That approach is viable with CCD sensors, and if you use multiple exposure lengths it is a good idea to shoot only the longest darks and scale them for the shorter ones, as the longest darks will have the least overall thermal noise (the longest dark gathers the most thermal current and hence associated noise, but when it is scaled down the noise is scaled down as well - signal scales with time but noise with the square root of it). With CMOS sensors it usually does not work, as there are issues with bias files. I've tried 5 different CMOS cameras so far and not one had usable bias subs (for dark scaling). Another place bias can be used is if you don't have set point cooling on your camera. In that case, it is a good idea to try dark scaling (an algorithm that tries to scale the master dark to match the sub it is calibrating) - again, bias needs to be removed prior to dark scaling, as the bias signal does not depend on temperature. Interestingly enough, DSLR CMOS sensors don't seem to suffer from bias issues like dedicated astro CMOS cameras do (or I could be wrong there).
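
     For illustration, this is the dark scaling idea in a few lines of numpy - only the thermal part scales with exposure time, which is why the bias has to come out first. The names and the assumption of well-behaved (CCD-like) bias subs are mine:

        import numpy as np

        def scale_dark(master_dark, master_bias, dark_exp_s, light_exp_s):
            """Scale a long master dark to a shorter exposure: remove bias, scale the
            thermal signal by the exposure ratio, then put the bias back."""
            thermal = master_dark - master_bias
            return thermal * (light_exp_s / dark_exp_s) + master_bias

        # e.g. reuse a 600s master dark to calibrate 120s lights:
        # scaled = scale_dark(master_dark_600, master_bias, 600.0, 120.0)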
  9. I would like to further discuss this and similar approaches to star removal if you are up for it. The main issues that I see with star removal are:

     1. Stars often have significant signal, hence their shot noise is much stronger than the background nebulosity signal. Subtracting a star profile from the image will not leave background signal behind, but rather a noisy patch where the star used to be.
     2. Some stars are clipping, so we don't really have a good means to produce a star profile in that spot - even a clipped one (not all stars in all subs will be equally clipped, and the stacking algorithm needs to be aware of clipping in order to produce good data).

     So far I have identified some important points in the star removal process:

     - Understand per pixel SNR. This is actually much easier than most people believe. The only problem is light pollution, as it is unwanted signal that needs to be removed before we can produce per pixel SNR. There are a few more "gotchas" that need to be addressed here, but the basic idea is as follows: we stack a number of subs and produce the mean value as the signal output, and we can use a stddev stack for the noise component. A simple approach would be to take the signal, subtract light pollution to get our signal strength, and then use the stddev stack divided by the square root of the number of stacked subs to get the noise part. Of course, if we don't use a simple average, we need to modify the noise part accordingly (weighted standard deviation, removal of pixels due to sigma clip - we don't have the same number of samples for each pixel, and so on - these would be the "gotchas" mentioned previously).
     - In the stacking method, we would produce a clip map of our stack as well. This helps create accurate star profiles - in case of clipping we would know it and use only non-clipped pixels in the final stack, or do HDR composing or whatever, and when creating the star profile we would automatically set clipped pixels to NaN.

     In the end, when we remove stars - by subtracting the profile for each star - we examine the remaining signal value, and if it is below a certain threshold - like remaining signal / original noise < 3 - we place a NaN value in that spot. We similarly place NaN values in all clipping pixels. What remains is to "fill in the blanks", or reconstruct the NaN values somehow. There are several algorithms to do that, but I'm wondering if the algorithm that uses connectivity can be used/modified here and what sort of results it would produce?
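
     As a starting point for discussion, here is a minimal numpy sketch of just the per pixel SNR / NaN-flagging part, for a plain average stack with none of the "gotchas" (weights, sigma clipping, clip maps) handled - names are made up for illustration:

        import numpy as np

        def per_pixel_snr(subs, light_pollution, snr_floor=3.0):
            """subs: aligned, calibrated subs as an (n, h, w) array.
            Returns per pixel SNR and the signal with low-SNR pixels set to NaN
            (the 'fill in the blanks' reconstruction would happen afterwards)."""
            n = subs.shape[0]
            signal = subs.mean(axis=0) - light_pollution     # remove LP before computing SNR
            noise  = subs.std(axis=0, ddof=1) / np.sqrt(n)   # noise of the mean, simple average only
            snr    = signal / noise

            flagged = np.where(snr < snr_floor, np.nan, signal)
            return snr, flagged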
  10. Pay no attention - just a nerd checking in ....
  11. Both yes and no, let me explain. In principle you should aim for the histogram being as far left as possible without being clipped - but this is important only if you are doing a single exposure, as it increases the dynamic range of a single sub. Since we are using stacking and multiple exposures, that in itself increases dynamic range, and we can also use the above trick to deal with any saturation. For that reason, putting the offset a little bit higher than the optimum value does not really matter that much. I use offset 64 with my camera, while most people use offset 50 - not much difference really. In fact, there is much more difference between 14bit and 12bit, or 16bit and 14bit (each is a x4 gain in histogram space to the right, while raising the offset a bit only costs you something like 1% of it) - but again, people are happily using 12bit cameras to produce excellent images.
  12. Clipping to the right is not a problem, or at least not a problem that can't be solved easily. If you have clipping to the right, which means that some parts of the image are saturating (like star cores - they almost always saturate, but sometimes even very bright nebula / galaxy parts), then take just a few "filler" short exposures at the end. Something like 10-15s will do - short enough that even bright stars don't saturate. Just a few is enough, because you'll only be using the very high signal parts of these short subs - which means SNR will be good as is. Stack both sets of subs into their respective stacks and then replace the saturated parts of the regular stack with the short one (make sure you do the scaling right if you replace while still linear).
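
     A minimal sketch of the replace-while-linear step in numpy - the exposure lengths, the saturation level and the assumption that both stacks are averages in the same units are mine:

        import numpy as np

        def fill_saturated(long_stack, short_stack, long_exp_s, short_exp_s, sat_level):
            """Replace saturated pixels of the long-exposure stack with the short
            'filler' stack, scaled to the same effective exposure while still linear."""
            scaled_short = short_stack * (long_exp_s / short_exp_s)
            return np.where(long_stack >= sat_level, scaled_short, long_stack)

        # e.g. result = fill_saturated(stack_300s, stack_15s, 300.0, 15.0, sat_level=60000)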
  13. Yes, I see, I did not say the "gain value" was ok, but rather the offset value. A good offset value for a given gain is one where no pixels are clipped to the left - no pixels sit at the minimum value the camera produces (because if a pixel is at the minimum value, there is no way of telling whether that is really its value or it has just been clipped to it). If you read from the beginning of the thread, you'll find instructions on how to check this. Don't use unity gain with this camera either - it has very large read noise for a CMOS camera. Look at this diagram: unity gain, being gain 117, is still in the high read noise mode. This seems to drop to nice levels at around gain 120 - that is why I recommend gain 120 for this camera. This could have been the problem that produced the noisy result if you used unity gain - 117. Other possible causes would be low transparency / higher than usual light pollution. Lack of astronomical darkness maybe? Was the Moon out?
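
     For checking the offset, something along these lines works on a bias or dark sub (assuming the camera's minimum raw value is 0 - adjust if not):

        import numpy as np

        def offset_ok(calibration_sub, camera_min=0):
            """True if no pixel in the sub sits at the camera's minimum output value,
            i.e. nothing is being clipped to the left at this gain/offset."""
            clipped = int(np.count_nonzero(calibration_sub <= camera_min))
            print(f"min pixel value: {calibration_sub.min()}, clipped pixels: {clipped}")
            return clipped == 0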
  14. It should create less noise, or rather - noise should be better looking. It should not affect levels of noise - just "shape" of it. You'll have to provide a bit more context. I have no idea when I said that, or what in relation to. Can you point me to exact sentence?
  15. Here is another example of full EQ type mount: https://www.thingiverse.com/thing:2636470
  16. I'm glad you decided to process your data after all. Nice image!
  17. I understand. Yes, OAG is a good option for deep sky imaging / long exposure. The ASI178 is going to be a good camera for planetary imaging but not for DSO imaging, as the FOV is going to be very small. With such a scope you want as large a sensor as possible, so the Canon 80D is a sensible option. You will also want to look at binning as part of your processing workflow when you start DSO imaging (2700mm FL with almost any pixel size is going to lead to oversampling, and you'll want to bin your pixels - in software if needed - to circumvent that).
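
     Software binning itself is a one-liner on the linear stacked data - a sketch of a simple average bin (image dimensions assumed divisible by the bin factor):

        import numpy as np

        def software_bin(img, factor=2):
            """Average factor x factor blocks of pixels; coarsens the sampling rate
            by the bin factor and improves per pixel SNR accordingly."""
            h, w = img.shape
            return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))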
  18. I'm confused here. Are we talking about planetary type imaging with Skymax? No OAG is needed for that. You don't need guiding. Mount is capable of holding planet in camera FOV even at very small ROI (320x200) - and that is all you need.
  19. I was thinking about a 3D printed star tracker as well - as a project. I still don't own a 3D printer, but it's on my shopping list (hence all the thinking about possible projects). Estimating the error of such a tracker is not an easy task. Its size, and how many parts you are going to print as opposed to using pre-made metallic parts (such as shafts and bearings), contribute to precision. Then there is the matter of the type of drive used. Are you going for a worm arrangement or a belt drive? Parts for a belt drive are much easier to print on a 3D printer.

     Let's see what sort of tracking precision you want in the first place. We start by setting some basic constraints - image sampling rate, max exposure length and acceptable star eccentricity. You mention 200mm FL, and let's take a very common pixel size - 4.5um. This gives ~4.6"/px as the sampling rate, so let's put our constraint at 4"/px. We want to be able to do 5min exposures and have our eccentricity less than, say, 30%? Well, if our FWHM is about 6", then we want our error in RA over 5 minutes to be less than 6" * 0.3 = 1.8". Now we hit our first obstacle - we need to calculate what sort of error in the mechanical design will give us such periodic error. We also need to know the precision of our timing electronics - we need to time the motor steps such that the total error over 5 minutes is less than 1.8" (or rather, combined with the periodic error).

     Let's say that we want to have 5 steps per second. What sort of reduction do we need between the motor and the RA shaft? If we use a 1.8 degree per step motor (200 steps per revolution) with 16 micro steps and 5 steps per second timing, then for a 15"/s sidereal rate we have 3"/step or 0.1875"/microstep. That is quite good resolution - the HEQ5 for example has 0.143617"/microstep. However, that requires quite a large reduction. There are 360 * 60 * 60 = 1296000 arc seconds in a full RA revolution, and there are 200 * 16 = 3200 micro steps per RA motor revolution. This means that one motor revolution corresponds to 3200 * 0.1875 = 600 arc seconds. The reduction ratio is therefore 1296000 : 600 = 2160 : 1. This can easily be achieved with one 60:1 reduction followed by one 36:1 reduction. A belt system does not seem too far-fetched for this, I would say.
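
     The same numbers as a small script, so they are easy to re-run with a different motor or step rate:

        steps_per_rev = 200      # 1.8 deg/step motor
        microsteps    = 16
        step_rate     = 5        # full steps per second
        sidereal_rate = 15.0     # arcsec per second (approx.)

        arcsec_per_step      = sidereal_rate / step_rate        # 3"/step
        arcsec_per_microstep = arcsec_per_step / microsteps     # 0.1875"/microstep

        arcsec_per_rev       = 360 * 60 * 60                    # 1,296,000" in a full RA turn
        arcsec_per_motor_rev = steps_per_rev * microsteps * arcsec_per_microstep  # 600"
        reduction            = arcsec_per_rev / arcsec_per_motor_rev              # 2160:1

        print(f'{arcsec_per_microstep}"/microstep, reduction {reduction:.0f}:1 (e.g. 60:1 followed by 36:1)')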
  20. The best you can sort of do is with modern Canon cameras that have WiFi access and mounts that have WiFi as well - such as the AZGti. There is no all-in-one solution that will work out of the box. Most people here use a small form factor PC mounted next to the scope and some sort of remote desktop / VNC solution to work remotely from the comfort of their living space (this includes both wireless and wired connections). The alternative is to build an observatory with a warm room.
  21. My only concern is that your claim can be misinterpreted as: "10h of integrated stack is worse than a single 30min sub". It might be that you did not intend to say such a thing, but it can easily be concluded from your posts, especially when you claim: and also: When in fact, what you are saying is: "To my eyes, the STF of the stack looks much worse than the STF of a single sub, and for that reason I'm going to create a split image to show what both look like under the same level of stretch, in order not to mislead people into thinking that a single sub can be better than the whole stack and that science is rubbish and all that ...."
  22. I think the main problem is that you expect STF to always perform the way you are used to. That might not be the case with automatic tools. You can also do a manual screen stretch - one that does not alter the data but shows what is there. With auto stretching you get what the algorithm "thinks" is a good stretch, but you will do a better job, especially if you have experience in stretching data.
  23. I agree with all that you've said - not much point in going back and forth saying the same thing. This is why I proposed you do the "split image" approach. I'm trying to explain to you that the level of contrast, and how much dark structures stand out, depends on the contrast and level of stretch. STF did not apply the same level of stretch to both images - it is an algorithm that examines noise and some other features and then calculates some level of stretch. That does not mean it is the best stretch to show what is in the image. Given that the stack and the single sub have different levels of noise, STF will stretch them differently. In order to compare them, one needs the same level of stretch - and possibly the simplest way to do that is to make a single image out of the two halves while still linear and then stretch that image. It is guaranteed to have the same level of stretch and will show the differences easily. Alternatively - post the stack and the single sub and I'll gladly combine them for you and upload a still-linear result, for you to stretch to your liking, as well as a demonstration stretch done by me. In the meantime, what do you think about this: which version do you prefer now?
  24. Not really, or rather - it depends. We have to be pragmatic about this and understand what is going on. I'll try a simple analogy and then connect it with a real world example in imaging. Consider this (I just realized that this may involve more mental gymnastics than I initially envisaged, since we are used to different temperature units - but try to convert to F what I say in C). The difference between half an hour and 10 hours is x20. That is sqrt(20) = ~x4.5 improvement in SNR. Now consider the difference between water that is at body temperature and water that is x4.5 colder - about ~8.2C. There will be a vast difference in sensation between the two. Splash 37C water on someone and they won't get too excited - maybe a bit annoyed because they are wet now. Do that with 8.2C water and it is likely that they will jump from their seat - it will feel rather cold. Drink a glass of water at one temperature and then the other and you will notice a very big difference. Swim in water at those temperatures and the difference will be huge. Now, consider 8.2C and 1.82C water, or maybe even 1.82C and 0.4C. Both are very cold, both are near freezing, and we don't ascribe a big difference to those temperatures - but they are still a x4.5 ratio in value. You know that with additional imaging time, SNR improves like the square root of the time spent. If you want to double SNR, you need to spend x4 the imaging time on the target. At some point you enter the region of diminishing returns, since we can't tell the difference between SNR 50 and SNR 100, but we can tell the difference between SNR 5 and SNR 10 - notice that both are a x2 improvement. If I do this, everyone will see the obvious differences between the two:

     - the stack is lacking hot pixels and cosmic ray artifacts
     - the stack is much, much smoother - all the grain is no longer there (SNR is indeed improved adequately for x20 longer exposure)
     - the images are stretched differently and the stack lacks contrast - it is much "flatter"

     Everything seems to be as one would expect for a high signal target and half an hour vs 10 hours of exposure (except for the difference in stretch).
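
     The square-root scaling behind this, as a tiny illustration (the SNR values are just examples of the diminishing-returns point):

        import math

        t_short, t_long = 0.5, 10.0                  # hours
        gain = math.sqrt(t_long / t_short)           # ~x4.5 SNR improvement

        for snr in (5, 50):
            print(f"SNR {snr} -> {snr * gain:.0f}")  # 5 -> ~22 is striking, 50 -> ~224 much less so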