Everything posted by vlaiv

  1. I'm not sure. The DSLR has larger pixels and the 72ED has about 3 times the focal length, right? That must be 5-6 times higher resolution - the light is spread over many more pixels. I would expect an image like the one above to be possible after stretching an 8s sub - but not without processing.
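To put rough numbers on that (a minimal sketch - the pixel sizes and focal lengths below are assumed for illustration, substitute your own; the formula itself is the standard image scale calculation):
```python
# Rough sampling-rate comparison (hypothetical values - substitute your own setup).
def sampling_rate(pixel_um, focal_length_mm):
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_length_mm

dslr  = sampling_rate(pixel_um=5.9, focal_length_mm=135)   # assumed DSLR + camera lens
scope = sampling_rate(pixel_um=2.9, focal_length_mm=420)   # assumed 72ED + small-pixel camera

print(f"DSLR: {dslr:.2f}\"/px")      # ~9.0"/px
print(f"72ED: {scope:.2f}\"/px")     # ~1.4"/px
print(f"Ratio: x{dslr / scope:.1f}") # in the ballpark of the 5-6x mentioned above
```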
  2. What about derotation? The way you use it, it is perfectly fine and necessary. If you take several videos, stack each of them and then stack the resulting images - then you need to derotate. AS!3 will stack each of those videos to the average feature position (or rather to the feature position in the reference frame it creates), which is usually midway through the video. Several videos will therefore produce images that are each slightly rotated and need to be derotated. Joining short videos into a single long video and stacking all at once can circumvent this, if the total recording is short enough for the aperture (or rather for the sampling rate - but I assume one is using critical sampling anyway - all the telescope can resolve).

The feature shift is not hard to calculate. We need the rotational period of the planet and the circumference at the equator to get the tangential speed. From that we can calculate the shift in a given time, and from the distance to Earth we can translate that shift into an angular size. AS!3 is capable of handling at least 1-2" of distortion, so if this angular shift over your chosen time frame is less than that - you are fine.

By the way - it is better to take one longer video (if you are within the above time limit) than several short ones - or at least to join the short ones. This is because of the lucky imaging approach. We never know if a spell of good seeing is going to be equally distributed over the whole imaging time - perhaps the better seeing will be towards the beginning or the end of the recording. If you stack short videos, one of those videos might contain only poor seeing - but if you stack the whole video, even at the same percentage, it will select the best subs overall, not just within a segment (the top n% of the whole population will be better than the top n% of each partition joined together).
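A quick worked example for Jupiter (a sketch using approximate values for the rotation period, equatorial radius and opposition distance):
```python
import math

# Approximate values for Jupiter (rounded)
ROTATION_PERIOD_S = 9.925 * 3600      # ~9h 55m rotation period
EQUATORIAL_RADIUS_KM = 71_492
DISTANCE_KM = 4.2 * 149.6e6           # ~4.2 AU near opposition

def feature_shift_arcsec(duration_s):
    """Angular shift of a feature near the center of the disk over duration_s."""
    tangential_speed = 2 * math.pi * EQUATORIAL_RADIUS_KM / ROTATION_PERIOD_S  # km/s
    shift_km = tangential_speed * duration_s
    return math.degrees(shift_km / DISTANCE_KM) * 3600

print(f"{feature_shift_arcsec(300):.2f}\" over 5 minutes")  # roughly 1.2" - within what AS!3 handles
```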
  3. No guiding is needed for planetary imaging - in fact, it is going to interfere with it. Just simple tracking should be enough to keep the planet in the FOV for the duration of 5 minutes.
  4. You can comfortably go with longer videos. AS!3 will compensate for any rotation that happens in that time frame, since any rotation that can be resolved with a 4" scope will be smaller than the seeing-induced distortion - and AS!3 can deal with those via alignment points. I'd say shoot a 5 minute video and don't worry about derotation. It is useful when you have longer videos (for example, 5 minutes on a C11 or C14 might already be too much, as they resolve much more than a 4" scope - at least x2.5 more) - or videos that are spaced apart. You won't need a smaller ROI either if you use 8-bit capture and 5ms, as the camera is capable of 200+ FPS at 8-bit and 800x600, but for the sake of storage, unless you also want to capture the moons, use 640x480 (a 4-5 minute video at those speeds can easily reach 20GB or more). 5ms is a good exposure time for most nights and I don't think you need shorter than that.
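The storage figure is easy to sanity-check (a sketch assuming uncompressed 8-bit capture):
```python
def capture_size_gb(width, height, fps, minutes, bytes_per_px=1):
    """Uncompressed video size for an 8-bit capture."""
    frames = fps * minutes * 60
    return width * height * bytes_per_px * frames / 1e9

print(f"800x600: {capture_size_gb(800, 600, 200, 5):.1f} GB")  # ~28.8 GB
print(f"640x480: {capture_size_gb(640, 480, 200, 5):.1f} GB")  # ~18.4 GB
```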
  5. If you are doing 5ms exposures - you will be limited to 200fps by the exposure. You can use a smaller ROI to save on file size - but you can't get higher FPS, because 1000ms / 5ms = 200fps. You can push FPS further only by reducing the exposure time - and I'm not sure you'd want to do that. The whole point of planetary imaging (lucky type) is to find that sweet spot exposure - one that freezes the seeing - and not to go lower than that, as SNR suffers.
  6. Check to see if you are doing 12-bit capture (16-bit format) instead of 10-bit (8-bit format), or maybe switch to a 320x240 ROI. Jupiter is small with that scope and you won't need a large ROI anyway (unless you are trying to capture the moons as well). The above says you should be able to do even 800x600 at 240fps - so it might be down to the USB connection and your computer (SSD?) if everything is set up OK and you still can't get high fps.
  7. You will need to add an IR/UV cut filter to that setup. ED doublet scopes, while having decent color correction, still don't fully correct chromatic aberration. In fact, almost no refractor does over an extended range of wavelengths (maybe some super APOs / quadruplet lenses do). For this reason you need to remove the IR and UV parts of the spectrum (above 700nm and below 400nm) that cause star bloating in your image, due to unfocused light at the far ends of the spectrum. You'll even have trouble finding proper focus like this, as stars will never be pinpoint. https://www.firstlightoptics.com/uv-ir-filters/astro-essentials-uvir-cut-filter-125-2.html (no need to spend much on a decent IR/UV filter - these are pretty much standard). As far as M31 in your frame goes - 8.5s is too short an exposure to be able to tell if there is something there without a very strong stretch (and even then, faint targets won't show up in such a short exposure). The fact that the light is not properly focused only makes things worse. First sort out the stars with this filter so you can focus properly, and then try again to see if you are on target.
  8. As long as you don't saturate - raising gain won't hurt the SNR - in fact, it can be beneficial. Raising gain has the benefit of lowering the read noise (almost always, but there are a few fringe cases where this is not true). Other than that, it has no effect on SNR (unless you saturate, of course). What was your FPS? That is also an important metric. Use ROI and a USB3.0 connection. Make sure you have the USB turbo thingy turned on in SharpCap. With ~5ms you should be able to get 200fps. In a 4-5 minute capture that will amount to 5 x 60 x 200 = ~60000 frames. Stacking only a fraction of that (like the top 5-6% of frames) will give you a stack of ~3600 - which will give you a ~x60 SNR improvement. You say you stacked 35% of 2000 subs, right? What was your FPS and how long did you record for? With proper settings, even in one minute you should get ~10000 frames. 2000 is too low a count.
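The arithmetic behind those numbers (a quick sketch, assuming stack SNR improves as the square root of the number of stacked frames):
```python
import math

fps = 200             # ~5ms exposure with ROI and USB3
minutes = 5
keep_fraction = 0.06  # stack the top ~6% of frames

total_frames = fps * minutes * 60              # 60000
stacked = int(total_frames * keep_fraction)    # 3600
print(f"{total_frames} frames captured, {stacked} stacked")
print(f"SNR improvement ~ x{math.sqrt(stacked):.0f}")  # ~x60
```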
  9. I would not invest in a new barlow in that case - just keep using the one you have as is.
  10. With a scope of this size you can comfortably go to 4-5 minutes. You might be slightly oversampled here - but you can change that if you have a way to change the barlow-to-camera distance. Shortening that distance will reduce the barlow magnification factor - to, say, x1.5-1.8 instead of x2. Try to use higher gain and shorter exposures. Set exposure to 5ms. Maybe add a yellow #8 filter next to the UV/IR cut filter, or something like a Baader contrast booster. It will help with color, but it will also remove out-of-focus light that is lowering contrast and possibly causing issues with stacking. As far as the alignment point on the moon goes - that is good advice, and to be clear - in AS!3, click once on Jupiter's moon to add an alignment point to it. That will make it round and nice instead of smeared all over. Also, try experimenting with different alignment point sizes and with the number of stacked subs. Now that I look at the list of your software - I don't see AutoStakkert in it? Did you stack with Registax? Switch to AutoStakkert for stacking - you'll get better results. Do the wavelet sharpening in Registax but the stacking itself in AS!3.
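On the barlow-to-camera distance: a common thin-lens approximation is magnification = 1 + distance / barlow focal length. The barlow focal length below is an assumed value - check the actual figure for your barlow:
```python
def barlow_magnification(distance_mm, barlow_focal_length_mm):
    """Thin-lens estimate of effective barlow magnification."""
    return 1 + distance_mm / barlow_focal_length_mm

F_BARLOW = 100  # assumed; a nominal x2 barlow reaching x2 at ~100mm spacing

for d in (60, 80, 100):
    print(f"{d}mm spacing -> x{barlow_magnification(d, F_BARLOW):.1f}")
# 60mm -> x1.6, 80mm -> x1.8, 100mm -> x2.0
```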
  11. There seems to be some misunderstanding of how seeing works and how we correct for it. I'll try to explain the different aspects of seeing, how they function, what sort of impact they have on the image, and how we correct for them in different stages of processing.

The first thing to understand is that every point of the image is affected by seeing in a different way. There is no single PSF like in a long exposure image, where more or less the whole image is blurred with the same PSF, which is the average of the seeing over the long exposure. Each point in a planetary image is affected by a different PSF. The following diagram shows exactly how those PSFs are formed: each point of the image is at a slightly different angle to the optical axis. In the image above I exaggerated the angles and drew 3 "pillars" representing 3 points in the image. Each of those pillars is as wide as our aperture - so let's say 30cm across - and as tall as the atmosphere - 10-15km or more (as far up as the air density is significant enough to distort the wavefront - I'm not sure how much that is exactly). So we have one of these skinny "tubes" for every pixel in our image. For very close points / pixels, parts of these tubes overlap. This means we won't have radically different wavefront aberration for nearby points - it will sort of "morph" between close points - but points further apart will have different wavefront aberrations.

The wavefront arrives as a straight line at our atmosphere (red line in the above image) - but due to different layers / densities of air it becomes distorted. It is a very similar effect to, say, a telescope mirror not being a perfect curve but having some roughness to it. From left to right I made 3 different levels of roughness in the red wavefronts. Once this wavefront is focused to a single point and constructive / destructive interference does its thing - we get the PSF at that point. For a perfect wavefront the PSF is the Airy disk. For a slightly deformed wavefront we get a slightly deformed Airy disk as the PSF. As the wavefront is deformed more - so is the Airy disk. For a very deformed wavefront we get a complete mess.

Now - all we have talked about so far is the "stationary" case. We have shown how each point (not the image, but each point of the image) is distorted differently. When we blur an image, we distort each point with the same PSF - we don't use a different one for each pixel. With seeing, each point is distorted with a different PSF. We can't correct for these yet - there is no sharpening that will fix the above.

The first thing that we can correct is geometric distortion. A wavefront can be decomposed into components (similarly to how a vector can be written as X axis, Y axis and Z axis components) - it can be made up from a sum of standard curves. These curves are represented by something we call Zernike polynomials (over the unit circle - but that is not important). What is important is that there is a "zero" and a "first" order of such polynomials - similarly to regular polynomials. The zero order represents "DC offset" and is not important for us, but the first order, which is slope for regular polynomials, is here the wavefront tilt. In the image above we have the red line representing the profile of our wavefront (which is actually a 2d surface, but we look at just a cross section here for simplicity). The orange line is the general slope of this wavefront - the tilt. This tilt is responsible for moving our PSF away from its true center - for displacing it. Each point in our image will be somewhat displaced from its true position.
Close-by points will be displaced similarly (but not identically) - because of the overlap of those wavefront funnels that we mentioned at the start (the same pieces of atmosphere affect them). For the stationary case, this creates geometric distortion. A regular grid under such distortion would look a bit like this. Remember - this is only the tilt component of the wavefront aberration. There are other components that more or less distort the Airy disk. Tilt is only how much the Airy disk is displaced from its true position (that is why stars jump around in poor seeing rather than staying still). This sort of seeing error is corrected by alignment points in stacking software. However - in order for that to happen - the image must be "stationary" when it is recorded, and we haven't yet discussed how it changes with time.

If the atmosphere were completely still / frozen in time - with all its higher and lower density regions - we would have just one frame. It would be affected by seeing - it would be distorted and it would be blurred differently in different parts (remember - every point in the image has its own PSF - some are nice or not so nice Airy disks and others are complete distortions of the Airy disk). The atmosphere changes with time, and we need to understand how it changes with time in order to get to the other types of blur.

Here we have a diagram of how the atmosphere changes with time. Vertical black lines represent our funnel - one point in the image. The different colored rectangles represent different layers of the atmosphere. Some might have a smooth gradient, others can contain turbulent air. Here we have an image representing laminar and turbulent air flow. Laminar is the smooth gradient type of air, while turbulent air is full of eddies and swirls. Each of these layers can move at some speed left or right with respect to our funnel. When we have a turbulent layer and it moves at some speed - such that the eddies and swirls that are currently in our funnel change significantly - so does our PSF change significantly.

Say we have a wind speed of 4 m/s and we have 30cm of aperture - it takes 0.3m / 4m/s = 0.075s, or 75ms, to completely "change" the contents of our funnel. If we want to change the contents of our funnel by just 10%, it will take 1/10th of that, or 7.5ms. In 75ms, swirls and disturbances will move one whole funnel width (blue is the funnel at the start and red is the funnel at the end of this time) - so the contents of the disturbance in the funnel will be completely different - the PSF will change 100%. But in 7.5ms the contents inside the funnel will move very little - and almost the same deformation will remain inside the funnel. The PSF will change only slightly.

Now - this change of PSF with time depends on the speed at which the layer is moving and on how distorted / turbulent the layer is. If a layer is mostly laminar in nature - without any disturbances - then it does not matter if it is moving fast or slow; it won't make much difference and its contribution will mostly be a good Airy disk. We call this steady seeing (even if some layers are moving at some speed). This usually happens when air is moving over large flat terrain without many heat sources to cause turbulent swirls. Deserts and oceans are prime candidates for this type of laminar motion (hint: Australia?). The problem is when we have turbulent air that moves at some speed. This defines the already mentioned coherence time. Why is the jet stream so detrimental for seeing? First because it is turbulent, and second because it can reach speeds of up to 150km/h. That is 35-40m/s, or x10 faster than what we just discussed.
For a 300mm telescope this means that the whole content of the funnel will change in about 7.5ms - or if you want minimal change, you need 10% of that, or less than a 1ms exposure, to freeze this change. So that is the second type of blur that comes from seeing - one created by our exposure and the changing PSF. It is a motion blur type, and we need to make sure to avoid it by using a short enough exposure for our conditions. Some sites will have laminar air flow through much of the atmosphere and good seeing - and they don't have to worry as much about coherence time, as the PSF is mostly good and not changing much at all; but if there is at least one turbulent layer in the atmosphere, it is very important to figure out its speed and the resulting coherence time given our aperture size.

Ok, so we now need to consider the third type of blur. We have said that every part of the image is affected differently - each point has a different PSF. Not only that - the PSF changes for each point from frame to frame. We could say that the PSF changes in each of three dimensions - the two dimensions of the image, X and Y, and also the dimension of time, T. In each of these dimensions, over very short distances, it changes only slightly. Freezing the seeing means selecting a short enough exposure so that the PSF is not changing much in the T direction.

Anyway - since each of our captured frames will have a different PSF for each point - when we stack those frames (after accounting for the tilt component / geometric distortion) - each point will actually be affected by the average of the PSFs over time. This bit is important. Just a moment ago we said that we want to freeze the frame and avoid averaging of the PSF - but now we average them - so what's the catch? There are two parts to this - the first is correcting for tilt / geometric distortion and the second is frame selection. In fact AS!3 is so clever that it can detect which parts of a single frame are sharp and which are not (remember - each point, different PSF) and use only the sharp parts of frames in the stack. You can select whether you want the quality estimator to work on the whole frame (global) or to estimate quality around each alignment point (which is the default and the better option, as it allows the use of parts of a sub).

In any case - when we discard these two major sources of blur - the motion blur of geometric distortion and the strong blur of heavy PSF distortion - we are left with only light distortion of the Airy disk in our pixels. When these average out we get a nice Gaussian type of blur. This is due to a mathematical principle called the Central limit theorem: https://en.wikipedia.org/wiki/Central_limit_theorem In other words - even if individual PSFs are random in nature, their average will tend to a nice Gaussian shape, and we know how to sharpen up Gaussian type blur (or other blurs that are nice and symmetric in nature, like blur from the aperture, and ultimately their combination - which is what affects our planetary image in the end).
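The funnel-crossing numbers above (75ms / 7.5ms at 4 m/s, under 1ms at jet-stream speeds) reduce to a one-line calculation (a sketch; the wind speeds are the examples used in the post):
```python
def crossing_time_ms(aperture_m, wind_speed_ms):
    """Time for a turbulent layer to move one full aperture width."""
    return aperture_m / wind_speed_ms * 1000

for wind in (4, 40):   # gentle layer vs jet-stream speed
    full = crossing_time_ms(0.3, wind)
    print(f"{wind} m/s: full change in {full:.1f}ms, "
          f"~10% change (freeze limit) in {full / 10:.2f}ms")
# 4 m/s  -> 75ms full / 7.5ms freeze
# 40 m/s -> 7.5ms full / 0.75ms freeze
```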
  12. Just a few tweaks in IrfanView (of all image processing software). I changed gamma to 2.2 (that is the same as slightly bumping the curves - most people don't do it, but it should be done with raw/linear data), reduced brightness and increased contrast (or rather set the black and white points appropriately), increased saturation to get a nice brown / orange for the GRS, and did one round of stock sharpening in that software.
  13. Processing is a pretty big part of a good image, but so is the data. You have really good data here; even more detail can be teased out if the color management is tweaked a bit: (this is the last image tweaked for the correct sRGB gamma setting, contrast / brightness and things like that).
  14. 13ms is too long. There is something called coherence time related to the seeing - it is the period over which the atmospheric distortion is almost unchanging / frozen. Look at this paper: https://articles.adsabs.harvard.edu//full/1996PASP..108..456D/0000456.000.html This is the important graph: it shows how "similar" (correlated) the wavefronts are at time 0 and time t0 (t0 being the atmospheric coherence time). In other words - if the coherence time is 0.5ms and you sample at 10ms, you get 0.1 similarity between the start and end wavefronts - substantially changed. But if the coherence time is 10ms and you sample at, say, 5ms - you get a correlation loss factor close to 1 (good correlation). And this is the interesting part: the atmosphere changes on very short time scales, and if you want to avoid motion blur you really need short exposures - like 5ms or less.
  15. Won't work well, I'm afraid. 13ms is too long and would defeat the purpose - but it does show that one needs a longer exposure when using F/24.
  16. In order to get comparable images, here is what you do. Resize the smaller image to x1.636363 its size, or 163.63%, and then crop to the central 600px. This will give you both the same planet size and the same image size (in pixels).
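With Pillow this is a resize followed by a center crop (a sketch - the file names are made up, and the 600px crop matches the example above):
```python
from PIL import Image

def match_scale_and_crop(path, scale, crop_size=600):
    """Rescale an image and crop out the central crop_size x crop_size region."""
    img = Image.open(path)
    w, h = img.size
    img = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    left = (img.width - crop_size) // 2
    top = (img.height - crop_size) // 2
    return img.crop((left, top, left + crop_size, top + crop_size))

# Hypothetical file names - bring the smaller (F/11) planet up to the F/18 scale.
match_scale_and_crop("smaller_planet.png", 18 / 11).save("smaller_rescaled.png")
```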
  17. I would not mind doing it if someone shoots the movie in mono with a 14" scope at twice the critical sampling rate with a 2ms exposure length.
  18. Ah, ok, I did read that you tried to do it - but I figured that you left the images as is after not being able to make them the same size.
  19. Here is a protocol that I would suggest for anyone wanting to compare two different sampling rates; it makes things mostly equal. It requires a mono camera and some special processing. We shoot a single video, but we create two new videos out of it to be processed. The video is shot at the exposure length suitable for the lower sampling rate, but at the higher sampling rate (using the barlow).

The first video is created by binning each frame 2x2 and then taking every 4th frame (we discard the other 3). This gives us the lower sampling rate at the exposure length of the lower sampling rate, with 1/4 of all frames, equally spread over the capture period.

The second video is created by summing each group of 4 frames to make an output frame (stacking without alignment). This gives us the higher sampling rate at the exposure length of the higher sampling rate, with 1/4 of all frames, equally spread over the capture period.

We have thus managed to create two videos - each at its appropriate sampling rate, each with a realistic exposure length to reach the same signal level, each with the same number of subs, in the same seeing / same conditions. Drawbacks: the recording will have x2 the read noise it would usually have, and we are limited to a x2 difference in sampling rate (we could not do the x1.636363 example above).

The next part has to deal with identical processing. This is tricky, as wavelets are applied based on feature size, so they must be set differently. Thus we must create two images to be processed, like this: we stack both videos with the same stacking parameters (same percentage of subs, same number of alignment points and the same relative size of alignment points - which means the lower sampling rate will use half-size alignment points) to get two planetary images. We then create two images for processing by taking the smaller planet image, enlarging it x2 and putting it side by side with the larger planet to form a single image. We similarly take the larger planet, reduce its size to half, and put it next to the small planet. This way we can test processing at both sizes. We could even create a third image where we simply put them side by side without resizing and process that. My only concern is that there could be bias in the processing and the parameters selected, so that one of the two looks better.
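A sketch of how the two derived videos could be built from the raw frames (using numpy; video I/O is left out and the capture is assumed to be a mono array of shape (frames, height, width) with even dimensions):
```python
import numpy as np

def make_low_sampling_video(frames):
    """Bin each frame 2x2 (sum of 4 pixels), then keep every 4th frame."""
    n, h, w = frames.shape
    binned = frames.reshape(n, h // 2, 2, w // 2, 2).sum(axis=(2, 4))
    return binned[::4]

def make_high_sampling_video(frames):
    """Sum each non-overlapping group of 4 consecutive frames (no alignment)."""
    n, h, w = frames.shape
    n = n - n % 4
    return frames[:n].reshape(n // 4, 4, h, w).sum(axis=1)

# Tiny synthetic example standing in for the real capture
frames = np.random.randint(0, 255, (400, 60, 80)).astype(np.uint16)
low = make_low_sampling_video(frames)    # shape (100, 30, 40) - lower sampling rate
high = make_high_sampling_video(frames)  # shape (100, 60, 80) - higher sampling rate
```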
  20. I'm always surprised that people think they can produce detail that is not there by oversampling.
  21. Here is what I find interesting in this comparison. The first two images are supposed to be F/11 vs F/18, right? This means that Jupiter in one image should have an 18/11 = x1.6363... larger diameter. I've measured the following diameters in the two images: if the smaller Jupiter has a 470px diameter, the larger should be 470 x 1.63636... = ~769px, and not 524. The difference between the two images is much less than F/11 vs F/18. The noise difference is very large indeed, and I wonder if the images were manipulated in some other way? If we check the native image, at F/11 it should have a sampling rate of 0.153"/px, and that would make a 46.2" diameter Jupiter ~301px across. Maybe drizzle x1.5 was used for that image? It would be a much better fit if drizzle x1.5 was used for F/11 and F/18 was integrated normally (accounting for the small FL change due to the primary-secondary distance change when focusing).
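The quick arithmetic behind those figures (a sketch):
```python
measured_small, measured_large = 470, 524      # Jupiter diameters measured in the two images (px)

print(f"Expected at F/18: {measured_small * 18 / 11:.0f}px, measured: {measured_large}px")

jupiter_arcsec = 46.2                          # apparent diameter of Jupiter
native_f11_px = jupiter_arcsec / 0.153         # ~302px at 0.153"/px
print(f"Native F/11: {native_f11_px:.0f}px, with x1.5 drizzle: {native_f11_px * 1.5:.0f}px")
```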
  22. If I'm reading this correctly, you used a 2ms exposure and higher gain on the ~F/14 image, and a 4ms exposure on the ~F/26 image?
  23. Can you explain the mechanism for this? I understand how all obstructed telescopes soften the image over the whole FOV - it has to do with the MTF of the telescope - but I'm failing to see why the center of the field would be affected differently?
  24. Did you use a barlow? I once experienced a very strange effect when using a barlow. There was a mushy / soft spot in the center of the view with sort of "rays" emanating from it - much like those "electricity/lightning" generating balls. It was definitely due to that barlow, as I've seen it in different scopes with different eyepieces. To this day I haven't figured out what might have been the cause of it.