Everything posted by vlaiv

  1. Not sure I follow? Derotation is still needed if a long video is captured or if one wants to combine (stack) images produced from several short videos. It is done to remove differences in feature position, but removing differences in feature position is what stacking software does by default when using alignment points, so for short videos of up to 3-4 minutes there is no need for derotation. Yesterday I saw a post by someone showing a 3 minute derotated video vs a regular AS!3 stack of that video without derotation - and sure enough, there is no difference between the two. I'll try to find it and will post the link
  2. The actual calculation is very straightforward and unambiguous, and it goes like this: F/ratio = pixel_size * 2 / wavelength_of_light (with pixel size and wavelength in the same units, e.g. microns), and it holds true for monochromatic light. The resolution of a telescope depends on the wavelength of light. It is highest in the blue region (shortest wavelengths) and lowest in the red region (longest wavelengths).
     When you image in color - you have a choice. You can select what wavelength to use as your baseline. If you want to be absolutely certain you caught all there is - then you need to use 400nm (or 380nm - depending on your UV/IR cut filter; officially human vision starts at 380nm or even a bit lower, but most UV/IR cut filters cover the 400-700nm range). In that case the formula is as follows: F/ratio = pixel_size * 2 / 0.4 = pixel_size * 5
     I personally recommend going with 500nm as the wavelength even for color imaging. There are a few reasons for this. First - that is the peak of luminance. Second - most refractors are best corrected around the 500nm line (because peak luminance sensitivity is there). Third - the atmosphere bends different wavelengths of light by different amounts. This is refraction where the angle depends on wavelength (that is why we have rainbows). Blue light (short wavelengths) is bent the most and red light (long wavelengths) the least. This means that seeing will affect blue light, or rather wavelengths around 400nm, the most, and that detail will be the most difficult to recover - so you don't need to go all crazy with resolution and try to capture every last bit of something that is going to be blurred anyway. Remember - the more you increase the F/ratio, the more you spread the light and the less signal and SNR you have. That is why I recommend 500nm as the wavelength in the above calculation for RGB imaging (for narrowband one should use the actual wavelength, like when using a Ha or OIII filter to minimize seeing when imaging the Moon). If you put 500nm into the equation you get: F/ratio = pixel_size * 2 / 0.5 = pixel_size * 4
     Both are correct, and if you are pedantic about it - pixel_size * 5 is "more correct", but you'll see almost no difference in captured detail between pixel_size * 5 and pixel_size * 4, and pixel_size * 4 will give you better SNR (which lets you sharpen more and bring out the detail that you did manage to capture in the first place). By the way - this holds true for any type of scope, 8" newtonian / 4" refractor or C14. It has to do with the size of the aperture and the physics of light (the wave nature of electromagnetic radiation). A quick sketch of the calculation is below.
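     Here is a minimal Python sketch of the above formula (the function name and example pixel sizes are mine, just for illustration):

     ```python
     def optimal_f_ratio(pixel_size_um, wavelength_um=0.5):
         """Critical-sampling F/ratio: F = 2 * pixel_size / wavelength (same units)."""
         return 2.0 * pixel_size_um / wavelength_um

     print(optimal_f_ratio(2.9, 0.5))   # 11.6 -> the "pixel_size * 4" rule
     print(optimal_f_ratio(2.9, 0.4))   # 14.5 -> the "pixel_size * 5" rule
     ```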
  3. It is a fairly simple calculation really - the trick is to just imagine a single point on Jupiter's equator. 440000Km is the circumference and Jupiter makes one revolution in ~10 hours, so any point on the equator travels 440000Km / 10h, which gives 12.22Km/s when we do the math. From our perspective, in one second that point will move 12.22Km - imagine a tangent to Jupiter's equator. When the diameter is large compared to the motion - it is as if the point is moving perpendicular to us. Here is a little diagram (not to scale because of the distances and sizes involved): The arrow shows how much the point will move in one second. This is correct only if the movement is very small compared to the diameter / circumference - otherwise the curvature needs to be taken into account - but 12Km is very small compared to the size of Jupiter, so the distance along a straight line and along the surface are almost the same.
     It is then just simple trigonometry: tan(angle) = 12.222...Km / distance to Jupiter => angle = arctan(12.222... / 591,000,000) (just remember to convert to degrees if your calculator is using radians - the standard for trig functions is radians if not otherwise indicated). By the way - if you search google for "arctan(12.22 / 591,000,000)" it will give you the right answer in radians - and even better, search for it in arc seconds: That is just brilliant. By the way - all the numbers come from google - I searched for the circumference of Jupiter, the rotation period and the current distance to Earth (and I rounded the results). The same calculation as a short sketch is below.
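     A minimal Python version of the same arithmetic (variable names are mine; the rounded inputs are the ones from the post):

     ```python
     import math

     circumference_km = 440_000          # Jupiter's equatorial circumference (rounded)
     rotation_period_s = 10 * 3600       # ~10 hour rotation
     distance_km = 591_000_000           # current Earth-Jupiter distance (rounded)

     speed_km_s = circumference_km / rotation_period_s      # ~12.22 km/s
     angle_rad = math.atan(speed_km_s / distance_km)        # small-angle tangent
     angle_arcsec = math.degrees(angle_rad) * 3600

     print(speed_km_s, angle_arcsec)
     # roughly 12.22 km/s and ~0.00427 arcsec of apparent motion per second of time
     ```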
  4. Don't confuse a theory being right or wrong with your lack of understanding of it or of its domain of application. If you find that a theory is "wrong", it can be one of a few things:
     - you are applying the theory in the wrong way (lack proper understanding of it)
     - you are applying the theory outside of its domain of application (you are applying the theory correctly but to the wrong case or maybe with the wrong initial conditions)
     - you found genuine evidence that the theory is wrong. That is a major thing and cause for "celebration" - you have made a genuine advancement in science, but that does not happen very often. When it happens you should definitely publish a paper about it.
     All of the above that I've written is easily verifiable / testable. When stacking, the only noise contribution that depends on the number of stacked subs rather than total imaging time is read noise. If one wants to use shorter subs - one will benefit from lower read noise, as total SNR will be better with a lower read noise camera. If we had a camera with zero read noise - it would not matter if we took 10000 x 10ms or 20000 x 5ms or 100000 x 1ms exposures as far as SNR goes (if we stack the same percentage of frames). QE is rather self explanatory in terms of benefit - more signal - better SNR. Again - something easily testable. Using a high FPS camera is just common sense really, nothing to test there. Higher FPS - more frames captured; with the same percentage of good frames - more frames to stack, better SNR. A small sketch of the read noise point follows below.
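     Here is a minimal sketch of the read noise argument, using a toy SNR model (shot noise from the target plus read noise per sub); the signal rate, total time and read noise figure are made-up illustrative numbers, not from any particular camera:

     ```python
     import math

     def stack_snr(signal_rate_e, total_time_s, sub_length_s, read_noise_e):
         """Toy model: SNR of a stack covering total_time_s, split into equal subs."""
         n_subs = total_time_s / sub_length_s
         signal = signal_rate_e * total_time_s
         noise = math.sqrt(signal + n_subs * read_noise_e ** 2)
         return signal / noise

     # Same total time, different sub lengths - shorter subs only hurt via read noise:
     for sub in (0.010, 0.005, 0.001):                 # 10 ms, 5 ms, 1 ms
         print(sub, stack_snr(1000, 100, sub, read_noise_e=1.0))
     # With read_noise_e = 0.0 all three sub lengths give identical SNR.
     ```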
  5. No, I was referring to the focal length. I was not clear about it. You are quite right in your calculation - it is the ratio of focal length to aperture size - that is why I used 56 and F/2.4 in my example: F/ratio = focal_length / aperture, so 2.4 = 56 / X, which gives X = 56 / 2.4 = 23.33mm. Simple as that, but yes, you can use "stops" to help with the calculation. Yes, you can "chain" step down rings until you hit the desired aperture (it does not need to be exactly F/2.4, I think F/2.53 is fine as well), or if you have a 3d printer or know someone that has one - you end up with something like this: I haven't had luck with electronic lenses in that regard. On all of mine the focusing ring is actually just an electronic control/sensor for the focusing motor. No power - no focusing. For that reason I only look at fully manual lenses to use with astronomy cameras. A quick sketch of the aperture calculation is below.
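     The same calculation as a tiny Python sketch (function name is mine, just for illustration):

     ```python
     def aperture_diameter_mm(focal_length_mm, f_ratio):
         """Aperture = focal length / F-ratio."""
         return focal_length_mm / f_ratio

     print(aperture_diameter_mm(56, 2.4))   # ~23.33 mm hole for a 56 mm lens at F/2.4
     ```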
  6. A collimation check and a star test are performed in the same way, but what you see is "read" for different things. Say you have in/out focus images of a star looking like this: For collimation - you want these rings to be concentric. Remember - you want to place the star in the dead center of a high power eyepiece. When you adjust the primary mirror - the star image will shift and you need to move the scope to bring it back to center. If you have issues with collimation - the image will look like this:
     However, if you want to check for spherical aberration - you don't look at the ring geometry at all. You look at something else entirely. You look for this sort of pattern: inside and outside focus images differing in where the brightness is concentrated. One will have brightness concentrated in the center while the other will have it in the outer rings. In fact - here is a "cheat sheet" for that: The actual star image will be some combination of the above - with different "terms" having different levels of contribution (some will be there and some maybe won't - in perfect optics all are almost zero).
     For collimation - you only care about the second set of images, or rather you want your star not to look like that - not to have offset rings; you want the rings to be concentric. When you place an artificial star far enough away not to be resolved but still close enough to introduce spherical aberration - then the brightness of the in and out of focus images will differ even if you have a perfect scope - because the artificial star being close introduces spherical aberration to the wavefront that has nothing to do with the optics - but that won't mess up the concentricity of your rings, and if they are concentric - you have good collimation.
  7. Just to make the answer complete, here is how you calculate the distance at which you won't resolve the artificial star. Use this to get the airy disk diameter: https://www.wilmslowastro.com/software/formulae.htm#Airy So the diameter of the airy disk is ~1". We can then take 1/2 or even 1/3 of that size and that is the needed angular size of our artificial star. Say you have a 50um artificial star. At what distance does it need to be in order to present itself as 0.5" wide? Here you use another calculator: https://www.1728.org/angsize.htm You solve for distance and use arc seconds as the angular measure. Convert all distances to either meters or millimeters. 50um is 0.05mm. The answer is ~20626mm, or about 20.6 meters away (see the sketch below). Smaller scopes have larger airy disks, which means you can put the artificial star closer (just mind the out-focus travel). Btw, close focus introduces spherical aberration for optics corrected for infinity. If you want to star test a telescope with an artificial star, then things get much more complicated :D. You need to calculate the distance at which the added spherical is minimal, or if you do wavefront analysis - you can subtract the spherical due to close focus from the Zernike polynomials.
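     Instead of the online calculator, the same number can be reproduced with a couple of lines of Python (function name is mine):

     ```python
     import math

     def artificial_star_distance_mm(star_size_mm, target_angle_arcsec):
         """Distance at which a pinhole of the given size subtends the given angle."""
         angle_rad = math.radians(target_angle_arcsec / 3600.0)
         return star_size_mm / math.tan(angle_rad)

     print(artificial_star_distance_mm(0.05, 0.5))   # ~20626 mm, i.e. ~20.6 m
     ```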
  8. Any lens that has a variable aperture iris (and that is 99.99% of lenses out there) will give you diffraction spikes. There is a very simple way to avoid this, however. It is a simple DIY project involving some cardboard (or something more fancy - like a 3d print). Lenses have filter threads, and there are step down rings, or even a very cheap filter for the given thread can be used - we want just the metal part that screws into the thread, not the optics of the filter. We then take a piece of cardboard and make an aperture mask of the wanted aperture. We make it circular.
     Say we have a 56mm lens and we want to use it at F/2.4 (we concluded that F/2.4 gives us good sharpness for our sensor). We then make a cardboard piece with a hole in the center that is 56/2.4 = 23.33mm in diameter. We glue or otherwise secure the cardboard in that filter ring so we can screw on our fixed circular iris whenever we want to shoot at F/2.4. We can make several of these for different F/ratios and just use the one that is wanted for a given application (narrowband might need a slower aperture due to NB filters and so on). Just a final note - we open up the actual iris on the lens all the way so it does not interfere with our new iris.
  9. The distance for focus does not depend on the size of the artificial star. You can focus at any distance, provided that you have enough out focus travel. Here is a simple formula to get you started: 1/focal_length = 1/star_distance + 1/new_focus_position. The focal length of your scope is ~1200mm, and say you want to focus on an artificial star that is 10 meters away, so 10000mm. What is the out focus (over normal focus at infinity) that is needed to reach focus? 1/1200 = 1/10000 + 1/new_focus_position => 1/new_focus_position = 1/1200 - 1/10000 = (8.33333 - 1)/10000 = 7.33333/10000 => new_focus_position = 10000/7.3333 = ~1363.6mm. So you need to move from 1200 to ~1363 - the focuser needs to be able to travel an additional ~163mm from the normal focus position at infinity. You would probably need a 15cm extension to reach focus at 10 meters. If you want to know the distance at which you won't resolve the artificial star - that is another matter altogether. You need to calculate the airy disk size for your aperture and see at what distance the artificial star (here size does matter) subtends half of the airy disk size. A quick sketch of the focus calculation is below.
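     The thin-lens arithmetic above as a short Python sketch (function name is mine, numbers are the ones from the post):

     ```python
     def new_focus_position_mm(focal_length_mm, star_distance_mm):
         """Thin-lens relation 1/f = 1/object + 1/image, solved for the image side."""
         return 1.0 / (1.0 / focal_length_mm - 1.0 / star_distance_mm)

     pos = new_focus_position_mm(1200, 10_000)
     print(pos, pos - 1200)   # ~1363.6 mm, i.e. ~163.6 mm of extra out-focus travel
     ```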
  10. Well, actually no. One can go up to 3-4 minutes with the given setup. I'll explain. Jupiter is ~440000Km in circumference and rotates once every ~10 hours. This means that the point on the equator closest to us moves at ~12.22Km/s. Given that the current distance to Jupiter is ~591,000,000Km, this motion is 0.0042656"/s. In 30s this point will travel ~0.12". The above setup is at an optimal sampling rate of ~0.258"/px, so the fastest moving point will only move about half a pixel in 30 seconds.
     If everything were perfect, then yes, 30s would be a sensible limit to prevent motion blur. However, we have the influence of the atmosphere, and due to seeing, different parts of the image "jump around" by more than a fraction of an arc second. If seeing is say 1.5" on a given night - that really means that the distribution of point positions has a FWHM of 1.5", or a standard deviation of 0.637". That is your average deviation of a point's position from its true position over the course of a few seconds. On average, seeing creates about x5 larger motion of points in the image from frame to frame than rotation does in 30 seconds. Look at this gif from the Wikipedia article on seeing - look how much motion/distortion there is from frame to frame.
     Stacking software knows how to deal with this - it uses alignment points and creates a "counter distortion" (it can't undo blur, but it can create an opposite distortion, based on a feature's average position over time - that is how the reference frame is created; alignment point deviations are averaged over a period of time and that is taken to be the reference position for that alignment point). It can correct for a feature being out of place by several pixels (an alignment point size of 25px is often used, so the max displacement is 12px, but in reality it is more like 7-8px max). Given this, software alignment of features on the order of 3-4px is not a problem and is easily handled by the stacking software, so from the above calculations we can see that there is really no need to derotate video of up to 3-4 minutes, as stacking can handle that much rotation. Here are the numbers as a quick sketch:
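     A minimal Python sketch comparing rotation drift with seeing motion for the numbers in this post (variable names are mine):

     ```python
     motion_arcsec_per_s = 0.0042656     # fastest point on Jupiter's disc (see above)
     sampling_arcsec_per_px = 0.258      # the setup discussed here
     seeing_fwhm_arcsec = 1.5

     for capture_s in (30, 120, 240):
         drift_px = motion_arcsec_per_s * capture_s / sampling_arcsec_per_px
         print(f"{capture_s:>3} s capture -> ~{drift_px:.1f} px of rotation drift")
     # ~0.5 px at 30 s, ~2 px at 2 min, ~4 px at 4 min

     # Seeing displacement for comparison: sigma = FWHM / 2.355
     print("seeing sigma:", seeing_fwhm_arcsec / 2.355 / sampling_arcsec_per_px, "px")
     ```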
  11. An RC is not the best tool for this as it's got a massive central obstruction, so any result will be worse than is possible with the given aperture size. Other than that - you are using way too much focal length for your camera. The ideal F/ratio depends on pixel size, and since you are using a 2.9um camera, you really need to be at F/11.6, or rounding up, F/12. With the current setup you are at twice that (some people use it like that, but you need excellent seeing so you can use longer exposures). The next thing to check is exposure length. It needs to be really short. You want to freeze the seeing, so limit yourself to 5-6ms exposure length in most cases. In fact, check these two recent threads:
  12. When you stack - do several stacks at once (you can do that by entering multiple percentages in AS!3) - like the top 5%, 10%, 20%, 40% of frames - just to see the difference that makes.
  13. It might be worth experimenting with recording options - SSD attached to USB3.0 port, or maybe SSD backed NAS on a gigabit network? SD card storage is painfully slow.
  14. It is quite an OK solution for deep sky imaging, where the requirements for data transfer speeds are nowhere near as demanding, but I'm not sure it is a good solution for planetary. Maybe you'll need some sort of laptop for the planetary role.
  15. What computer are you using? Is that RPI with astroberry or something more powerful?
  16. That might be the limiting factor. The real benefit of modern planetary cameras is the ability to do very high FPS, but in order to do that, several things must be optimized. The software needs to be written with planetary imaging in mind, the computer must be able to handle full USB 3.0 speeds, and the storage should be able to record data at those transfer rates (see the rough estimate below). You want to record at least 150-200fps or higher if possible. If you record at 150fps for say 200s (a bit more than 3 minutes) - that will result in 30000 subs. I'm not sure what planetary capture software is compatible with astroberry, so that is worth looking up. Oh, here we go - this is from the astroberry website: oaCapture for planetary imaging, FireCapture for planetary imaging. Maybe look up one of those two for planetary imaging?
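     A rough back-of-the-envelope sketch of the sustained write speed needed, assuming (my assumptions, not from the post) a 640x480 ROI, 16-bit capture and 200fps:

     ```python
     width, height = 640, 480     # typical planetary ROI (assumed)
     bytes_per_pixel = 2          # 16-bit capture; use 1 for 8-bit
     fps = 200

     data_rate_mb_s = width * height * bytes_per_pixel * fps / 1e6
     print(f"~{data_rate_mb_s:.0f} MB/s sustained write needed")   # ~123 MB/s
     ```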
  17. Which part confuses you? Here is the QE graph of the ASI462: From it - it is obvious that peak QE is around 820nm and that such a camera is useful for IR imaging. It also shows that QE is lower in the visible part of the spectrum. Here is the published graph for the ASI662: It is an absolute QE graph and peak QE is over 90%, which is also quoted by ZWO. Similarly, here is the graph for the ASI678: Blue is strongest with a peak at 83%. The rest of the specs are also available, and I personally don't doubt them (much) as I have tested several ASI cameras and the specs (mostly) match those published (read noise is reported a bit optimistically low in my opinion - but not by much).
  18. No, it's not your fault. It's mine. I keep insisting on some things that I should probably let go ... In any case, I think my advice is very sound. I'll explain the details briefly (all verifiable things; see also the sketch below):
     - if you increase "magnification" while capturing (by using barlows), for the same exposure length, signal strength goes down. This is a well known fact and can even be tested visually - increase magnification and the image becomes dimmer. This is because the light spreads over a larger surface so there are fewer photons per unit surface.
     - lower signal = lower SNR. This is fairly obvious since SNR is signal/noise, and if we decrease the signal we decrease its ratio to the noise.
     - a telescope can resolve only so much, and that depends on aperture size.
     That all adds up to the very sound conclusion that you should not make the image at capture time larger than it needs to be (than the telescope can resolve) - because your subs will either have lower SNR - which makes it harder for the software to determine sharpness and stack properly, and impacts the overall final SNR, which prevents you from sharpening the image as much as you would like - or it forces you to stack more subs than you should (lower quality subs), or it forces you to use longer exposures to recover signal - which is also not good in most circumstances because we want to freeze the seeing. In most cases this happens around 5ms. If the atmosphere is really stable - this time can be longer. Some planetary imagers enjoy very good and stable skies and this allows them to use longer exposures without seeing-induced motion blur. This in turn lets them capture a more zoomed in image without as many of the ill effects mentioned above; however, that does not mean they are going to capture additional detail, as one can't defy the laws of physics.
     What you do after capture really does not matter, so if you decide to drizzle or scale the image up - that is fine. Just be aware that you can't add detail that way (that would be like enlarging an image and expecting it to remain as sharp - that does not happen).
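     A minimal sketch of the first point, assuming the per-pixel signal scales with 1/F^2 (the same light spread over an area that grows with the square of the F-ratio); the function name is mine:

     ```python
     def relative_signal(f_ratio, reference_f_ratio):
         """Per-pixel signal relative to a reference F-ratio, scaling as 1/F^2."""
         return (reference_f_ratio / f_ratio) ** 2

     print(relative_signal(24, 12))   # doubling the F-ratio leaves ~25% of the signal per pixel
     ```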
  19. What do you use for capture? Best to use SharpCap or FireCapture for planetary.
  20. 30 seconds is way too short a capture. If you stacked only 30 frames and got that much SNR - it points to your subs being too long. You'll have much more difficulty sharpening the image if you let the atmosphere "dance" during a single exposure and create motion blur. What you want is a short enough exposure to freeze any motion of the atmosphere. This will help the software select the best frames, and it will be much clearer where seeing is good and where it's not. Use an exposure of 5ms (regardless of what the preview looks like - even if it is too dim or the histogram does not look good) and shoot a 3-4 minute video at a smaller ROI - like 640x480 - to achieve higher FPS. You want to end up with at least 20000 frames in your ser. Then select the best few percent and stack those (the best 2% will give you 400 subs to stack, for example). The arithmetic is sketched below.
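     The frame-count arithmetic as a tiny Python sketch (numbers are the ones from the post):

     ```python
     exposure_ms = 5
     capture_s = 4 * 60                                # a 3-4 minute video
     max_frames = capture_s * 1000 / exposure_ms       # upper bound if the camera keeps up
     kept = 0.02 * 20_000                              # best 2% of a 20000-frame capture
     print(int(max_frames), int(kept))                 # 48000 possible frames, 400 stacked
     ```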
  21. But of course, I was just trying to point out that I wasn't trying to start an argument on image size either. I mentioned you and used your images because the OP first referenced someone with an F/6.3 scope who makes excellent large images, so I figured that it must be you. I do appreciate that some people see the difference between the above two images. I personally don't. In fact I don't see the difference even if I enlarge the reduced image to be comparable to the original and then mathematically subtract the two. The difference between them is just a small amount of noise, nothing else.
  22. My post was not meant to provoke an argument on image size. I fully understand your preference for a larger scale image. It is easier for you to view it like that and probably to process as well. I was just trying to point out that physics says no additional detail will be captured beyond a certain image size, and that it is sensible not to capture a larger image than that. One can certainly choose to capture a larger image (or make it larger in software - by use of drizzle or resampling) for other reasons / convenience in either processing or viewing, but capturing a larger image by use of a barlow does reduce the quality of the capture in terms of SNR, given that there is a limited time window in which the capture is done and we can't arbitrarily boost SNR by throwing more imaging time at it - and one should be aware of that when reaching an informed decision on the image scale they want to use.
  23. Very nice data and some very nice renditions. Here is my quick process:
  24. The main way to get good images is to understand how planetary imaging works. Pixel size is not important at all. You can get the same size image as those you've seen from Neil if you use a barlow and some tricks in processing (like increasing size with the drizzle factor) - but those result in empty resolution. Some people simply choose to make a larger image as it is easier for them to process or look at - but in reality, the telescope can't resolve the image at that scale (or rather - every telescope has a limit to what it can resolve). Let me show you using one of the images @neil phillips produced: And the same image reduced x2 in size: Now the question is - can you see a feature in the above image that you can't see in the smaller image? The bottom image looks properly sharp, and the top image just looks like a scaled up version of the bottom image without additional detail. I'm writing all of this to show that it is not the size of the image that determines the level of captured detail. It is the aperture of the telescope that is the limiting factor. If you wish - you can certainly do as Neil does - and use a barlow and drizzling to produce a zoomed in version of the image. However, that won't produce the sharpness that is expected at that scale.
     Back to planetary imaging - here are the key tips:
     - yes, get a better camera with higher QE, faster frame rates and less read noise. That will help.
     - choose a barlow to match the pixel size (or larger if you want a larger image, but in my view that hampers your results). For OSC / color imaging, the best F/ratio for a given pixel size is pixel size * 4 - so for 3.75um pixels that is F/15, for 2.9um pixels that is F/11.6, and so on (see the sketch after this list).
     - keep the exposure length short, like 5ms. Don't use the histogram to set the exposure length so that the image is "bright enough". Stacking will make sure the final image can be made bright enough; individual frames don't need to be. The point of short exposures is to freeze the seeing so that you don't add more blur to your image.
     - get plenty of frames - a few tens of thousands. This is why a fast camera helps. Use ROI to limit the amount of data you record (640x480 is enough for planets). Use a USB 3.0 computer that is capable of recording that amount of data in real time (use an SSD for storage).
     - optimize your imaging environment in the same way planetary observers optimize theirs. Let the scope cool down. Be patient and wait for periods of best seeing (check the seeing forecast so you know what to expect, or learn to judge seeing by eye). Don't shoot over large surfaces that accumulate heat during the day and radiate it at night - meaning paved roads, large bodies of water, houses that use heating at night. Wait for the planet to be sufficiently high in the sky.
     - when stacking, select only the few best percent of frames (use AS!3).
     - the rest is down to careful processing / wavelets / deconvolution / etc - that is simply something you practice.
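     To pick a barlow for the "pixel size * 4" rule, here is a minimal Python sketch (function name and the example scopes are mine, just for illustration):

     ```python
     def barlow_needed(pixel_size_um, native_f_ratio):
         """Rough barlow factor to reach the pixel_size * 4 F-ratio for OSC imaging."""
         target_f = pixel_size_um * 4
         return target_f / native_f_ratio

     print(barlow_needed(2.9, 6.3))    # ~1.8x for a 2.9um camera on an F/6.3 scope
     print(barlow_needed(3.75, 10.0))  # ~1.5x for a 3.75um camera on an F/10 SCT
     ```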
  25. I just love these discussions over such long periods - maybe I'll have to wait a few years for any sort of reply/feedback. Here is a simple tutorial on how to do such background removal in gimp:
     - Open the original image and convert it to 32bit float per channel.
     - Next use Filters / G'MIC-Qt, select Details, and under Details select wavelet decomposition.
     - Select the number of layers so that the last layer is a nice background (say 7 in this case). This will create 7 layers.
     - The last one is called residual - make a copy of that layer.
     - Select all layers except that copy and merge them back into a single layer (use merge visible layers, with all visible except the copy - set it to not visible, and move the residual to the top as well).
     - Finally, set the mode of that residual to subtract. Here is the result:
     This will result in some clipping of the blacks - to avoid that, increase the brightness of the original layer or decrease the brightness of the residual layer by a tiny bit. Here is the image and histogram after increasing the brightness a bit:
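     The steps above are GUI-based; for anyone who prefers scripting, here is a rough Python sketch of the same idea. It approximates the residual (large-scale background) with a strong Gaussian blur rather than the exact wavelet decomposition used in G'MIC, and the file names, sigma and brightness offset are my placeholder values that need tuning per image:

     ```python
     # Rough scripted analogue of the GIMP/G'MIC workflow: estimate the large-scale
     # background (the "residual" layer) with a strong blur and subtract it.
     import numpy as np
     from scipy.ndimage import gaussian_filter
     from imageio.v3 import imread, imwrite

     image = imread("input.png").astype(np.float32) / 255.0   # hypothetical file name

     # Sigma plays the role of the number of wavelet layers: larger = smoother background.
     sigma = (64, 64, 0) if image.ndim == 3 else 64
     background = gaussian_filter(image, sigma=sigma)

     result = image - background
     result += 0.05                      # small brightness offset to avoid clipping blacks
     result = np.clip(result, 0.0, 1.0)

     imwrite("background_removed.png", (result * 255).astype(np.uint8))
     ```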