Everything posted by vlaiv

  1. The first image is over sampled by a factor of 2, the second by a factor of 2.5. Not sure how this happened - you probably resized the planet in one of the two images, since they come out at different sizes, and that should not happen for the same setup (same camera + barlow, if the barlow distance was constant). In any case - try just the camera without the barlow, or maybe the barlow element placed really close, to give you x1.5 at most.
  2. Now I'm completely lost ... I'm guessing you are replying to my post above? There I'm simply stating that the fact that Powermates start at x2 and that people use them on SCTs (and there is no reason not to use them visually, for example) has nothing to do with proper sampling. If you want to sample at the optimum rate and you have a Powermate and an SCT, then why not choose a camera with larger pixels? I hope this was not perceived as me insulting people's equipment. Again, I was not speaking to anyone in particular, nor putting down people who own SCTs, nor SCTs for that matter. I simply emphasized that the laws of physics are what they are and can't change, and if one chooses to use F/20 or F/22 with, say, a 2.9µm pixel camera - well, they will be over sampled. The comment about a suitable telecentric lens again referred to the fact that there is indeed no telecentric lens for sale that magnifies less than x2 (that I'm aware of). In the end - I simply offered a solution: don't use a telecentric lens, use a barlow instead. For this application it makes no difference optically (unlike visual use - where a telecentric does not extend eye relief - or Ha solar, where etalon performance depends on it) and a barlow can be tuned to the needed magnification. The alternative, for anyone wanting to keep their telecentric lens and get optimum sampling, is to change the imaging camera. For F/20, for example, I'd use an IMX429 mono with its 4.5µm pixel size.
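To make the camera-swap suggestion concrete, here is a minimal sketch (assuming the F_ratio = 2 * pixel_size / lambda relation derived further down, and green light at 540 nm as my assumed wavelength) of the largest pixel that still critically samples at a given F/ratio:

```python
def optimum_pixel_size_um(f_ratio, wavelength_nm=540):
    """Largest pixel size (microns) that still critically samples at this F/ratio."""
    return f_ratio * (wavelength_nm / 1000.0) / 2.0

print(optimum_pixel_size_um(20))   # ~5.4 um - so 4.5 um pixels suit F/20 far better than 2.9 um
```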
  3. I'm not following. What do barlow lenses and the way they work have to do with anything here? The fact that Powermates start at x2 and that people use them on SCTs has nothing to do with proper sampling. You can't expect the laws of physics to change because you don't have a suitable telecentric lens for your SCT. In any case - you don't benefit from using a Powermate over a barlow for planetary work, and with a barlow you can control the magnification factor by adjusting its distance from the sensor. It is very easy to dial in the required barlow amplification to reach the wanted F/ratio.
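As a rough illustration of tuning a barlow by spacing - a sketch using the usual thin-lens relation M = 1 + d / f_barlow, where f_barlow is the magnitude of the barlow element's (negative) focal length; the 100 mm value below is just an assumed example, real barlows vary:

```python
def barlow_magnification(d_mm, f_barlow_mm=100.0):
    """Magnification grows with element-to-sensor distance d."""
    return 1.0 + d_mm / f_barlow_mm

for d in (50, 100, 150):
    print(f"{d} mm spacing -> x{barlow_magnification(d):.2f}")   # x1.50, x2.00, x2.50
```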
  4. Ok, here is a very simple line of reasoning you can follow to understand why it holds true for both small and large scopes. A 12" scope will resolve double what a 6" scope does - as it is twice as big. This means that we need to "zoom in" twice as much with the large scope to fully exploit this, right? This in turn means that the focal length of the larger scope needs to be twice the focal length of the small scope. Say the 150mm (6") scope is using 2250mm FL (F/15) - this means the 300mm (12") scope will need to use double that, 4500mm of focal length, to zoom in twice as much. But then we have 300/4500 = F/15. The same F/ratio. When the F/ratio is fixed - "zoom" follows the aperture size (because focal length follows aperture size). This is why we calculate the F/ratio based on pixel size for ANY aperture - it holds true for both small and large scopes. But why do we calculate F/ratio based on pixel size? Well - again, it is a simple thing. If the image in the focal plane is of a certain size in millimeters and we use larger pixels - we will use fewer pixels to record the image. But for a given amount of detail there is an exact number of pixels needed to record that detail, so if the pixels are larger - we need to make the image in the focal plane larger, i.e. increase the F/ratio - because that increases the focal length and hence the size of the image in the focal plane.
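The same argument in a few lines of code, using the 206.3 constant from the later post to get "/px (the 3.75 µm pixel size is assumed just for illustration):

```python
def sampling_rate(pixel_um, fl_mm):
    """Arc seconds per pixel for a given pixel size and focal length."""
    return pixel_um * 206.3 / fl_mm

for aperture_mm, fl_mm in ((150, 2250), (300, 4500)):
    print(f"{aperture_mm} mm at {fl_mm} mm FL: F/{fl_mm / aperture_mm:.0f}, "
          f"{sampling_rate(3.75, fl_mm):.3f}\"/px")
# 150 mm: F/15, 0.344"/px ; 300 mm: F/15, 0.172"/px - twice the "zoom", same F/ratio
```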
  5. By the way, look what happens when I reduce that Mewlon 300 image to 50% of its size (green channel) and do spectral analysis on it: it is now almost perfectly sampled (actually, now it's just a tiny bit under sampled - I should not have reduced it all the way to 50% - maybe 60% or so was the actual optimum. I don't know what F/ratio was used to produce the image, so I don't know exactly how much it should be reduced - I eyeballed it at 50% of the original).
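For anyone wanting to repeat this kind of check, here is a minimal sketch (the file name is a placeholder; requires numpy and Pillow) of the log-magnitude spectrum inspection described above:

```python
import numpy as np
from PIL import Image

# Take the green channel, as in the post
img = np.asarray(Image.open("planet.png").convert("RGB"), dtype=float)
green = img[:, :, 1]

# Centered 2D FFT, log-stretched magnitude
spectrum = np.fft.fftshift(np.fft.fft2(green))
log_mag = np.log1p(np.abs(spectrum))

# Normalize to 8 bit for inspection: properly sampled data reaches the edges,
# oversampled data shows a dark ring around a central disk of signal
out = (255 * (log_mag - log_mag.min()) / np.ptp(log_mag)).astype(np.uint8)
Image.fromarray(out).save("planet_spectrum.png")
```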
  6. You can tell that they are over sampling by the size of Jupiter in their images. Just for fun - let's do some calculations to see what size Jupiter can be when properly sampled. Let's take a 16" scope (others will just scale with aperture) and say 3.75µm pixel size. That requires F/15 to sample properly. 406mm at F/15 = ~6100mm of FL. At that FL, a 3.75µm pixel samples at 3.75 * 206.3 / 6100 = ~0.127"/px. Jupiter's largest apparent size is about 50", so when closest and properly sampled by a 16" scope it will give a disk size of 50/0.127 = ~394px in diameter. An 8" scope will produce only half of that, so about 200px, and a 4" - only 100px across.
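The same arithmetic wrapped in a function, so other apertures and pixel sizes are easy to plug in (this assumes the pixel-size-times-4 rule for the F/ratio from the derivation below, i.e. lambda = 0.5 µm):

```python
def jupiter_diameter_px(aperture_mm, pixel_um=3.75, jupiter_arcsec=50.0):
    f_ratio = 4 * pixel_um                 # 3.75 um -> F/15 (lambda = 0.5 um)
    fl_mm = aperture_mm * f_ratio          # focal length for proper sampling
    arcsec_per_px = pixel_um * 206.3 / fl_mm
    return jupiter_arcsec / arcsec_per_px

for ap in (100, 200, 406):
    print(f"{ap} mm: ~{jupiter_diameter_px(ap):.0f} px across")
# 100 mm: ~97 px, 200 mm: ~194 px, 406 mm: ~394 px
```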
  7. This is in fact a very interesting comparison, as we have 12", 14" and 16" scopes and images produced with them. The images are all the same size - so we can compare who is sampling properly and who is not. Here they are - from left to right, 12", 14" (Damien's EdgeHD) and finally 16" - these are frequency spectra - log version, maximally stretched. A properly sampled image will have its circle touching the edges of the image - so all frequencies from the center to the edge are used. When an image is over sampled - frequencies towards the edge will be missing - there won't be signal at those high frequencies and there will be a "dark" region towards the edge. We can clearly see the following from the above spectra:
     - each larger aperture has a bit more resolving power than the last, as each spectrum is larger in frequency extent
     - all images are over sampled by at least a factor of x2
     - the Mewlon is the best planetary scope / sharpest optics (or had the best conditions) - as its image could be sharpened the most - the most rapid transition from the region with signal to the region without (best defined circle)
     - the other two scopes have a softer outer edge to the circle - meaning they were not sharpened as well as they could have been (or maybe noise won't allow it) - signaling either inferior optics or poorer conditions.
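To put a rough number on "over sampled by factor X" from such a spectrum, one can compare the radius where the azimuthally averaged log spectrum hits the noise floor with the Nyquist radius (the image edge). A sketch only - the threshold is ad hoc, my assumption, not a calibrated method:

```python
import numpy as np

def oversampling_factor(log_mag):
    """Estimate oversampling from a centered log-magnitude spectrum (2D array)."""
    h, w = log_mag.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices(log_mag.shape)
    r = np.hypot(y - cy, x - cx).astype(int)

    # Azimuthally averaged radial profile of the log spectrum
    profile = np.bincount(r.ravel(), weights=log_mag.ravel()) / np.bincount(r.ravel())

    nyquist = min(cy, cx)
    noise_floor = profile[nyquist - 10:nyquist].mean()   # level just inside the edge
    below = np.flatnonzero(profile[:nyquist] <= noise_floor * 1.05)
    signal_radius = below[0] if below.size else nyquist  # properly sampled -> ~1.0
    return nyquist / max(signal_radius, 1)               # e.g. ~2.0 -> 2x oversampled
```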
  8. Sure, the explanation goes like this (just a bit of math fiddling really): on that wiki page you can read about spatial cutoff frequency - the maximum spatial frequency possible for a telescope of a certain aperture. Aperture and focal length are in fact combined in the F/ratio - because the size of features in the focal plane (in units of length, like millimeters or microns) depends on focal length, while aperture dictates how much can be resolved in angular units - the two combined give the max spatial frequency. The formula is: cutoff_frequency = 1 / (lambda * F_ratio), in cycles per millimeter. From that formula it is very easy to arrive at this: spatial_wavelength = lambda * F_ratio. We just invert the frequency from cycles per millimeter to a wavelength of millimeters per cycle. Here we use millimeters, but we can use microns or meters as well. Next we combine this with the Nyquist sampling theorem, which says you need two samples per shortest spatial wavelength (or sampling at twice the max frequency), so we can write: spatial_wavelength = 2 * pixel_size (two pixels per wavelength, or sampling twice per wavelength). Combine the two: 2 * pixel_size = lambda * F_ratio, rearrange for F/ratio, and get: F_ratio = 2 * pixel_size / lambda. This is the same formula I gave you above in the post explaining how to get the proper F/ratio from pixel size (and why 4 or 5 works as a multiplier for the pixel size in microns).
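Plugging numbers into the final formula, to show where the x4 / x5 multipliers come from (lambda = 0.5 µm gives x4; 0.4 µm would give x5):

```python
def critical_f_ratio(pixel_um, wavelength_um=0.5):
    """F/ratio at which a given pixel size critically samples (Nyquist)."""
    return 2 * pixel_um / wavelength_um

for px in (2.9, 3.75, 4.5):
    print(f"{px} um pixels -> F/{critical_f_ratio(px):.1f}")
# 2.9 um -> F/11.6, 3.75 um -> F/15.0, 4.5 um -> F/18.0
```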
  9. https://en.wikipedia.org/wiki/Spatial_cutoff_frequency
  10. That is a tough one. In my view - no sub is so bad that it can't be used, if you use it right. They all contain some information - the only issue is extracting that useful information without taking too much garbage along with it. Ultimately - this is something people should not have to concern themselves with; after all, a computer is much better at that sort of thing, so the computer should decide how to best utilize such a sub. I don't think we are quite there yet in terms of software, though.
  11. 1) Not much, but possibly some. 2) Probably not, or rather - it depends how much time you want to spend. You can always improve your image by collecting more data and then being selective about which subs you include in the final stack (like the lucky imaging approach - discard all but the best subs) - but the thing is, with such an approach time grows "exponentially" (I'm using quotes because I don't think it is actually exponential growth in the mathematical sense - rather, it gets progressively slower to get better and better results). Think about it - there are only a couple of exceptional nights per year, and if you don't manage to image on those - you'll have to wait until next year (not literally - but you get what I mean) to accumulate more good subs.
  12. If you are going to use a DIY pinhole, then don't bother with calculations, as you don't know the essential part - the diameter of the artificial star. Put it as far away as is practical for you, but no less than say 30-40 meters for both scopes. There is another way to make an artificial star that you can try - it is even less involved than punching a tiny hole in aluminum foil. Get a very shiny and very small ball bearing. Take a torch / flashlight to illuminate it. Make sure the flashlight is one with a narrow beam and place it a few meters away from the ball bearing. The setup should look something like this: on the left is the OTA, and the torch shines at some angle onto the ball bearing. Distances are not to scale in the above image. Place the OTA at least 30-40 meters away and the torch a few meters from the ball bearing. In this setup, the specular highlight reflected off the ball bearing is your artificial star, like this: its size depends on the size of the light source causing it - that is why you want your flashlight far enough away that it causes only a tiny shiny dot.
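For the curious, a back-of-envelope sketch of why this works: a shiny sphere acts as a convex mirror with focal length R/2 (a quarter of the bearing diameter), so it shrinks the torch into a tiny virtual image. All numbers below are assumed example values, not measurements:

```python
def artificial_star_arcsec(bearing_diam_mm, torch_diam_mm, torch_dist_m, scope_dist_m):
    f_mm = bearing_diam_mm / 4.0                            # convex mirror: f = R/2 = D/4
    star_mm = torch_diam_mm * f_mm / (torch_dist_m * 1000)  # demagnified torch image
    return star_mm / (scope_dist_m * 1000) * 206265         # angular size from the scope

# 10 mm bearing, 30 mm torch head 3 m away, scope at 40 m:
print(f'{artificial_star_arcsec(10, 30, 3, 40):.2f}"')     # ~0.13" - star-like for most scopes
```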
  13. https://www.starnetastro.com/download/
  14. Yep, just two small issues - it's a bit out of focus and the core of M31 is slightly clipped. Otherwise, excellent data.
  15. Yep, it is a bit out of focus. Excellent data by the way - it goes deep.
  16. It is interesting that it only shows on one side of the image. Stars on the left side, although bright enough, don't show this effect. This usually points to the cause being closer (to the focal plane) than the actual aperture iris. When the iris is the cause - stars are affected all over the FOV, much like spikes from a Newtonian. When the whole field is affected - the cause is in the collimated beam. When only part of the image is affected - the cause is in the converging beam. Something similar happens when an OAG prism protrudes into one side of the FOV - then only stars on that side of the FOV get a single spike because of it. My guess is that it is actually a hair or something similar on the back side of the lens, on one side of it.
  17. Ok, check out this blog post: https://rk.edu.pl/en/planetary-maps-and-de-rotation-winjupos/ It outlines both use cases - align images with RGB example, but it also talks about derotating whole videos.
  18. Say for example you have 3 videos, each 3 minutes long, spanning say 11 minutes in total, each video being one color. Stacking will handle any rotation between the first and last frame of the first video. It will do the same for the second video, and for the third. This means that each video will be stacked without motion blur from rotation, but when you try to compose an RGB image out of them - these images will represent different times. There will be at least 8 minutes of difference between the first and last stack (if we assume the reference frame represents the midpoint of each video: with 11 minutes of total time, half of the first video is 1.5 minutes and half of the last video is 1.5 minutes, so the difference is 11 - 3 = 8 minutes between those two time points). In 8 minutes, features at the center will move about 4 pixels, according to the above calculation for an 8" aperture. The edges will still align, as features drift progressively more slowly as you move towards the edge of the disk. This means that in the center you will have misalignment of the R, G and B data - but not at the edges, so a simple "align RGB" won't help there - you need to derotate the actual stacks / channel images to align them properly. This is an example of why you would want to derotate the stack of a 3 minute video (but not the video itself). Another example: say you start your capture and record 3 minutes of video, but a cloud rolls in and blocks the view. It passes after 5 minutes and you then make another 3 minute video, and then you get another interruption for whatever reason (maybe your storage can only handle writing 3 minute videos). In any case - you now have short recordings, each of which can be stacked on its own without issues - but you want to combine the resulting images by stacking them. In order to stack them, you need to derotate them, for the same reason as with RGB - their centers won't align although their edges will. Now, if you've captured 15 minutes of video in a single run - you have two options. Either chop up the video into smaller fragments, stack each fragment, and then derotate each of those for a final stack of stacks. The alternative is to simply derotate the whole video and stack it in a single go, without worrying whether the stacking software can cope with the large difference between the first and last frames (or some of the first and some of the last frames, depending on what is left after quality-based rejection).
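A back-of-envelope version of that drift figure - a feature at the middle of the disk moves fastest, at roughly (disk radius) x (2π / rotation period); Jupiter's ~9.9 h period and ~50" disk are assumed, and the exact pixel count then depends on your sampling:

```python
import math

def center_drift_arcsec(minutes, disk_arcsec=50.0, period_h=9.93):
    """Apparent drift of a disk-center feature due to planet rotation."""
    omega = 2 * math.pi / (period_h * 3600)        # rotation rate, rad/s
    return (disk_arcsec / 2) * omega * minutes * 60

print(f'{center_drift_arcsec(8):.1f}" in 8 min')   # ~2.1" - several pixels at planetary scales
```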
  19. There are two modes you can use with WinJupos. The first is derotation of single images. This is a useful feature in several cases. One of them is stacking several stacks taken over a period of time - like you say. Another useful case is doing LRGB or RGB imaging with a mono camera and filters. You record, for example, a 3 minute video with each of R, G and B. Now your recording spans up to 10 minutes or so (9 minutes of footage plus filter swaps / refocusing in between). In order to align the colors properly - two of the 3 colors need to be derotated to match the base color. Then there is derotation of a whole video, where individual frames are derotated prior to stacking - in much the same way as the still images above - depending on their time stamp (which determines how much derotation to apply, based on how much time has passed from the reference moment to the moment of that frame). That is useful if one records a long video but does not want to slice it into pieces, create individual stacks and images, derotate those and then stack again - but wants to be able to stack the whole video at once. This can/should be done with videos longer than the said 3-4 minutes.
  20. I think you are going at it from the "wrong side". If your lens has 50mm FL and is an F/1.4 lens - that means it has 50mm / 1.4 = 35.7mm of aperture. If it actually had the 77mm of aperture the front lens suggests, it would be a 77/50 = F/0.65 lens. Ok, so we have two "confusing" things here. The first is the F/ratio notation, where the ratio is the number you divide by (below the fraction bar, not above it) - which can cause confusion. A 200mm / 1200mm scope is an F/6 instrument, right? Note the position of the numbers: 200 / 1200 = 1/6 = an F/6 instrument. This can also be read as "the focal length is x6 the aperture". If the focal length is 50mm and the aperture were 77mm, that sentence would read "50mm focal length is ~x0.65 the aperture of 77mm" - so the actual F/number in that case would be F/0.65. The second thing that causes confusion is - how can a lens with such a large opening operate at an effective 35.7mm of aperture? Lens design is complicated and uses many elements (like 8 or 9). The mechanical aperture / iris is inside the lens and acts as the aperture stop. It is never seen in photos, even when stopped down, because it is effectively "at infinity", optically speaking - much like the secondary in certain telescope designs. It sits in a collimated light beam and acts as an aperture stop. But why put in a large front lens then? It has to do with the correction of various optical aberrations. First - the lens must have a flat field. The shorter the focal length, the stronger the curvature, yet such lenses have a flat field over at least a 35mm diameter. It is also a rather wide field instrument (with that short FL) - so it needs to gather light from all directions and focus it onto the sensor. You will notice that the shorter the FL, the more bulged and curved the front lens is, because of that. That one is a 12mm lens at F/2.8. From the image it would look like an F/~0.2 lens with around a ~50mm front opening - but of course it is not. In any case - not all of the front lens is used for all incoming light. Depending on the incident angle of the light, it will use just a certain part of the front lens (the one "facing" that direction) - to minimize the problems that arise from the glass being too curved with respect to the incident light rays. In the end - things that block light where the beam is collimated do not cause vignetting or shadows. Only things that block light where it is converging do that, and the more "concentrated" the light is - i.e. the closer to the focal plane - the more concentrated the shadow. Vignetting happens when the converging beam is partially blocked - like a too-narrow connection, or maybe small-diameter filters where the cell blocks the light - or a too-small secondary.
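The ratio in both directions, as plain numbers:

```python
def working_aperture_mm(fl_mm, f_number):
    return fl_mm / f_number

def f_number(fl_mm, aperture_mm):
    return fl_mm / aperture_mm

print(round(working_aperture_mm(50, 1.4), 1))   # 35.7 mm - a 50 mm F/1.4 lens
print(round(f_number(50, 77), 2))               # 0.65  - what a 77 mm front element would imply
print(f_number(1200, 200))                      # 6.0   - the 200/1200 scope example
```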
  21. Yes, you are properly sampled and there is nothing to be gained by drizzle.
  22. I'm aware of what astronomy tools gives, and I think it is completely wrong. I started a thread with explanations as to why it is wrong, giving a much more accurate account of the whole thing (with background info and sources), but nothing was really done to remedy it, and apparently many members disagree with me (or with the facts, not sure how to put it). I was not referring to the image itself, but to the color information in the image - it does look better, or rather more accurate. The non-drizzled version seems to have a teal tone to it, with reddish dust lanes and a reddish/white core. The drizzled version looks much more realistic in terms of color - the central part of the galaxy mainly contains yellow stars and should therefore be yellow. There are a few more hot young stars in the spiral arms, which are light blue. Dust lanes are brown. All of it matches what is expected in such a galaxy.
  23. That is in pixels, right? Then you are right where you want to be (maybe a bit over sampled, actually) - if your FWHM is 1.6px, you are properly sampled. If it is less than that, you are under sampled - higher than that and you are starting to get into over sampled territory. Don't bother with drizzle, but do look into that color. It does look better in the drizzled version (not sure what the reason is).
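That rule of thumb, turned around so it is easy to apply to any stack - given stellar FWHM in arc seconds, the matching sampling rate is roughly FWHM / 1.6 (the 1.6 figure being the one used in the post above):

```python
def target_sampling(fwhm_arcsec):
    """Sampling rate ("/px) that puts stars of this FWHM at ~1.6 px FWHM."""
    return fwhm_arcsec / 1.6

print(f'{target_sampling(3.2):.1f}"/px for 3.2" FWHM stars')   # 2.0"/px
```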
  24. I would not trust it. What is the FWHM of your stack (in either pixels or arc seconds)?
  25. Here it is: the second post by @neil phillips shows that there is absolutely no difference between a derotated and a non-derotated stacked video for a 3 minute capture.