Everything posted by vlaiv

  1. Yes, a x1.5 barlow is just right. F/ratio should be (at most) x5 the pixel size - so in your case that is 3.75 x 5 = F/18.75. Since the scope is F/12, adding a x1.5 barlow gives you F/18 - which is just right.

     Thing is - you should verify that the scope is actually operating at F/18, because both the barlow and the Mak can change that. Magnification of a barlow element depends on the distance between the element and the sensor - more distance gives more magnification. Maksutov scopes focus by shifting the primary mirror - and the actual focal length depends on the distance between the primary and secondary mirrors. This means that as you focus - you also change the focal length of the instrument (and hence the F/ratio). This is by a small factor, but it still happens. With a camera this is noticeable, as we usually remove the 1.25" diagonal when we attach the camera, and that is often as much as 70-80mm of back focus difference.

     In any case, here is the calculation of the effective F/ratio in this image. The diameter of Jupiter in the image is ~222px. Currently, the apparent diameter of Jupiter is ~48.7 arc seconds - and that makes it ~0.21937"/px. Given that your camera has 3.75um pixel size, the focal length follows from 0.21937 = 3.75 * 206.3 / FL => FL = 3.75 * 206.3 / 0.21937 = ~3527mm. The F/ratio is then 3527 / 150 = F/23.5.

     Even in this image, which is not resized - it seems that the magnification of the barlow is too high. It is operating closer to x2 than to x1.5. You are over sampled by a factor of 23.5 / 18.75 = ~1.25. In other words - if we do an FFT, the "circle" of signal should end up somewhere around 80% of the way towards the edge. Let's check that. Here it is a bit harder to see - as there is seeing influence present that restricts most of the data to the central part - but there is a "secondary" ring of data. Let me do a profile plot to see if we can figure it out: That still does not help much - I marked where the telescope optics limit is - I think that due to seeing the data probably ends a bit earlier - so in this instance seeing prevented you from getting the max resolution of the telescope, and actual detail effectively stops between the 300 and 400 mark in the graph.

     In any case - if you want to hit optimum sampling - you should adjust the barlow / sensor distance to get x1.5 magnification. You can do this during daytime - aim the scope at a distant target that you can easily measure - like an apartment building or a bridge - something with clean lines. Take an image without the barlow - and measure some distance. Then add the barlow, but put a variable spacer between the barlow and the sensor - and take images while changing the distance - each time measuring the same feature, until you get it to be x1.5 longer than in the baseline image.
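     To make the above calculation easy to re-run with your own measurements, here is a minimal Python sketch of the same arithmetic (not from the original post - the numbers are just the values quoted above):

         # Effective focal length / F-ratio from a planet's measured disk size
         disk_px = 222          # measured Jupiter diameter in the image, px
         disk_arcsec = 48.7     # apparent diameter of Jupiter, arc seconds
         pixel_um = 3.75        # camera pixel size, micrometers
         aperture_mm = 150      # telescope aperture, mm

         sampling = disk_arcsec / disk_px                # ~0.21937 "/px
         focal_length = pixel_um * 206.3 / sampling      # ~3527 mm
         f_ratio = focal_length / aperture_mm            # ~F/23.5
         optimum_f = 5 * pixel_um                        # F/18.75 criterion
         print(f"FL ~{focal_length:.0f} mm, F/{f_ratio:.1f}, "
               f"over sampled x{f_ratio / optimum_f:.2f}")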
  2. Ok, so my first impression is that you were using much too long exposures for that capture. @Sinemetu63 sorry we are having this off topic discussion on your lovely Jupiter image post (I at least hope it will be beneficial to people).

     You are probably looking at the histogram and using the brightness of the view to determine exposure length. That is the wrong approach. Exposure length should be set to correspond to the coherence time. We can't know that, as it is a variable that depends on local conditions at the time of recording - but it is usually 5-6ms for the aperture sizes amateurs use. You should set exposure length to 5ms even if the image looks too dark (you can go to 10ms if seeing is exceptional and there is almost no movement in the atmosphere).

     If I do the analysis - two things are apparent. First is the "artificial" enlargement of the image by drizzle. Any time we rescale the image there is a different signature - which is square. Aperture leaves a round signature in the frequency data. The other thing we see is that the signal is constrained to the very center - which means that the data is over sampled by a large margin.

     For the ASI533 the pixel size is almost the same as the ASI385 - so the optimum F/ratio is the same F/18.6 - which means 150 * 18.6 = ~2800mm. If you used a x2 barlow and then drizzled x1.5 - you effectively raised F/12 to F/12 * 2 * 1.5 = F/36. That is x2 more than you need. From the above FFT - things are even worse, as there is hardly any signal past the initial 20%, so the image is over sampled by a factor of x5. Part of this is due to over sampling with respect to aperture size, and the rest is simply due to seeing (seeing lowers the max theoretical resolving capability of the scope further) - in particular with longer exposures.
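     The combined effect is just multiplicative, so a couple of lines of Python (a sketch using only the numbers quoted above) shows the x2 over sampling:

         # Effective F-ratio when barlow and drizzle are combined
         native_f = 12                      # native F-ratio of the scope
         barlow = 2.0                       # barlow magnification used
         drizzle = 1.5                      # drizzle upscale factor
         optimum_f = 18.6                   # optimum for ~3.75um pixels (see above)

         effective_f = native_f * barlow * drizzle      # F/36
         print(f"F/{effective_f:.0f}, over sampled x{effective_f / optimum_f:.1f}")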
  3. Sure. I'd be happy to do that, and even explain the tools - it is just ImageJ, which is an open source scientific image analysis tool (made for microscopy, but works well for astronomy - there is even an astronomy version called AstroImageJ that has photometric measurement and plate solving added - but I prefer to use the plain version loaded with plugins for this sort of thing). I can explain things in detail if you wish.

     It is basically a Fourier transform to examine the image in the frequency domain. Atmosphere and telescope act on the image in a similar way as an equalizer acts on sound. There are different frequency components to the image - just as there are in sound. We know the difference between low and high frequency sound by feel (or hearing, rather), but in an image it is coarse versus fine detail of sorts (low frequency is coarse detail and high frequency is fine detail). Blurring removes, or rather attenuates, that high frequency detail in the image.

     If you've ever seen the MTF of a telescope - which looks like this: That is exactly like the equalizer above - as we move from left to right we move from lower to higher frequency, i.e. from coarser to finer detail. This graph shows how much frequencies, or detail, are attenuated or muted. To the left there is almost no attenuation - but as we move to the right - finer detail is muted more and more - until we reach 0. This is the resolving power of the telescope / aperture - the point past which any smaller / finer detail is simply completely blocked.

     The ideal sampling rate is when this curve hits zero at the edge of the image in the frequency domain. That is related to the Nyquist sampling theorem and corresponds to two pixels per wavelength / cycle. In a 2D image - the above graph is actually a cone centered in the center of the image and slowly falling towards the edges. Here is a surface plot of the FFT of that properly sampled Jupiter image: You can see a similar shape that is high in the center and falls towards the edges. You want that falloff to hit the edge for a properly sampled image.

     Regarding the sampling - here is what can happen: The top diagram shows over sampling. In each diagram we capture all the data between the black and red vertical lines (black is just the origin and red is the highest frequency that we capture, depending on our sampling frequency - pixel size with respect to "magnification"). In the first case, over sampling, we capture all the data but we also capture the "empty" part where there is no signal. On its own - that is not a problem. The problem lies in the fact that in order to over sample - we must use too much "zoom", and that spreads light over more pixels than necessary, thus reducing signal per pixel and overall SNR (which is needed for the sharpening stage and so on). This also means that we need longer exposures to get good SNR, and we always try to use the lowest possible exposure length when doing planetary, to freeze the seeing. The second case is proper sampling - we capture all there is - no more, no less. The perfect case. The third case is under sampling. We simply clip some of the data that is there. Under sampling does not have many drawbacks, except that we lose some of the detail, as we did not "zoom in" enough to capture it. There is also the matter of aliasing, but that is a separate topic and not something one should worry about, as the high frequencies that alias are already very weak. Just to add - the black line is the center of the FFT image and the red line is the edge of the FFT image in the 2D case.

     Hope this somewhat explains what I've done and the reasoning behind all of this.
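     If you'd rather script this than use ImageJ, here is a rough Python equivalent of the frequency-domain check (a sketch, assuming numpy and Pillow are installed; "jupiter.png" is just a placeholder name for your stacked image):

         # FFT the grayscale planet image and print a radial profile of the
         # magnitude spectrum - where the profile flattens into the noise
         # floor is where real detail ends.
         import numpy as np
         from PIL import Image

         img = np.asarray(Image.open("jupiter.png").convert("L"), dtype=float)
         spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

         cy, cx = np.array(spectrum.shape) // 2
         y, x = np.indices(spectrum.shape)
         r = np.hypot(y - cy, x - cx).astype(int)

         # average magnitude at each radius = 1D "equalizer" view of the image
         profile = np.bincount(r.ravel(), spectrum.ravel()) / np.bincount(r.ravel())
         max_r = min(cy, cx)                   # stay inside the inscribed circle
         for radius in range(0, max_r, max(1, max_r // 20)):
             print(radius, profile[radius])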
  4. Almost there. How about those missing stars? Check this out as an example: First version (first post - version 1) at the same scale: I really like it when even the tiniest of stars are nicely rendered. When stars are "eaten up" like that in processing - to me, it signals "forcing" of the processing to some extent.
  5. Well, yes there is - even down scaling helps. The only difference between binning at the linear stage and binning processed data or down scaling it - is the level of SNR improvement.

     Binning at the linear stage has a precisely defined SNR improvement (much like stacking) and it is equal to the square root of the number of binned pixels. Binning 2x2 averages / adds 4 adjacent pixels (a group of 2x2) - and thus improves SNR by a factor of sqrt(4) = 2. This is the same as stacking 4 subs - SNR improvement is x2.

     When you bin processed data - things change. You can no longer expect SNR improvement like at the linear stage. It will be less - but there is no way of knowing exactly how much less. That depends on the level of processing. If you already applied some sort of noise reduction - then binning won't improve SNR much, as noise is already reduced.

     Down sampling similarly improves SNR less - unless it is a very "naive" type of down sampling, which trades even more sharpness / resolution for noise reduction. Ideal down sampling should really preserve SNR, but there is only one case that does that. That is nearest neighbor resampling when you decrease size by an integer factor (for example x2). Then SNR is preserved (all characteristics of the image except sampling rate are preserved) - because we don't alter the data in any way except "dropping" some samples (we for example just keep every other pixel if we reduce size by a factor of x2 - but we don't change any values). However, when you don't resample by an integer factor - you need to use some sort of interpolation function to calculate samples at non integer positions ("between pixels"). These interpolation functions do some smoothing of the data - and for down sampling that is actually a bad thing. That smoothing improves SNR. In other words - SNR improvement is actually a bad side effect of down sampling, and the better the resampling algorithm - the less SNR improvement there will be.
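     The sqrt(N) behaviour at the linear stage is easy to verify numerically - a small Python sketch (illustrative numbers, not from the post; requires numpy):

         # 2x2 binning of pure noise improves SNR by x2 (halves the standard
         # deviation), same as stacking 4 subs.
         import numpy as np

         rng = np.random.default_rng(0)
         signal = 100.0
         frame = signal + rng.normal(0, 10, size=(1000, 1000))   # SNR = 10

         # average each non-overlapping 2x2 block
         binned = frame.reshape(500, 2, 500, 2).mean(axis=(1, 3))

         print("SNR before:", signal / frame.std())    # ~10
         print("SNR after: ", signal / binned.std())   # ~20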
  6. I think that it can be seen. Here is 100% of the third image in the first post: Grain can be easily seen. Here is a 100% zoom from the second image: Here we can see artifacts from noise reduction (small "bubbles" in the background instead of noise). Here is the latest version - background: Stars are more pin point and to my eye the background looks the best - it looks natural enough (there is very fine grain just at the level of detection - which is in my view ok for the image - it shows the image is "natural" and it does not detract from the view). And a direct comparison of a low brightness part of the target - image 3 and latest (image 4):
  7. I like the stars in the first one, and the color scheme and overall look of the second one. All three should be binned x2 as they are way too zoomed in at 100% (over sampled) - that will help with noise and you won't need that much noise control (which is also visible as background artifacts in the images - 2 and 3 are worse in this regard).
  8. Great image. You should probably keep the barlow within the spacing that gives you x1.5, as this is over sampled. Frequency analysis says that you are x1.5 over sampled. From the Jupiter disk size - the image as is resolves to ~4200mm (or ~3500mm if you resized it to 120%), but the actual focal length should be ~x1.5 lower than this - or ~2800mm.

     By the way - this is the first time I've seen someone capture so close to the theoretical limit of resolution. The theoretical limit of resolution is where the F/ratio is x5 the pixel size - in this case the 3.75um pixel size of the ASI385 gives F/18.75, or a FL of 2812mm. The image has data that corresponds to about 2800mm of FL - so you managed to capture all there is, really.

     Here is the Fourier transform / spectrum of the image: You can clearly see that the signal is concentrated in the inner 66%: The rest is just noise. If we remove the outer part of the FFT like this: And do the inverse FFT - we get the same image (minus some noise). By the way - a properly sampled image looks like this: Very sharp and detailed when viewed at 100% (at least on my computer screen).
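     The relation between the FFT "signal circle" and over sampling is just a ratio - a tiny Python sketch using only the numbers quoted above:

         # FFT signal circle -> over sampling factor (values from the post)
         sampled_fl = 4200        # mm, focal length the image scale corresponds to
         signal_fraction = 0.66   # signal fills the inner 66% of the FFT

         resolved_fl = sampled_fl * signal_fraction     # ~2800 mm of real detail
         print(f"over sampled x{1 / signal_fraction:.1f}, "
               f"data supports ~{resolved_fl:.0f} mm FL")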
  9. Yes, this is known as the thin lens formula or thin lens equation, and it goes like this:

     1/f = 1/od + 1/id

     where f is the focal length of the lens (or telescope in this case), od is the object distance - in this case how far you want to focus, and id is the image distance - where the focal plane will be with respect to the lens.

     Say you want to focus to 15 meters away with 1.2m of focal length: 1/1.2 = 1/15 + 1/x => 1/x = 1/1.2 - 1/15 = (12.5 - 1) / 15 = 11.5 / 15. x is therefore 15/11.5 = ~1.30435 meters, or ~1304mm. You already have 1200mm of focal length when you focus at infinity - this leaves an additional 104mm to be "added". You need to rack your focuser out an additional 10.4cm from the infinity focus position (which really means adding a 10cm extension tube).
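     For convenience, the same worked example as a small Python function (a sketch - distances in meters, using the thin lens formula above):

         # Thin lens formula: 1/f = 1/od + 1/id
         def image_distance(f, od):
             return 1 / (1 / f - 1 / od)

         f = 1.2    # focal length, m
         od = 15    # object distance, m

         id_m = image_distance(f, od)            # ~1.30435 m
         extension_mm = (id_m - f) * 1000        # ~104 mm of extra focuser travel
         print(f"image distance: {id_m * 1000:.0f} mm, "
               f"extension: {extension_mm:.0f} mm")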
  10. An alternative is to get a 2.9um camera instead of the 678 and use the APM barlow at ~x2.7.
  11. I think that you should probably consider a different barlow if you'll shoot at F/10. According to Bill Paolini - it works well down to x2.2: https://www.cloudynights.com/topic/542326-apm-coma-correcting-barlow/#entry7310653 That is the magnification you'd need to get x5 the pixel size. Btw, if you go for the x2.2 barlow and F/10 - then you'll have about a 2mm radius of diffraction limited field (4mm diagonal). That is 1600px by whatever is offered as height for that width as ROI (since the diagonal is ~8.8mm - you want a x2.2 smaller ROI to get the diagonal to be ~4mm).
  12. Hope you don't mind me giving a critique - I think that the outer reaches of the galaxy are still somewhat "flat" / "cartoon" like. A natural look (at least in my view) would have a much more gradual transition and an almost ghost like appearance of the outer reaches of the galaxy.
  13. Maybe you confused SNR with noise. SNR is signal to noise ratio. It is important for the signal to be stronger than the noise in order to be able to show it (see things in the image). High SNR means that the signal is significantly stronger than the noise.

      That is only true for a hypothetical camera that has read noise equal to zero. The problem is that we don't have such cameras - even the best camera has some read noise. If you image for an hour and take a single exposure - you will have all the things that depend on time (signal and noise) accumulated, and one read noise "dose" added to it, as you read the sensor only once. Now imagine you instead take two half hour exposures - everything will be the same - except this time you'll have two read noise "doses" added. Similarly, if you take 60 exposures - each one minute long - things will add up and the only difference this time will be that you have 60 read outs - so read noise is added x60 times (you can see how, if read noise were equal to zero, any exposure length would produce the same result - since 0 x 60 is still zero).

      You might think that adding x60 or x120 read noise doses makes a lot of difference - and sometimes it does and sometimes it does not. It depends on other factors - because noise does not add "normally" like signal does. It adds like linearly independent vectors (square root of the sum of squares). This form of addition has an interesting property - if the two vectors you are adding have significantly different magnitudes - the result is not much different from the larger vector - you can see this in the following diagram: The hypotenuse is almost as long as the longer side in the above right angled triangle, yet we calculate it as the square root of the sum of the squares of the sides.

      That is why we talk about "swamping" the read noise with some other noise source - like LP noise. Once there is a significantly larger noise than read noise - the total noise will be just a tiny bit bigger than that larger noise source, and read noise contributes very little. That other noise source can be light pollution noise (most often in deep sky imaging), thermal noise (for uncooled cameras), or target shot noise - which is the case in planetary imaging (planets are far brighter than DSOs).
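      The "swamping" effect is easy to see with a couple of lines of Python (the electron counts are illustrative, not measured values):

          # Noise adds in quadrature (square root of sum of squares), so a
          # large noise source "swamps" a small one.
          import math

          def total_noise(*sources):
              return math.sqrt(sum(n * n for n in sources))

          read_noise = 2.0          # e-, per read
          lp_noise = 10.0           # e-, light pollution shot noise in one sub

          print(total_noise(read_noise, lp_noise))   # ~10.2 - barely above 10
          print(total_noise(read_noise, 2.0))        # ~2.83 - read noise matters here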
  14. That is a very interesting solution. How well will the clinometer work if the OTA is pointing up? (In that case it won't start in a horizontal position, and we only need it to measure the angle along one axis.)
  15. There is always a combination of the two: 3D print a simple mounting ring onto which you stick a self adhesive ruler or regularly printed graduations.
  16. Yep. Another way to think about it is - if I have 1 hour, then I need 3 more hours to double the SNR. After that, if I want to double it again - I'll need an additional 12h (to make it 16h in total). The next doubling would need 64h total - so each doubling gets progressively more "expensive" compared to the base time.
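      As a quick Python sketch of that progression (1h base, purely illustrative):

          # Each doubling of SNR costs x4 total integration time.
          base_hours = 1
          for doubling in range(1, 4):
              total = base_hours * 4 ** doubling
              extra = total - base_hours * 4 ** (doubling - 1)
              print(f"x{2 ** doubling} SNR: {total} h total ({extra} h more)")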
  17. Depending on how much of a DIY person with a 3D printer one is - here are some alternatives, so you can write marks on the 3D printed part ...
  18. There are two things to consider when using very short exposures: 1) ability to stack, 2) level of read noise.

      Read noise is the only thing that makes the difference between, say, a stack of 3600 x 1s and one single 3600s exposure. All other noise sources grow with time, and if you add two 10 minute subs - it will be the same as one 20 minute sub - except for read noise. Read noise does not grow with time - but instead grows with the number of subs (each time you read out a sub - it gets one "dose" of read noise). If read noise were 0 - we could use arbitrarily short exposures - as long as we are able to align them for stacking.

      In order to stack subs - we must be able to align them. If you image faint stuff - you won't have any data to align on in a short sub. If all the stars in your frame are faint - at some point, a short enough exposure simply won't capture enough light from those stars to do the alignment properly. Planetary imagers don't have this problem, as planets are very bright - and they regularly use exposures that are a few milliseconds long.

      That is exactly right - quadruple the total imaging time to get a x2 increase in SNR. By the way - a x2 increase in SNR is not insignificant - it can be the difference between being unable to detect an object and detecting it (it is said that an SNR of 5 is needed to positively identify an object, so if you have an SNR of, say, 3 - raising SNR by a factor of x2 will make that 6 - over the detection value).

      By the way - they never become moot unless you can't align frames any more. Stacking more subs - however short they are - increases SNR the same way as stacking long subs. If you stack 100 short subs you will improve SNR x10 over a single short sub. If you stack 100 long subs you will improve SNR by a factor of x10 over a single long sub - it works the same regardless of sub duration.
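      Here is a minimal Python sketch of that read noise bookkeeping (the signal, sky and read noise rates are made-up illustrative values, in electrons, just to show the effect):

          # Stack SNR vs. sub length: only read noise distinguishes many
          # short subs from few long ones.
          import math

          def stack_snr(signal_rate, sky_rate, read_noise, sub_s, total_s):
              n_subs = total_s / sub_s
              signal = signal_rate * sub_s
              # shot noise terms grow with exposure; read noise is per read
              noise = math.sqrt(signal + sky_rate * sub_s + read_noise ** 2)
              return math.sqrt(n_subs) * signal / noise

          print(stack_snr(1.0, 5.0, 2.0, 1, 3600))    # 3600 x 1s   -> ~19
          print(stack_snr(1.0, 5.0, 2.0, 600, 3600))  # 6 x 600s    -> ~24.5
          print(stack_snr(1.0, 5.0, 0.0, 1, 3600))    # zero read noise: also ~24.5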
  19. Maybe the best approach would be to make grooves rather than markers that are raised - that way you can make them narrow without having to worry about accidentally damaging them (a single width extrusion is not exactly strong and can break off easily). You'd need some sort of paint to make the marks visible enough (think miniature painting). If you want to print letters and numbers - then they need to be a bit larger, and there is a new feature called the Arachne slicer engine that is supposed to be much better at fine detail - worth having a look.
  20. I think that you might be seeing a bit more than is actually in the image. That is quite understandable - after all, our brain is hardwired to recognize patterns, and sometimes we are "too good" at it. There is, however, a phenomenon that might explain part of it: when things swirl around in a galaxy disk - they tend to align themselves sort of parallel, so it is no wonder if dust filaments are on average aligned along some direction if you look at part of a galaxy (a spiral one, at least).
  21. Not really sure if that is going to be possible. Even if it is possible - it won't be easy on a regular printer - you would need one capable of printing with two filaments. Marks that are the same color as the part are rather difficult to spot, so you need two color printing. There are tricks to do a filament change on a printer with a single extruder, and in principle - you can "write text" or do similar things on top of a print - so that can be used, but I'm not sure about the precision.

      You say that you have 77mm diameter and you need single degree marks? A 77mm diameter will have around 242mm of circumference. That divided by 360 degrees leaves about 0.67mm per "tick" / "mark". That is very small for a regular FDM printer with a 0.4mm nozzle. Positioning of the print nozzle is not a problem - it can position itself with enough precision - it is the width of the extrusion that is the problem - you would have something like a 0.4mm mark (single extruded line), then a 0.27mm gap, and so on. Too close for comfort, really. You could switch to a 0.25 or even 0.2mm nozzle and then have an extrusion of up to 0.3mm in width, so in principle you could do 1 degree marks on such a circle, but I'm not sure it would be the best looking.

      I think that a much better option is to find someone with a desktop CNC router. That can cut a plastic sheet to the wanted dimension and then use a laser as an engraving tool to put the marks on it.
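      A quick Python check of the tick spacing arithmetic (just the geometry above; single extruded line width is approximated by the nozzle diameter):

          # Tick spacing for 1-degree marks on a 77 mm diameter ring
          import math

          diameter_mm = 77
          circumference = math.pi * diameter_mm        # ~242 mm
          pitch = circumference / 360                  # ~0.67 mm per mark

          for nozzle in (0.4, 0.25, 0.2):
              gap = pitch - nozzle                     # gap between marks
              print(f"{nozzle} mm nozzle: {gap:.2f} mm gap between marks")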
  22. Could you somehow mark the regions of interest in the small image - just some sort of pointer so I can identify what you mean?
  23. Sure you can, if you wish. Coma is related to the base F/ratio of the instrument - not the combined ratio (although when you "zoom in" - you are using a smaller central portion of the field - just keep the planet on the optical axis). For planets - you can completely avoid it if you have good collimation of your scope and you keep the planet centered on the sensor. Use ROI and position the ROI so that it is in the center of the sensor, rather than the top left corner or whatever the default location is (just check it and move it to the center - in SharpCap you do it with the mouse on the ROI preview window on the right side).

      A barlow offers more flexibility than a telecentric lens, as you can dial in the wanted magnification / F/ratio by varying the distance from the barlow element to the sensor. With a telecentric lens - it does not change much, if at all, with distance. A telecentric lens is better if you have a need for it visually (don't want to push the eye relief further out) or if you plan to do some H-alpha solar imaging (for that purpose it is better than a barlow).

      Finally - with respect to F/ratio and planetary imaging. A higher F/ratio won't help you capture more detail, as the telescope is limited by the physics of light, but some people prefer to capture at a higher F/ratio as it allegedly helps them with processing. The drawback of using a higher F/ratio is that you get lower SNR per sub, as light is spread over more surface. This makes it more difficult for stacking software to distinguish good and bad subs (too much noise will look like detail), it makes it more difficult for alignment points to do their thing (bigger alignment error), and finally it reduces the total SNR of the stack for the same recording (you need to capture and stack more good subs when too zoomed in).
  24. The telescope can't resolve past a certain point, and the F/ratio at which pixels critically sample that point is calculated as: f_ratio = 2 * pixel_size / wavelength (https://en.wikipedia.org/wiki/Spatial_cutoff_frequency). Wavelength for visible light is in the 400-700nm range (or 0.4-0.7um if we use micrometers for both wavelength and pixel size). For a 0.4um wavelength the above formula becomes f_ratio = 2 * pixel_size / 0.4 = 5 * pixel_size. That is the absolute maximum that the aperture can resolve, even in the absence of atmosphere.

      In any case - it is a very decent barlow as far as the price/performance metric goes. Coma will depend on the speed of your scope, but the "coma free" field is not very big - only the absolute center is free from coma. (source: https://www.telescope-optics.net/newtonian_off_axis_aberrations.htm) The ASI224 has a 6.2mm diagonal - i.e. the corner is ~3.1mm away from center. At F/6 - even at 2mm away from center you get a tear drop spot diagram that exits the Airy disk circle.
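      That critical F/ratio rule is one line of Python - a sketch covering a few common pixel sizes, using the 0.4um worst case wavelength from above:

          # Critical F-ratio: f_ratio = 2 * pixel_size / wavelength
          # (pixel size and wavelength both in micrometers)
          def critical_f_ratio(pixel_um, wavelength_um=0.4):
              return 2 * pixel_um / wavelength_um

          for pixel in (2.0, 2.9, 3.75):
              print(f"{pixel} um pixels -> F/{critical_f_ratio(pixel):.1f}")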