Everything posted by vlaiv

  1. You significantly lower SNR if you keep your data oversampled. If you really want to have "more pixels to work with" - why don't you simply sample at the proper sampling rate so you don't waste SNR, and then enlarge the image while the data is still linear - before you start to process? The result will be the same - "more pixels" to work with and the same level of detail - except you won't lose SNR by oversampling.
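     That enlarge-while-linear step can be sketched in a few lines of numpy (illustrative numbers only; pixel replication is used as the simplest possible resample - a smoother interpolation would normally be preferred):

     ```python
     import numpy as np

     rng = np.random.default_rng(1)
     # properly sampled, still-linear stack: signal 100, noise std 5
     proper = rng.normal(100, 5, (200, 300))

     # Enlarge x2 by pixel replication - "more pixels to work with"
     # without touching the noise statistics of the data
     enlarged = np.kron(proper, np.ones((2, 2)))

     print(enlarged.shape)                # (400, 600)
     print(proper.std(), enlarged.std())  # identical noise level
     ```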
  2. Stars in corners as far out as you can place them (like 50px from each side); just make sure you place them in more or less the same position in each of the corners.

     3/4 is for when you are imaging and want to achieve good focus all over the frame. You then focus on a single star that is about 3/4 of the way from the center. If your autofocus routine examines all the stars in the sub - you don't need to bother, it will find the best focus position that minimizes star sizes all over the frame; but if you focus on a single star - don't choose one in the center of the sub but rather one ~3/4 out (or 2/3 - no need to be very precise about it).

     Yes, but make sure you have the doughnut centered in your frame - like dead center.

     On a separate note, since you have focuser tilt to adjust as well, and the above procedure does not address that - here is what I would do. Adjust the tilt of the focuser as a first step, prior to collimating the rest of the scope. You'll need some sort of laser that is collimated itself - one that you can put in your focuser. The procedure is simple: the laser spot should hit the dead center of the secondary mirror and remain there as you rotate the laser while it is in the focuser (like rotating an eyepiece placed in the focuser). If the dot moves when you rotate the laser - the laser is not square to the focuser, so you need to fix that - otherwise, if it's hitting the center of the secondary - you are done.
  3. Lunar rate is as good at tracking the Moon as sidereal is for stars. If your stars don't move in a 5 minute exposure - the Moon should sit within the FOV for literally hours. I think your lunar tracking rate is somehow messed up - maybe it is not activated or something like that.
  4. I actually did mine with a camera as I find it much easier. Just use your imaging camera - not the guide camera - so you can move the collimation star to the far corners of the sensor (the guide camera has a small sensor and you won't get good collimation that way, as the measurements won't be as sensitive). I started by reading through this tutorial: https://deepspaceplace.com/gso8rccollimate.php and things simplify down to what I've written above:

     1. Secondary - star in center, concentric defocus
     2. Primary - stars in corners - all equally well in focus

     The tutorial itself calls for a Bahtinov mask - but I never liked it and preferred FWHM measurements instead.
  5. In order to judge collimation of the secondary - place a star in the center of the FOV and defocus - but only slightly. With very large defocus like in your image - you won't be able to spot the small difference in thickness between the sides of the doughnut.

     I found one of my old images of a defocused star - it was not for collimation but rather for telescope testing, so I had to defocus it just the right amount. This is still too much defocus. You want the very small defocus shown in the image above, and the image of the defocused star should look something like this: you should clearly see the outer edge and a few rings. In your image above - there are plenty of rings and they have fused into a solid surface - much like the 30 wave defocus image on the diagram. My image is about 15 or so waves - so it resembles the 14 wave example. Note that the diagram is of a Newtonian, so the secondary is much smaller - but we don't care about that, only about the rings. In my image you can see some ring separation - same as in the 14 wave example - and you want to go down to just 5-6 waves.

     Do be careful - seeing will start to play havoc at these scales. The more you defocus, the less seeing effects will be seen on the defocused star profile - but here (as shown on the diagram) the outer edge is not a perfect circle and will be dancing a lot. Also notice on the diagram that the 30 wave defocus looks round and concentric - but look at 5/7 waves: it is obvious that things are not concentric anymore; the bottom side of the doughnut is thinner than the top side (actually caused by thermals in this image). This is all corrected by adjusting the secondary - while the star is in the center.

     You adjust the primary by making sure that a focused star stays focused in each of the 4 corners. Here you can use FWHM / HFR tools to obtain the same number after you slew the scope so that the test star hits each corner. Do a couple of rounds - adjust secondary, adjust primary, primary, secondary, ... as when you adjust one - the other needs an additional tweak as well.
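     For the corner comparison, a rough FWHM estimate is easy to script yourself. A minimal sketch, assuming a background-subtracted 1-D cut through the star (real tools fit a 2-D Gaussian or compute HFR, but the principle is the same):

     ```python
     import numpy as np

     def fwhm_1d(profile):
         """Estimate FWHM of a background-subtracted 1-D star profile
         by linearly interpolating the half-maximum crossings."""
         p = np.asarray(profile, dtype=float)
         half = p.max() / 2.0
         above = np.where(p >= half)[0]
         left, right = above[0], above[-1]
         # fractional crossing position on the left flank
         if left > 0:
             x_left = (left - 1) + (half - p[left - 1]) / (p[left] - p[left - 1])
         else:
             x_left = float(left)
         # fractional crossing position on the right flank
         if right < len(p) - 1:
             x_right = right + (p[right] - half) / (p[right] - p[right + 1])
         else:
             x_right = float(right)
         return x_right - x_left
     ```

     Measure the same test star in all four corners; the primary is adjusted correctly when the numbers come out (near) equal.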
  6. These can be handled with DSS hot pixel filtering as well as using sigma clip stacking. In either case you need to restack your data.
  7. Well, here we go: this should slide onto the follow focus ring and provide an outer GT2 profile (I'm not sure if it will mesh properly, as I modeled the follow focus gear teeth as standard involute gears with a 20 degree pressure angle).
  8. If collimation is good - it will stay good when you add the reducer. If not - then it will look worse. This is because the reducer "squeezes" more of the physical focal plane onto the sensor surface, and the further away from center you are - the more noticeable the issues (even if collimated perfectly - edge of field aberrations will get worse with the reducer).

     Do bear in mind that you already have an APS-C sized sensor. Using a 0.67x reducer is like putting a 42mm diagonal sensor onto the scope. Do you really expect the field to be good all the way out to 42mm? According to TS - the scope offers correction only up to an APS-C sized sensor - so in reality up to 27-28mm diameter. What you'll end up doing with the reducer is simply cropping away anything past those 27-28mm, and what is the point of using the reducer when you'll end up using the same field anyway?

     You'll say - but I'll have a "faster" system, and I'll say - speed of the system is resolution at aperture. Both with the reducer and without it, you'll need to fix your working resolution with binning as you will be oversampling, and you'll have the same usable FOV. You'll end up with the same resolution, same FOV and same speed. In that case - I see no point in using the reducer. It makes sense only if you use a smaller sized camera (like 4/3" or 1") to utilize more of the FOV.

     A dedicated flattener / reducer is another matter. Something like this: https://www.teleskop-express.de/shop/product_info.php/info/p11145_TS-Optics-RC-0-8x-Corrector-Reducer-for-Ritchey-Chretien---big-sensors---M68-connection.html although not cheap, will allow you to use even full frame sensors, and most certainly an APS-C sized one.
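     The "42mm" figure is just the corrected circle divided by the reduction factor - a quick check (the 28mm corrected circle is my reading of the TS spec, roughly):

     ```python
     corrected_circle_mm = 28.0  # field the scope corrects, per TS spec (roughly)
     reduction = 0.67

     # A sensor behind a 0.67x reducer sees the same sky patch that a sensor
     # larger by 1/0.67 would see at the native focal length:
     equivalent_diagonal = corrected_circle_mm / reduction
     print(equivalent_diagonal)  # ~41.8 mm - effectively a "42 mm" sensor
     ```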
  9. @Jay6879 I see that you are confused. Here is a simple equation:

     F/ratio = 2 * pixel_size / wavelength

     where wavelength and pixel size are in the same units - say micrometers. For the wavelength you should put 0.5um (which is the same as 500nm - the wavelength of green light), unless you have reason to use another value. If you do that - the equation simplifies to F/ratio = pixel_size * 4
  10. I tried doing that and it turns out that the pitch is 2.5mm or thereabouts and there is a T2.5 belt system, but I'm not sure it will fit as it should. 2GT or GT2 (whichever is the proper naming) seems to be more popular, and I want a bit of practice with 3D modeling.
  11. Spatial cutoff frequency: https://en.wikipedia.org/wiki/Spatial_cutoff_frequency + Nyquist.

      If you rearrange things, put two pixels per cycle as per Nyquist and plug in, say, 500nm (a good wavelength to represent the visible spectrum) - you get that F/ratio is pixel size * 4 (the actual formula is F/ratio = 2 * pixel_size / 0.5um - because 500nm = 0.5um and you need pixel size and wavelength in the same units, so 2 * pixel_size / 0.5 = 4 * pixel_size).

      If you want to solve for a particular wavelength - like when using a Ha narrowband filter or an IR pass filter - then you would use the actual wavelength of interest. For Ha that would be F/ratio = 2 * pixel_size / 0.656 = pixel_size * 3.05 (because Ha is 656nm).
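      The formula as a small function, with the 0.5um green-light default (values below just illustrate the two cases from the text):

      ```python
      def critical_f_ratio(pixel_size_um, wavelength_um=0.5):
          """F/ratio at which two pixels span one cycle of the optics'
          spatial cutoff frequency (Nyquist criterion)."""
          return 2.0 * pixel_size_um / wavelength_um

      print(critical_f_ratio(2.9))         # green: 2 * 2.9 / 0.5 = 11.6
      print(critical_f_ratio(2.9, 0.656))  # Ha: ~8.84, i.e. pixel * 3.05
      ```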
  12. They look small in the context of the full FOV - but do crop and keep them at 2"/px and they won't look small any more.
  13. @Annehouw It is indeed a very nice image, but I have to say I'm rather confused by the detail in the description. It mentions an IMX290 based sensor, which has 1920x1080 resolution, while the image itself is 3373x1658 - so I'm guessing it is a mosaic? Furthermore - a 300mm F/4 scope with a x3 barlow would give 3600mm of FL, and with 2.9um pixel size that is, as you mentioned, 0.166"/px. Yet the image is 0.093"/px when measured on the image itself. This roughly corresponds to the ratio of the widths of the sensor and the image - so it's not a mosaic after all? 3373 / 1920 = ~1.76 and 0.166 / 0.093 = ~1.78.

      Why would the image be enlarged by x1.75 when F/12 is critical sampling for 2.9um pixel size in green? There is no sense in doing that. Maybe we should examine whether the image indeed resolves things. Let's concentrate on that shell bit I outlined. In the image it looks like a single prong with a small bit to the right. How about a fully resolved image at this resolution? A world of difference...

      Now, don't get me wrong - it is an excellent image, and far better than can be obtained with long exposure - but it is a far cry from resolving at the sampling rate that telescope aperture alone would suggest. In my view - this is the scale at which the image starts to be close to fully resolved: left is the original image and right is the reference. Even at this scale - the reference looks a bit sharper than the original, and this is about 0.7"/px.

      Extraordinary resolution by the way - better than anything possible with long exposure and amateur equipment - but again, nowhere near what a 12" scope can resolve without the influence of the atmosphere.
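      The two ratios above, spelled out (numbers taken from the description and from measuring the posted image):

      ```python
      sensor_width_px = 1920   # IMX290 native width
      image_width_px = 3373    # width of the posted image
      capture_scale = 0.166    # "/px from 3600 mm FL and 2.9 um pixels
      measured_scale = 0.093   # "/px measured on the posted image

      print(image_width_px / sensor_width_px)  # ~1.76
      print(capture_scale / measured_scale)    # ~1.78 - same factor
      ```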
  14. Very nice images! You were right to stack barlows (although just using the x3 at increased distance would have done the trick). The NexImage Burst has 3.75um pixel size and the ideal F/ratio is F/15. The Starblast is an F/4 telescope and needs at least x4 amplification to get close to F/15. Maybe for planets, but the Moon does not have much color, so you'll be fine with mono. You can also experiment with filters if you have any. Narrowband filters suppress seeing effects and are beneficial for lunar. The link I posted is to a thread here on SGL where I originally posted the image with all the capture detail. It opens fine for me - both the SGL thread and the image posted in that thread. Not sure why you can't see it. Here is the image uploaded again: do right click / open in new window so you can zoom in fully to see the detail.
  15. I've downloaded and installed the FreeCAD software in "preparation" for a 3D printer purchase (not more than a few months now, just a few more installments and I'll be debt free). I sort of learned the basics last night and want to do something useful as practice. Here is what I have in mind - bits for an autofocus unit for my Samyang 85mm T1.5 lens. So far I've gathered that GT2 standard belts and pulleys are the easiest to source for this purpose. From what I was able to find online - the follow focus ring is 0.8 module with 102 teeth. Samyang specs say the focus ring is 81.6mm, and 81.6 / 0.8 = 102. I want to make a sort of "sleeve" that will fit onto the focus ring and have a proper GT2 profile on the outside, but the problem is - I have no idea what shape those teeth have. The best I can do is this image: it looks like an involute profile? What about the pressure angle? How do I get this information? Any pointers are welcome.
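      The basic geometry of such a sleeve is easy to pin down before modeling. A sketch of the numbers (GT2's 2mm pitch is standard; the 140-tooth outer profile is purely a hypothetical choice for illustration):

      ```python
      import math

      # Follow-focus ring side: module 0.8, 102 teeth (from the lens specs)
      module = 0.8
      teeth = 102
      pitch_diameter = module * teeth    # 81.6 mm, matches Samyang's figure
      circular_pitch = module * math.pi  # ~2.51 mm between teeth

      # GT2 belt side: 2 mm tooth pitch, so the equivalent module is 2/pi
      gt2_module = 2.0 / math.pi         # ~0.637
      # e.g. a 140-tooth outer GT2 profile (hypothetical count) gives
      outer_pitch_diameter = 140 * gt2_module  # ~89.1 mm
      # leaving roughly (89.1 - 81.6) / 2 ~ 3.7 mm of wall for the sleeve
      ```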
  16. That really depends on pixel size, but most likely - you won't need a barlow. If you image with the Zenithstar, do you possibly guide? And if so, with what guide camera? A guide camera is a much better option for lunar / planetary imaging. The C90 is a bit shy of being an F/14 scope (F/13.89) and that is very well suited to ~3.45um pixel size. So any camera with, say, 3.3 to 3.75 micrometer pixels can be used without a barlow on the C90. If your DSLR has for example 4.3um pixels - it would need F/17.2 and that means a x1.23 barlow - not really a feasible thing (you can get a x1.5 barlow element and try reducing the sensor / barlow distance to get to x1.23). In any case - look up the lucky imaging technique. This lunar image was taken with a 100mm Maksutov telescope (so just 10mm larger aperture than the C90, but practically the same focal length):
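      The barlow arithmetic above in one helper (using the F/ratio = 2 * pixel / wavelength rule with the green 0.5um default; numbers are the ones from the text):

      ```python
      def required_barlow(pixel_um, native_f_ratio, wavelength_um=0.5):
          """Magnification needed to reach critical sampling,
          where the target F/ratio is 2 * pixel_size / wavelength."""
          target_f = 2.0 * pixel_um / wavelength_um
          return target_f / native_f_ratio

      print(required_barlow(3.45, 13.89))  # ~0.99: C90 already well matched
      print(required_barlow(4.3, 13.89))   # ~1.24: the x1.23 barlow case
      ```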
  17. Many websites optimize content that you post. It might not seem like much - but imagine millions of people viewing hundreds or thousands of posts every day - that is a massive amount of data to store and transfer, and every bit of saving counts. The best way to tackle this is to have your own website / online file server and then post only links to your own media. Alternatively - optimize the images you post on other sites yourself. Hopefully - the engine that deals with media on a particular site will conclude that it is already optimized and won't waste resources on it (although there is no guarantee of that). Mind you - most people have very small phone screens, and content served to those devices does not need to be high quality, so websites optimize such images further for both speed and bandwidth savings (many people don't have flat rate access on their mobile devices but rather metered connections, and appreciate optimized content).
  18. That collimation needs a bit of tweaking to get it fully right. These are top left stars: not very round. These are center stars - much better. While an RC has a rather flat field - it is not fully flat, and the best thing to do is focus on stars that are about 3/4 of the way from the center (2/3 to 3/4). If you get these as small as possible, stars in the other parts of the frame will also be good (provided collimation is ok). The bottom left corner shows some defocus (that is the primary tilted). And of course that top right star has double spikes. With a focal length this big - you really need to learn to bin your data in the processing stage. Take your stack, while still linear, and do an integer resample to 1/3 of the size in PI (if that is what you are using) - that is bin 3x3.
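      That integer resample is also trivial to do outside PI. A numpy sketch of 3x3 average binning (which is, as far as I know, what an average-mode integer resample amounts to):

      ```python
      import numpy as np

      def integer_bin(img, factor=3):
          """Software bin: average factor x factor blocks (crop any remainder)."""
          h = (img.shape[0] // factor) * factor
          w = (img.shape[1] // factor) * factor
          return (img[:h, :w]
                  .reshape(h // factor, factor, w // factor, factor)
                  .mean(axis=(1, 3)))

      # Illustrative stack: flat signal 100 with noise std 10
      stack = np.random.default_rng(0).normal(100, 10, (3000, 4500))
      binned = integer_bin(stack)  # 1000 x 1500, noise std reduced ~3x
      ```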
  19. The C90 will produce much better lunar images than the Zenithstar 61. Due to the way planetary / lunar images are captured (lucky imaging technique) and the way they are processed - even if the C90 is optically lower quality than the Z61 - it will produce better images (unless it's really a lemon or out of collimation). The level of detail resolved is proportional to the size of the aperture. 90mm is x1.5 of 60mm and there is no way around that. Just make sure you follow a nice lunar imaging tutorial (look up lucky imaging on youtube, or ask questions here) and you should be able to produce some nice images with the C90 and a guide camera. (Also check out the lunar imaging section of SGL and see what people do with small scopes.)
  20. Quite right - an even shorter time is needed for the visible part of the spectrum. 10ms might be enough for the 900nm of the I filter, but it's not going to be enough for the 500nm of the V filter (or other visible filters for that matter). Care to share some of your high resolution work? In the second paper you linked - they managed to get 0.26" FWHM, which equates to sampling at something like 0.1625"/px on a 2.5 meter telescope that has a diffraction limited sampling rate of ~0.0372"/px at 900nm (the long end of the I band) - about a x4.38 higher sampling rate than achieved. They themselves conclude that the effective Strehl of such a system is in the range 0.15-0.2 and that they image an area of about 20" x 20". That is with a telescope that gathers x100 more light than the 10" amateurs consider a large telescope.
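      The ~0.0372"/px and x4.38 figures come straight from Nyquist sampling of the diffraction cutoff:

      ```python
      def critical_scale_arcsec(wavelength_m, aperture_m):
          """Nyquist sampling for the diffraction cutoff of a telescope:
          two pixels per cutoff cycle, i.e. lambda / (2 D), in arcseconds."""
          return (wavelength_m / (2.0 * aperture_m)) * 206265.0

      scale = critical_scale_arcsec(900e-9, 2.5)  # ~0.0371 "/px at 900 nm
      oversampling = 0.1625 / scale               # ~4.4x coarser was achieved
      print(scale, oversampling)
      ```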
  21. Well - almost. I don't think we can ever beat the seeing in lucky DSO imaging the way we do with lucky planetary imaging. There are two fundamental problems with lucky DSO imaging.

      First is exposure time. To make stacking feasible with lucky DSO imaging - we need to use exposures on the order of a second, maybe one order of magnitude less than that - so 0.1s. That is still x20-x200 longer than the exposures needed to freeze the seeing - the ones used in planetary imaging - which are on the order of 5ms.

      Second is the size of the FOV and alignment issues. Modern lucky planetary imaging software can choose to stack only part of the frame - the part that is sharp / undistorted enough. The larger the FOV - the more variance in seeing there will be across it. If you look at adaptive optics systems - they tend to operate on very small patches of the sky, as the wavefront deformation that adaptive optics corrects is coherent only over a small patch of the sky - like a few arc seconds across. With lucky DSO imaging - we only have bright stars to align against. The shorter the exposure - the fewer stars with sufficient SNR to provide good alignment. Even if we have enough alignment stars - we can only keep / reject subs based on the quality of their star images. That tells us nothing about the wavefront deformation between those stars, where the object of interest lies.

      Bottom line - with planetary lucky imaging and very short exposures we can almost fully beat the seeing and exploit the full resolving power of the optics. With lucky DSO imaging - we simply lessen the impact of the atmosphere, but we can't really beat it in the same way. The effective resolving power of the approach lies somewhere between long exposure and lucky planetary imaging. We can beat the seeing for, say, star systems - like double/triple stars that we want to split down to the resolving power of the telescope. This is because these provide enough signal, much like in planetary, so we can keep exposures even shorter, and because they occupy a small enough patch of the sky that we can be pretty confident the PSF we see on single stars is the same PSF over the whole imaging field, and that we can reject / keep frames based on that alone.
  22. Try to see if the stars could match the image above. Inspect their separation in arc seconds to see if they can be resolved in your image, and also inspect their magnitudes. If one of the three is significantly less bright - the system will present as a double star (especially if they are close and the weak one is the middle one). Maybe try to resolve the system yourself? Try the lucky imaging approach - sample at the critical sampling rate with a narrowband filter and very fast FPS. Stack in planetary software like AS!3 and use wavelets (carefully) to sharpen things and see if you can split the system.
  23. Oversampling is never rewarding. What you are really saying is that lucky imaging reduces the impact of the atmosphere and tracking errors, and that therefore higher sampling rates will be optimal in comparison with long exposure.
  24. I agree with this and would like to add the following: while we don't need very long focal length scopes - there is nothing wrong with using them, as long as you pay attention to a few things. I often hear that it is much harder to image or guide at, say, 2m of focal length. It is not. Mount error and achieved resolution really don't care about the focal length you are using (the size of the telescope and its length will have some impact, but not focal length as such). As long as you tie yourself to sampling rate rather than focal length - you will be fine. There is no difference when imaging at 1.5"/px - whether you are working with 500mm, 1000mm or 2m of focal length. In the end - long focal length scopes allow for larger aperture, and hence more light gathering power, while still being optically "easy" thanks to their slower F/ratio.
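      The sampling-rate-not-focal-length point in numbers - the standard plate scale formula, with (effective, possibly binned) pixel sizes chosen here purely to illustrate hitting 1.5"/px at three different focal lengths:

      ```python
      def arcsec_per_px(pixel_um, focal_length_mm):
          """Plate scale: 206.265 * pixel size [um] / focal length [mm]."""
          return 206.265 * pixel_um / focal_length_mm

      # Same 1.5 "/px at any focal length, by matching the pixel size:
      print(arcsec_per_px(3.64, 500))    # ~1.5
      print(arcsec_per_px(7.27, 1000))   # ~1.5
      print(arcsec_per_px(14.55, 2000))  # ~1.5
      ```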
  25. Relatively new item. There is also a triplet version: https://www.teleskop-express.de/shop/product_info.php/info/p14015_TS-Optics-APO-Refractor-96-576-mm---FCD100-Triplet-Lens-from-Japan.html as well as 85mm and 106mm versions (both triplets too, and based on FCD100). I think it is an interesting scope, both for visual and for EEVA with, say, this reducer: https://www.teleskop-express.de/shop/product_info.php/info/p7943_Long-Perng-2--0-6x-Reducer-and-Corrector-for-APO-Refractor-Telescopes.html - an F/3.6 system at 345mm of FL. For someone with an ASI485 that would make it 1.75"/px at F/3.6 - what more can a person ask for in terms of EEVA?