Everything posted by vlaiv

  1. I'd say the above is the correct pronunciation (watch the clip and listen to their names).
  2. Hi and welcome to SGL. Good advice has already been given. I would say that the best visual planetary telescope that we could still call portable is this one: https://www.firstlightoptics.com/dobsonians/skywatcher-skyliner-200p-dobsonian.html It would be a good idea to either be more specific in that "Want to take pictures of the moon and planets" part or drop the idea of photography altogether. It is a very different and demanding field, and if you are serious about it - a good planetary camera and laptop alone will eat your budget. In any case - it would be a good idea to inform yourself about what lunar and planetary photography is all about - shooting techniques, gear needed and processing. Another vague point is "portable". I would call an 8" dobsonian a portable telescope in more ways than one - I transported it to a dark site in my car with ease. I carry it around my back yard over small distances fully assembled. It takes only two trips to take it from my basement out into my back yard - one carrying the dobsonian mount and the other carrying the OTA. Neither requires great effort. On the other hand - there are people who really mean a grab'n'go type setup that you can just pick up assembled in one hand and carry longer distances with ease, or pack everything down into a smallish bag that fits in the car boot with room to spare. What sort of portable do you have in mind?
  3. In the end I want to show how to use Zernike polynomials to generate different levels of aberration in the PSF. First check out https://en.wikipedia.org/wiki/Zernike_polynomials These are expressions defined on the unit circle and they form a basis for describing any surface over it. They are interesting to us because they are used to describe the wavefront aberrations that telescope optics produce. Let's say that we just want to see what sort of pattern we will get if we defocus our aperture. Defocus is given by the Zernike defocus term Z(2,0) = 2*rho^2 - 1 (up to a normalization constant). It depends solely on rho (distance to center) and not on angle. We start by making our aperture - the intensity part: Next we need to create the phase part: You should really normalize rho and then decide how much defocus you want in waves, but we are going to just go with the above formula and see how much defocus we get (change the number multiplying d/40 in the above formula to change the level of defocus). Now we have intensity and phase. We need the real and imaginary parts. Real part = magnitude * cos(phase) Imaginary part = magnitude * sin(phase) For this we can use image multiplication and the sine and cosine math functions on our magnitude and phase images. And we get these as the result: Left is the real part, right is the imaginary part. Now we again use FFTJ, but this time we use both real and imaginary parts: And if we apply gamma for sRGB displays, we will get the image that we would see visually at the telescope: a defocused star forming two rings ...
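To make the recipe above concrete, here is a minimal numpy sketch of the same workflow (aperture magnitude plus a defocus phase, combined into real and imaginary parts, then Fourier transformed). It is an illustration rather than the exact ImageJ/FFTJ steps; the grid size, aperture radius and half-wave defocus amount are arbitrary choices of mine.

```python
import numpy as np

N = 512                                   # image size (power of two for fast FFT)
radius = 40                               # aperture radius in pixels
y, x = np.indices((N, N)) - N / 2
rho = np.hypot(x, y) / radius             # normalized radial coordinate (1 at aperture edge)

magnitude = (rho <= 1.0).astype(float)    # clear circular aperture: 1 inside, 0 outside

defocus_waves = 0.5                       # amount of defocus in waves (change to taste)
defocus = 2 * rho**2 - 1                  # Zernike defocus term (up to normalization)
phase = 2 * np.pi * defocus_waves * defocus

# real part = magnitude * cos(phase), imaginary part = magnitude * sin(phase)
pupil = magnitude * np.exp(1j * phase)

# the PSF is the squared modulus of the Fourier transform of the pupil
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.max()

display = psf ** (1 / 2.2)                # rough sRGB-style gamma for display
```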
  4. https://imagej.net/Fiji and you should include the FFTJ plugin https://sites.google.com/site/piotrwendykier/software/parallelfftj You can also look at Aberrator 3.0 - old software that does this pretty much automatically for you. http://aberrator.astronomy.net/ I have no idea about the Newtonian calculator that you are mentioning. I have not seen such a thing.
  5. Here is an example of how to make the Airy pattern and MTF of an obstructed aperture. First make the aperture pattern in the image: Here we have a 25% central obstruction. Do FFTJ and use the power spectrum to get the Airy pattern: You can see that some of the light has moved into the rings (it is not linear, some rings get more light than others). Now do another round of FFTJ on this image and use the frequency spectrum: This time the MTF looks a bit different, and in order to really see what its shape is - we need to plot its profile: We've got that "obstruction belly" in our graph! If we compare it to online graphs for central obstruction: We see that ours corresponds perfectly to the 25% central obstruction one (as we used a 25% central obstruction by radius).
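For reference, here is a rough numpy equivalent of the FFTJ steps just described, assuming a 25% (by radius) central obstruction; the grid size and aperture radius are arbitrary illustration values, not anything prescribed.

```python
import numpy as np

N = 512
outer = 40                                 # aperture radius in pixels
inner = 0.25 * outer                       # 25% central obstruction by radius
y, x = np.indices((N, N)) - N / 2
r = np.hypot(x, y)

aperture = ((r >= inner) & (r <= outer)).astype(float)

# "power spectrum" step: the Airy pattern (PSF) of the obstructed aperture
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2

# "frequency spectrum" step: the MTF is the magnitude of the FFT of the PSF
mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
mtf /= mtf.max()

# radial profile through the centre - the MTF graph with its "obstruction belly"
profile = mtf[N // 2, N // 2:]
```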
  6. Here is a simple version which we will elaborate on. The Airy disk, or more precisely in this context - the PSF of the optical system, is related to the Modulation Transfer Function - MTF - which in turn explains everything you need to know about contrast. The MTF is a graph of frequency attenuation - a bit like old audio equalizers - where you can set the volume at different frequencies - low, medium, high. It looks like this for a perfect clear aperture: What does this mean? The vertical axis (that goes from 0 to 1 - don't mind the actual numbers above) represents the attenuation factor. When the line is high, close to 1, that frequency remains almost as is. When the line is low - that frequency is attenuated. What actually happens is that each particular frequency is multiplied by the height of the above graph at that point. The horizontal axis is spatial frequency in cycles per unit of length. At the focal plane we are talking about cycles per mm (or µm), in the image we are talking about cycles per pixel. But what are those frequencies that I'm talking about? This is crucial for understanding resolution and contrast. An image is a 2d function of light intensity over a surface (or can be viewed that way). Every function can be represented as an infinite sum of sine and cosine waves via Fourier analysis of that function. This image illustrates the principle well: We have a black line which is a square impulse function and we think - it is square, there is no way it can be composed of sine/cosine waves, but here each successive step is shown - we add finer and finer sine/cosine functions to the base wave and we get a better and better approximation of the black function. Here we added only 5-6 terms and it's already looking pretty good - imagine having 1000 or more terms - we would not be able to tell the difference visually (there is of course still a mathematical residual unless we sum all the way up to infinity). Yes, but what does this have to do with contrast and resolution? Well, think about what contrast is - it is the difference between peak and valley intensity in our image - the larger this intensity difference, the higher the contrast. Multiplying a sine function by a value less than one reduces the contrast of that sine function: Red is the graph of 2*sin(x), black is the graph of sin(x) and blue is the graph of 0.5*sin(x). Which one has the most contrast and which one has the least? If you have trouble seeing it - imagine that 0 is gray, large negative values are closer to black and large positive values are closer to white. Back to resolution - we now see that the MTF operates across different frequencies - it attenuates low frequencies only slightly and high frequencies very much - it even cuts off all frequencies above a certain point (this is the resolution limit of the telescope - everything above that point will have 0 contrast - it will be just "a single color" - "grey without any features"). In order to understand it better - here is an example of how different frequencies are affected differently by the PSF/MTF (by the way - convolution with the PSF is the same as multiplication with the MTF, and the MTF is the magnitude of the Fourier transform of the PSF - if you want the mathematical details of the process). Here is an image that I created from simple sine functions: Each row has a different frequency/wavelength. Rows at the top have the lowest frequency / longest wavelength and rows at the bottom have the highest frequencies / shortest wavelengths - smallest distances between the max and min values of the sine function. I'm now going to convolve the above image with the Airy PSF used to produce the MTF that I posted at the beginning.
Look what happens: Lower frequencies keep their contrast, but higher frequencies gradually lose their contrast - the higher the frequency, the more contrast loss there is - until they fade to grey / lose all contrast. We can examine the impact of the Airy disk by doing the same on any image - convolving the image with the Airy disk. How do we generate Airy patterns for these simulations? Take ImageJ/Fiji and download the FFTJ package. Open a new 32bit image - 512x512 pixels - and run the following math macro on it: This creates our aperture. For an unaberrated / clear aperture - just use this. If you want to simulate the effects of a central obstruction - make an appropriate macro, for example: if (d < 20) v=0; else if (d < 40) v=1; else v=0;. If you want to add aberrations - it is similar but a bit more involved - you need to make a phase image as well and then decompose the intensity (the 0 or 1 that we made) together with the phase diagram into real and imaginary parts and do the FFT on those. We can do that later to produce coma, for example. The next step is to launch the FFTJ plugin and produce the power spectrum of this image: Use the above as the real part and leave the imaginary part as none for this (we can use both real and imaginary parts for aberrations - I will show that later). Then choose these options when you get the calculation result: This will produce a nice Airy pattern: Btw, the size of the Airy disk here is inversely proportional to the diameter of the aperture, and both depend on the size of the image (which you should keep at powers of two - 256, 512, 1024, 2048, ... - for fast and precise calculations and no scaling). Now, if you want the MTF of this - you do another round of FFT on this image, but this time select frequency spectrum instead of power spectrum. Make sure you select the right image: Select frequency spectrum: And you get this: Which is a 2d graph of our MTF. Select the line tool and Plot profile to get the actual graph:
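If you prefer to see the contrast loss numerically rather than through ImageJ, here is a small numpy sketch along the same lines: bands of sine gratings of increasing frequency are convolved with an Airy PSF and their contrast (max minus min) drops exactly as the MTF predicts. The band frequencies and sizes are illustrative values only.

```python
import numpy as np

N = 256
y, x = np.indices((N, N)) - N / 2

# Airy PSF of a clear circular aperture (radius in pixels is arbitrary)
aperture = (np.hypot(x, y) <= 16).astype(float)
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2
psf /= psf.sum()

# test image: four horizontal bands, each a sine grating of a different frequency
img = np.zeros((N, N))
cols = np.arange(N)
bands = [2, 8, 32, 64]                              # cycles across the image width
for i, cycles in enumerate(bands):
    rows = slice(i * 64, (i + 1) * 64)
    img[rows, :] = 0.5 + 0.5 * np.sin(2 * np.pi * cycles * cols / N)

# convolution with the PSF == multiplication by the MTF in the Fourier domain
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

# contrast per band before and after blurring - the high frequencies fade to grey
for i, cycles in enumerate(bands):
    rows = slice(i * 64, (i + 1) * 64)
    print(cycles, np.ptp(img[rows]), np.ptp(blurred[rows]))
```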
  7. Don't worry about the camera being faulty - this is actually a feature of the sensor used in this camera. The only drawback of shooting at high resolution is the amount of data you have to store prior to calibration, but once you calibrate your data - you can easily bin your subs x2, increasing the effective pixel size and reducing the amount of data stored.
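As an aside, software binning after calibration is nearly a one-liner in numpy; this is just a generic sketch of the idea (average each 2x2 block), not tied to any particular capture or processing software.

```python
import numpy as np

def bin2x2(sub: np.ndarray) -> np.ndarray:
    """Average 2x2 pixel blocks - doubles the effective pixel size,
    quarters the amount of data. Height and width must be even."""
    h, w = sub.shape
    return sub.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# usage (hypothetical calibrated frame):
# binned = bin2x2(calibrated_sub)
```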
  8. May I point out that giving a single number as contrast simply does not make sense in the context of contrast/resolution of a telescope? I can walk you through a simple method of determining the telescope MTF - which is a graph of contrast vs spatial frequency in the Fourier domain - without the need to actually know how Fourier transforms work.
  9. I know that your mount is certainly capable of ~0.2-0.3" RMS values - but let's face it, only a couple of mounts available to the amateur community can do that (Mesu, 10micron, etc ...), and even then it requires very good seeing for an 80mm scope to be able to use resolutions of ~1.6"/px. It is far easier to achieve those resolutions with a 6" scope, as the Airy disk size has a much smaller impact on total FWHM with a larger scope.
  10. I know, however, that the calculator is simply flawed. You can check this yourself, since you have an 80mm F/4.4 scope. Take any of the subs you have taken with that scope and measure FWHM in arc seconds. You will find that most of them are above 3" FWHM. The above calculator assumes that seeing is all there is to star FWHM - but in reality it is only one of a couple of components that contribute to the resulting FWHM of stars. That is the first problem with said calculator. The second problem is the conversion from FWHM to sampling rate. There is a rather simple conversion between the two - a factor of x1.6. Divide the FWHM of your star by 1.6 and that gives you the ideal sampling rate for that level of blur. Let's say that you really have 2" seeing and an excellent mount capable of 0.3" total RMS error; with an 80mm scope your resulting FWHM will be about 2.54" and hence the ideal sampling rate will be ~1.58"/px. Even in this best case scenario (mind you, 0.3" total RMS is premium mount territory - not something you can do with an HEQ5 or EQ6, and more expensive mounts will often struggle), the optimum sampling rate is far away from the lower bound of 0.67"/px suggested by the above tool. A more realistic scenario of 2" seeing, 1" total RMS guide error and an 80mm scope will give you ~3.4" FWHM, or a 2.125"/px sampling rate.
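Here is a small sketch of the arithmetic behind those numbers, under the usual assumption that seeing FWHM, guiding error and aperture blur add in quadrature; the 510 nm wavelength for the Airy term is my assumption, and the exact figures shift a little with the wavelength you pick.

```python
import math

def expected_fwhm(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=510):
    """Estimate total star FWHM in arc seconds (seeing + guiding + aperture blur)."""
    guide_fwhm = 2.355 * guide_rms                              # Gaussian RMS -> FWHM
    airy_fwhm = 1.02 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3) * 206265
    return math.sqrt(seeing_fwhm**2 + guide_fwhm**2 + airy_fwhm**2)

best_case = expected_fwhm(2.0, 0.3, 80)       # ~2.5" FWHM
realistic = expected_fwhm(2.0, 1.0, 80)       # ~3.4" FWHM
print(best_case / 1.6, realistic / 1.6)       # sampling rates ~1.6"/px and ~2.1"/px
```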
  11. You do know that it transfers via SGL thread participation as well? As soon as you participate in a thread where someone shows off their new gear - at least a couple of cloudy days is added to your tab
  12. I think that 2.3µm pixel size is going to be useful only in these two cases: - planetary imaging with ROI - a single small pixel should have low read noise - about half that of the "whole" pixel (as the whole pixel is just a sum of 4 small pixels) - the possibility of doing a x1.5 bin relative to the regular pixel size without pixel-to-pixel correlation. When you download the 2.3µm image - you can do a 3x3 bin and that will effectively give you ~7µm pixel size - not something you can get from 4.63µm with integer binning. With a short focus scope or camera lens - that pixel size will oversample a lot. Let's say you want to do a higher resolution image at 1.5"/px. With 2.3µm pixel size, you need 316mm of focal length. Imagine that you have a rather fast small telescope at F/4.5 with 316mm of focal length - how large an aperture is that? That is 70mm of aperture. The calculated size of the Airy disk for 70mm of aperture is 3.67", so you see, it does not really make sense to go with 1.5"/px as the Airy disk size itself is a guarantee that you won't resolve enough detail for 1.5"/px. You'll then say - ok, let's go with a more reasonable 2"/px, but then the needed focal length will be ~237mm, and even if you have an F/4 scope - you'll have a small aperture of about 60mm or less at that focal length. Guess what? The Airy disk size now jumps to 4.28" - again too large for the selected sampling rate ... In any case, you are oversampling. With lenses, things are just worse than with diffraction limited scopes, as lenses are not diffraction limited and the star blur is larger than the Airy disk because of that (often much larger) - lenses often require pixel sizes of around 10µm or larger in order to get properly sampled sharp images.
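The same arithmetic in a short sketch - the focal length needed for a given pixel size and sampling rate, and the Airy disk diameter for the resulting aperture; 510 nm is my assumed wavelength, chosen because it roughly reproduces the figures quoted above.

```python
def focal_length_mm(pixel_um, sampling_arcsec_per_px):
    """Focal length that gives the requested sampling rate for a given pixel size."""
    return 206.265 * pixel_um / sampling_arcsec_per_px

def airy_diameter_arcsec(aperture_mm, wavelength_nm=510):
    """Airy disk diameter (first minimum to first minimum) in arc seconds."""
    return 2 * 1.22 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3) * 206265

fl = focal_length_mm(2.3, 1.5)     # ~316 mm for 1.5"/px with 2.3 um pixels
aperture = fl / 4.5                # F/4.5 -> ~70 mm aperture
print(fl, aperture, airy_diameter_arcsec(aperture))   # Airy disk ~3.7"
```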
  13. Because they used the ASI294 name for it for some strange reason. The color camera uses the IMX294 color sensor. The mono camera uses a different sensor altogether - the IMX492. It looks like it all started with the IMX294. This sensor has a very strange structure - it has a double bayer matrix and this is used in a special mode: It actually has four times more pixels - which you can't access as individual pixels. The sensor either does normal mode, where it sums 2x2 groups of pixels, or HDR mode, where some pixels do a short integration and some do a long integration - and they are summed on camera to produce an HDR image. It looks like the IMX492 kept the pixel sizes but skipped the HDR thing, and you can access the pixels in single and "double" mode (the one we would call normal mode). I'm not sure which is the better thing to do. I would say go for single pixels, but that will create massive files for storage and processing and I'm not sure if you'll get any benefit as the pixels will be tiny at ~2.3µm.
  14. I'm not even going to ask why one would want to do this
  15. I don't think you really need a tele lens - just a regular CS lens with a bit more focal length. This is because the sensor is already very small and has a small FOV. Look for both CS and C-mount lenses. You need a 5mm extension for a C-mount lens to be mounted instead of a CS-mount lens (since you have a T2/C-mount thread adapter - just add 5mm of optical path in T2). Most CCTV equipment uses these kinds of lenses. eBay is a good place to hunt them down. This one does not look bad: https://www.ebay.nl/itm/Fujinon-TV-Z-LENS-Zoom-14-70mm-f-2-H5X14-mit-C-Mount-Gewinde-gebraucht/363074977444?hash=item5488f47ea4:g:dMUAAOSwWplfNQPL Or perhaps this one as a really cheap option: https://www.ebay.nl/itm/Tokina-CCTV-LENS-25mm-f2-0-C-mount-Lens-For-Various-Formats-Including-16MM-film/174517944950?hash=item28a2148276:g:TdMAAOSwnwlfrywB Of course, you can browse the whole lot and select the one you like best: https://www.ebay.nl/sch/i.html?_from=R40&_nkw=c-mount+lens&_sacat=0&_sop=15&_pgn=1 I searched for C-mount only - but you can also search for CS-mount as well.
  16. Depends on the finder shoe construction. This one has a "raised" floor - which means that the sides will bend inward. One with a "flat" floor would bend into the shape of the OTA and hence the sides would bend outward.
  17. I think it will make absolutely no difference in normal operation mode. It could be that this is just a glitch in the drivers, as the drivers are expecting the OSC version and getting the mono version or something. Maybe a new version of the drivers will sort out this issue. It could also be that the camera firmware has binning for OSC mode somehow, although the sensor is mono. In any case - can you shoot normally at regular resolution and get a proper image (no binning - just regular shooting)? If yes, then I suspect you don't need to worry about any lack of internal binning.
  18. Oh it does. You can bin it in software without any issues - bin by x2, x3, ... whatever you like. The only problem is with debayering and binning, as I described briefly above - that is only important for OSC sensors (and I thought this was an OSC camera). Binning in software still stands as the better option.
  19. I would say that the preferred way of binning CMOS data is in software rather than in drivers / on chip, so you won't lose anything by binning in software after calibration and debayering. In fact, because this is an OSC camera - first debayer without interpolation (super pixel mode), which will already give you a twice reduced sampling rate, and then decide if you want to bin additionally. Binning interpolated pixels actually achieves nothing, as both binning and interpolation are kinds of averaging - and averaging already averaged values won't change anything.
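Here is a minimal sketch of super-pixel (non-interpolating) debayering, assuming an RGGB Bayer pattern - the actual pattern depends on the camera, and the earlier 2x2 binning snippet can then be applied per channel if you decide to bin further.

```python
import numpy as np

def superpixel_debayer_rggb(raw: np.ndarray) -> np.ndarray:
    """Collapse each 2x2 RGGB cell into one RGB pixel (halves the sampling rate)."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0   # average the two green pixels
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])
```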
  20. I think that the focuser tube protruding is the easiest thing to test - one just needs to see if it indeed goes deeper than the baffles in the OTA. I found this image of the Orion 8" Astrograph and it suggests that it might indeed be the focuser. With visual newtonians the focuser is at 45° to the spider support. Here it is almost parallel to it - which means that diffraction from it will be almost parallel to the spikes - and indeed in the image it is: It looks like the star has regular spikes and then these larger spikes in just one direction - they are not as sharp (which means a slightly curved surface) and at a slight angle to the spider support. I could be totally wrong about this focuser thing - but I believe it is the easiest thing to inspect - check to see if the spider looks as it should and how much the focuser tube sticks inside the OTA when focused at infinity. Maybe also check to see if it is highly reflective. If it is - it could be worth painting it black.
  21. Look at the above graph - the fact that you have a material with a different density than air and that the light beam is converging - means that it moves the focal point as the light bends at the air / glass transition.
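To put a rough number on that effect (my addition, assuming a plane-parallel glass element such as a filter, of thickness $t$ and refractive index $n$, placed in the converging beam), the focal point is pushed back by approximately

$$\Delta \approx t\left(1 - \frac{1}{n}\right) \approx \frac{t}{3} \quad \text{for } n \approx 1.5,$$

so a 2 mm filter moves the focus outward by roughly 0.7 mm; a thick, tilted or strongly curved element behaves less simply.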
  22. This also answers some of my questions. You are using a guide scope, so no OAG/ONAG - that is good, one less thing to worry about. The MPCC III should not stick out on the other side of the focuser tube like some longer coma correctors - like this one: The only other thing that I can think of would be the focuser tube protruding far inside the OTA and blocking some of the mirror. Can you check how much it is sticking out inside the tube when you are in focus?
  23. I don't think it is anything "standard". Collimation might be a bit off - but it is not the cause of this issue. Similarly, tracking/guiding might be a bit poor and there is slight elongation of the stars - but again, that's not what is causing this. Here is a close up: What is really confusing is the fact that the strong vertical spike is both diffuse (not sharp like the horizontal spike and the first part of the vertical spike) and clearly shows spectral separation. What glass elements do you have in the optical train? Do you by any chance have an ONAG, or maybe a strangely large prism OAG, or anything similar? The second question would be - when you are in focus, could it be that either the focuser tube or maybe the end of the coma corrector (glass element) is sticking inside the tube in the light path? Could this be far inside the tube when you achieve focus with your setup?
  24. Still haven't got the scope from this thread, but it is still on my list. At the moment I've got two 4" scopes - a Skywatcher 4" F/10 Evostar achromat and a SW Maksutov 102mm. I'm rather happy with the Mak, it sits really nicely on the AzGti mount and is sort of my grab'n'go lunar scope, but due to commitments and weather - it has not seen much use (I did manage to do some observing and imaging with it this year). On the other hand, I planned to do a lot with the Evostar 102/1000 achromat, but it's seen even less use than the Mak. At the moment, the plan is to play around with the F/10 achromat in various roles: - various observing roles - white light solar - DSO imaging scope - lunar / planetary imaging scope - spectroscopy scope - possibly Ha solar (that is also on hold for the moment due to the spending involved) I want to put it through its paces as a general purpose scope and see the maximum that I can get it to deliver in each role. It will require some spending (that I would otherwise do anyway) - like a Skytee II mount, as the AZ4 is simply not fit to carry such a long tube (I'm not happy with the vibration dampening time), and of course a solar Ha filter (Quark combo). Thing is - I'm sort of in the middle of building a new house and I expect to be moving in spring. Spending is therefore severely limited and I'll have access to much darker skies once I relocate (an obsy is also being built) - so everything is sort of on hold. In any case - I ended up not getting a 4" ED apo and it still might be some time before I get to that - there are two contenders at the moment - both 4" F/7 TS Photoline versions, a cheaper one with FPL51 glass and a more expensive one with FPL53 glass. The first option is in case I just decide to have it as a general purpose all-round visual scope and keep the Mak102 for grab'n'go planetary. The second option will be if I decide to just have one scope to cover multiple roles.