Everything posted by vlaiv

  1. I see. I just checked the specs on the TS website. This is the bog-standard f/6.3 Celestron SCT reducer/corrector: https://www.teleskop-express.de/shop/product_info.php/info/p2644_Celestron-Reducer---Corrector-f-6-3-for-Schmidt-Cassegrains-Telescopes--SCT-.html Maybe you have the EdgeHD version? That one has a 105mm back focus (according to the TS website).

     In case you have the standard one, do pay attention that the clear aperture is 41mm, and with the 0.63 reduction factor that translates into roughly a 25mm illuminated circle (at about 100mm away from the focal plane), so even at 23mm you'll get vignetting. Putting the prism too far out will place the OAG sensor in the vignetted region, which will of course reduce the amount of light it receives.

     If you still have issues with your setup after shortening the distance and moving the OAG closer to the sensor, maybe consider sacrificing a bit of FOV (by cropping images) and pushing the OAG prism in closer (even if it starts casting a bit of a shadow).
  2. I was able to find that the back focus is 85mm for this reducer - why go with 104.5mm? The further away you put the sensor, the more reduction you get and hence the more severe the vignetting. Make the distance between the SCT corrector's camera-side thread and the sensor about 85mm.

     If you don't want the prism of your OAG to become the effective aperture stop, put it close to the sensor. Swap the OAG and the 21mm extension (or possibly lose the 21mm extension altogether, as that will get you very close to the right working distance).

     Don't be afraid to:

     1) bin your camera (check out the drop-downs that I outlined in your version of PHD2)
     2) use long exposures with the OAG - like a 4s guide exposure.

     I prefer to use the ASCOM driver for my camera rather than native drivers.
  3. I just named it like that, it's not the official name for it. If you want to learn more about this, look into real-time systems. The RPi is not a real-time system (Linux in general is not), but it can be made sort of real-time with the RT patch. Another good source is algorithms for games and physics simulations, where you also need to keep accurate time.

     With Linux (or other non real-time operating systems) there is no guarantee that the next step will be performed at the wanted time. There is always some sort of interrupt that can kick in and take away the processing time needed for timing. For this reason it is best to just iterate in a loop and observe the timer, and as soon as the current time is equal to or larger than the time for action (we are on time, or we missed by a bit) - we perform the action.

     If you have a series of delays, any sort of error will accumulate over time if, say, a delay is not precise but instead finishes a bit late due to some interrupt. With the above loop, precision will only depend on your arithmetic for calculating the time of the next event and on the precision of the system timer (you can get accumulated error only if your next step calculation is not precise - like using 0.075 seconds instead of 0.0747856... seconds).

     It also has a handy property: say an unwanted delay happens (some IO-related interrupt or something like that) - the above algorithm will "fast forward" steps. It won't miss steps and start trailing behind. If there is need, it will perform 2 or 3 steps in fast succession to "catch up" with where it is supposed to be.
  4. That code style is not the best for what you are trying to achieve. Don't use pauses that you don't precisely control - use an active loop for good precision (this will drain more power, but you want it as "real time" as possible). Here is pseudo code for something like this (a runnable sketch follows below):

     while not done:
         if current_time >= time_of_next_event:
             perform_next_event()  // like make a step forward or a step backward
             time_of_next_event = calculate_time_of_next_event(current_time, time_of_next_event)
         else:
             // here you can do a microsleep if you want to save power,
             // or just do nothing and resume on the next iteration

     As for timing - it is simple. The sidereal rate is 15.043"/s. You need to calculate how many micro steps you have per revolution. For example, let's say that your worm gear is 180:1, that you are using a standard stepper with a 1.8 degree step (200 steps per revolution) and that you have 32 micro steps.

     One full revolution will in that case have 180 * 200 * 32 = 1152000 steps. One full revolution, on the other hand, has 360 * 60 * 60 = 1296000 arc seconds. If you divide these two numbers you get arc seconds per step: 1296000 / 1152000 = 1.125 arc seconds per step.

     Now that we have 15.043"/s and 1.125"/step, we can divide those two to get the timing information: 15.043 / 1.125 = 13.371555... steps per second, or 1.125 / 15.043 = ~0.0747856 s per step, which is 74.7856 milliseconds or 74785.6 microseconds per step.

     In the above pseudo code you would then put time_of_next_event = time_of_this_event + 0.0747856s.
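     A minimal Python sketch of the same idea (not from the original post; time.monotonic() stands in for the system timer and step() is a hypothetical placeholder for the GPIO pulse). Scheduling against absolute times computed from the start avoids accumulating rounding error and gives the "catch up" behaviour described above:

         import time

         STEPS_PER_SECOND = 15.043 / 1.125          # sidereal rate / arc seconds per step
         STEP_PERIOD = 1.0 / STEPS_PER_SECOND       # ~0.0747856 s per step

         def step():
             pass                                   # hypothetical: pulse the stepper driver here

         def track(num_steps):
             start = time.monotonic()
             steps_done = 0
             while steps_done < num_steps:
                 now = time.monotonic()
                 if now >= start + steps_done * STEP_PERIOD:
                     step()                          # fast-forwards if we fell behind
                     steps_done += 1
                 else:
                     time.sleep(0.0005)              # optional microsleep to save power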
  5. If the telescope is slightly under- or over-corrected and is not the best performer on the planets, that should not impact deep sky imaging much. The difference in resolution between DSO imaging and planetary imaging is often x2-x3, and even if the image is not that sharp on planets, it does not matter for DSO imaging as the atmosphere blurs planetary-level detail in long exposures. An RC scope with a larger central obstruction has the same performance as a telescope with 1/4 wave of spherical aberration on the mirror (which is to say that an RC is not a diffraction limited / nearly diffraction limited scope by design) and is rarely used for planets - yet it is very fine for DSO imaging. But if the scope is poor, you should probably look to replace it with something.

     If your aim is a FOV of 1 to 1.5 degrees, that is 60 to 90 arc minutes or 3600 to 5400 arc seconds. There is a rule of thumb that your guide RMS should really be half of your imaging resolution. This means that with 1-2" RMS guide error you can't image below 2"/px. A FOV of 3600"-5400" then translates into 1800-2700px of image size. Your Canon 550D has 5184 x 3456 pixels, so there is plenty of space to capture 1800-2700px in width. You can sample at each pixel and crop, or you can use bin x2 and even bin x3 (which will give you ~1700px in width).

     The pixel size of the 550D is 4.3um, so the base focal length for 2"/px is 443mm. From this I'd say to look for telescopes that are either 400-440mm, 800-900mm or 1200-1300mm in focal length.

     What is your budget? I'm thinking that you'll be best served with say an 80mm F/5-F/6 ED refractor if you want to keep the weight down. You already have the OVL FF, right? But that is non-reducing, so it might only work with an 80mm f/6 to give you what you want.
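     A quick check of those numbers (an illustrative calculation matching the figures quoted above):

         # 550D: 4.3 um pixels; target sampling 2"/px; target FOV 1-1.5 degrees
         pixel_um = 4.3
         target_arcsec_per_px = 2.0

         # "/px = pixel_size(um) * 206.3 / focal_length(mm), rearranged:
         focal_length_mm = pixel_um * 206.3 / target_arcsec_per_px
         print(f"focal length for 2\"/px: {focal_length_mm:.0f} mm")          # ~443 mm

         for fov_deg in (1.0, 1.5):
             width_px = fov_deg * 3600 / target_arcsec_per_px
             print(f"{fov_deg} deg FOV at 2\"/px -> {width_px:.0f} px wide")  # 1800 / 2700 px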
  6. Yep, it is worth trying different distances as well if they can't be sure that 92mm is the correct distance for the Meade.
  7. It's not - it is just a specialty item for SCTs. It corrects all the aberrations in that design, which happen to be field curvature and coma. It would not work as a regular coma corrector on a Newtonian, as it is specifically designed for the SCT.
  8. It is a dedicated SCT corrector, which corrects for all the aberrations inherent in the SCT design - coma and field curvature. It is this one: https://starizona.com/products/starizona-sct-corrector-63x-reducer-coma-corrector
  9. Ok, so there is a bit of a problem. You already have a scope that will provide you with a nice focal length given your pixel size. What exactly do you mean by over-polished mirror? Another important point is: what are you trying to achieve? You say you want to capture galaxies and planetary nebulae. Do you have any idea of the working resolution (how "close in" do you want to get)? Or at least what sort of FOV you want for your images?
  10. It is an SCT field flattener that has been used.
  11. Maybe check out a video on YouTube or look up a tutorial on how to collimate an SCT?
  12. @ollypenrice You have been known to take a snap or two in daylight, right? Do you start with a completely raw image and then process it from the start, or do you start with what your camera provides? More importantly - if you take two cameras, shoot the same scene at very similar settings and save your images as 16-bit for post processing, do they:

     1) look pretty much the same
     2) look pretty much like the scene you saw (if you exposed properly)

     I took a raw file and, with DCRaw, exported the actual raw data from the sensor. Left is the raw image, right is what is usually presented by any sort of software that extracts data from the raw file. Note a few things (right image as opposed to left):

     1. The image is flat fielded
     2. The image is color corrected
     3. The image is gamma adjusted (which is really a stretch)

     When I'm talking about an "auto" button - or an automated mode - I'm saying the following: same as with a daytime photo and camera, there is a certain workflow that prepares the image to either be viewed or further post processed - nobody starts with true raw in daytime photography. The point of this standardized workflow would be to arrive from the raw image (left) at an image that can be viewed or further post processed (right) - but one that will be a match between people with different setups if they are shooting the same target, and, to the best of our knowledge, a match to what we would potentially see if we were floating in outer space and the light from the distant source was bright enough (in some cases - maybe as is).

     The same math and physics works in both cases, as there are sensors and there is light. No difference except in a few minor details:

     1) In daytime photography we need to set white balance due to the fact that our brain adjusts to environmental factors (we see differently and expect color to match what we remember, not what we view side by side - it is not the camera that sees differently)
     2) In daytime photography it is exposure length (and F/stop) that sets what is seen in the image; in astrophotography that will be determined by SNR (and to some extent by total exposure, as SNR depends on it for a given setup)
     3) We have the atmosphere to deal with

     Other than that, the process is the same. I just want to create a scientifically based, sane / same starting point for different setups that depends on the target only (and not on the rest of the equipment or environment). What is done afterwards with that image is up to the person processing it (pretty much as in daytime photography).
  13. I think that is really a long shot. Most optical elements work better in a slow beam than in a fast one. I've never heard of a Barlow or telecentric that is meant only for fast scopes.
  14. I think it is down to magnification and exit pupil. Say you put a Quark in an F/10 scope - like a 4" F/10. Your system will operate at F/43. Imagine you now put in the 25mm Plossl that is often recommended for the Quark - you are instantly observing at x172, which might be too much for Ha and the seeing. The exit pupil will also be 0.58mm, which will probably be uncomfortable for most people in daytime (even at night you'll see floaters and the image will be too dim). The etalon itself will work better in a slower beam, but the setup as a whole might be too uncomfortable to observe with. There is another possibility - maybe the telecentric is optimized for an F/4-F/8 entrance beam and does not work best in slower beams (just guessing).
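     The magnification and exit pupil figures above come out like this (a quick illustrative calculation; the x4.3 telecentric factor is an assumption consistent with the F/43 figure quoted above):

         # 4" F/10 scope with the Quark's ~x4.3 telecentric and a 25 mm Plossl
         aperture_mm = 100
         effective_fl_mm = 100 * 10 * 4.3        # ~4300 mm, i.e. ~F/43

         magnification = effective_fl_mm / 25    # 25 mm eyepiece
         exit_pupil_mm = aperture_mm / magnification
         print(round(magnification), round(exit_pupil_mm, 2))   # 172 0.58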
  15. I doubt there will be a 5" or smaller RC - the secondary obstruction would just be too large to get a decent sized field for imaging. Even the 6" version is F/9 and not F/8 because of this. If you really want a 5" scope with a long focal length, then why not a Mak127? Too slow? F/12 is not too slow - at least not if you know how to treat your data.

     Don't think in terms of F/speed - think in terms of aperture at a given resolution. Say you want to image at 1.5"/px with 5" of aperture. Many people use, for example, 4" APOs and image at higher resolution, and although the APO will be F/5 or something like that, your scope at F/12 will be faster.

     Say you have a DSLR sensor with 3.75um pixel size. The Mak127 has 1500mm of focal length. The pixel size would suggest that you will sample at 3.75 * 206.3 / 1500 = 0.515"/px - but why not bin your data x3 to get to 1.5"/px, or maybe even x4 to get to 2"/px (see the quick calculation below)? The only thing that you need to worry about is the single exposure - you need to make it long, like 5 or more minutes, to overcome read noise.

     Alternatively, why don't you get a 130PDS? That is also a lightweight imaging scope. It is also capable of similar resolutions - like 1.5"/px - 2"/px - and you don't need to bin it.
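     The sampling and binning numbers, spelled out (illustrative only):

         # Mak127: 1500 mm focal length, 3.75 um pixels
         native_arcsec_per_px = 3.75 * 206.3 / 1500      # ~0.52 "/px
         for b in (1, 2, 3, 4):
             print(f"bin x{b}: {native_arcsec_per_px * b:.2f} \"/px")   # 0.52, 1.03, 1.55, 2.06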
  16. I don't think it works like that at all. All etalons should be capable of tuning, as environmental factors can pull the etalon off band (say atmospheric pressure or thermal expansion). Quarks operate best with F/7-F/8 scopes because you need about F/30 for the etalon to work at its best, and they have a matching x4.2 (or x4.3 - not sure) telecentric lens already integrated into the device. You can use a slower scope - it won't affect the operation of the etalon - but you'll possibly have issues with the exit pupil (like blackouts, as our pupil tends to get really small in broad daylight). Quarks have a dial that you use to put them on band / tune the exact CWL depending on your conditions. Btw - you also need to tune the exact CWL depending on the features that you are observing, as there is a Doppler shift for things that are moving towards you.
  17. You can't directly combine the Airy disk and seeing FWHM as numbers. Let's go over that bit as well.

     An analytical solution for the exact combined shape is non-trivial, since the Airy disk function is rather complex mathematically. It is therefore best to approximate all the blurs with a Gaussian distribution. When we say FWHM 2" for seeing, it simply means that it has a Gaussian distribution with an FWHM of 2" (although seeing at any given moment is far from Gaussian - due to the central limit theorem, when you expose for long enough it averages out to a Gaussian shape; the same happens with guide error). A convolution (that is the way these blurs combine) of a Gaussian with a Gaussian is a Gaussian, and they combine so that their sigma / FWHM add in quadrature (square root of the sum of squares).

     Sigma is related to FWHM by a factor of ~2.355, so an FWHM of 2" is the same thing as a sigma of 2/2.355 = ~0.85". The Gaussian approximation of the Airy disk has sigma = 0.42 * lambda * f-ratio, as opposed to 1.22 * lambda * f-ratio for the position of the first minimum, i.e. the radius of the Airy disk. When you simplify everything down, the expression for the sigma of the Gaussian approximation to the Airy disk is 47.65 / aperture_size (in arc seconds, with aperture size in mm). Guide RMS should not be neglected, as it is an important bit (and it is already the sigma of the equivalent Gaussian). Either make everything FWHM or everything sigma.

     Example - 1.5" FWHM seeing, 200mm of aperture and 0.5" RMS guiding - what is the resulting FWHM for diffraction limited optics?

     1.5" FWHM seeing = ~0.637" RMS
     0.5" RMS guiding = 0.5" RMS
     200mm aperture = 47.65/200 = 0.23825" RMS

     Total RMS = sqrt(0.637^2 + 0.5^2 + 0.23825^2) = sqrt(0.7125320625) = ~0.844" RMS = ~1.99" FWHM

     As far as what the seeing limited frequency looks like - well, you need to combine the above three PSFs with convolution. The convolution theorem says that convolution in the spatial domain is equivalent to multiplication in the frequency domain. We need to see what sort of Fourier transform we get for each of these three types of filter and then multiply those.

     The Gaussian is simple - the Fourier transform of a Gaussian is a Gaussian. Only the right side is the interesting part (zero is centered on the profile itself) - the filter starts off slowly, then falls rapidly, then eases off to low values out to infinity. The wider the blur in the spatial domain, the narrower the filter itself (and vice versa).

     The Airy disk filter is actually the MTF of the telescope. For an unobstructed telescope the MTF is almost a straight line (just a bit curved), and it clearly shows a cut-off frequency. A Gaussian does not have that - it goes off to infinity - but telescope optics does, it is well defined and it depends on aperture size alone. https://en.wikipedia.org/wiki/Spatial_cutoff_frequency

     From the above link we can determine what sort of F/ratio we need for a given pixel size. If we rearrange that formula and apply the Nyquist theorem, we get:

     F/ratio = 2 * pixel_size / wavelength

     where pixel_size and wavelength need to be in the same units, so either um or nm. Is a Jupiter image properly sampled with the ASI462MC camera? Optimum F/ratio = 2 * 2.9um / 0.4um = 14.5, so you need F/14.5 to capture everything down to 400nm with a 2.9um camera. The image is slightly over-sampled at F/15 - but the difference is really negligible. Note that this is without any influence of atmosphere or tracking error, as very short exposures and lucky imaging are used.

     For long exposure imaging we have to combine the above filters (with seeing given by the Fried parameter or coherence length, r0). The combined curve quickly approaches zero and then stays there for a long time - that is due to the Gaussian shapes; the telescope MTF on its own is a bit more curved, more so with a central obstruction. In any case, from such a graph you can see the rationale for the FWHM/1.6 parameter I gave earlier. It is not the point where all the data is captured - it is the point where almost all the data that can be used is captured; the rest is below the noise level. If we convert that to numbers in that example - 0.6 cycles per arc second makes the wavelength 1/0.6 = 1.6667 arc seconds, and we need to sample twice per wavelength, so about 0.833"/px in this case (it is below 1"/px as no mount error was added here - but for the most part you won't get below 1"/px in realistic amateur conditions for long exposure).
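     A small script reproducing the two calculations above (illustrative; 2.355 converts between sigma and FWHM, and 47.65/aperture is the Gaussian approximation of the Airy disk quoted above):

         import math

         def total_fwhm_arcsec(seeing_fwhm, guide_rms, aperture_mm):
             # Convert everything to sigma (RMS), add in quadrature, convert back to FWHM.
             seeing_rms = seeing_fwhm / 2.355
             airy_rms = 47.65 / aperture_mm
             total_rms = math.sqrt(seeing_rms**2 + guide_rms**2 + airy_rms**2)
             return total_rms * 2.355

         print(round(total_fwhm_arcsec(1.5, 0.5, 200), 2))   # ~1.99" FWHM, as in the example

         def optimum_f_ratio(pixel_um, wavelength_um):
             # F/ratio = 2 * pixel_size / wavelength (same units)
             return 2 * pixel_um / wavelength_um

         print(optimum_f_ratio(2.9, 0.4))                    # 14.5 for the ASI462MC at 400 nm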
  18. @ONIKKINEN Here is the same sub but with an offset added so that all values are positive (I just added 0.0007 to all pixels in ImageJ): Camelopardalis-IFN-background-removed.fits
  19. That is not what is intended nor what happened. I guess the software that you are using can't deal with negative values. The algorithm puts the background at zero, and since there is some noise in the background, some values end up negative - but that is just a math thing - you can put the black point where you want, all the data is still there. Here it is stretched in Gimp.

     The results are not much different from ASTAP and SIRIL, and I guess it's down to the actual data - the gradients are not due to simple light pollution, but maybe to some issues with calibration, or perhaps something happened when stacking.

     If you want to investigate the possibility of an issue when stacking, try a plain add/average stack and see if you can successfully wipe the background. Sigma reject algorithms can cause issues with linear gradients. A sum / average of images with linear gradients will produce a linear gradient - even if the gradients have different directions, the result will still be a linear gradient. Sigma reject can start to do funny things if the subs are not normalized for gradient as well - in some places the gradient will be "to the left" and in others "to the right", and in some subs it will be stronger on one side of the image than on the other - and all those pixels will be seen as "deviant" because they don't follow the same distribution as the others (the signal is actually different, and it can be handled with regular sub normalization). I have another algorithm that I termed "Tip-Tilt sub normalization" that deals with this at the sub level and aligns the gradient to be the same in each sub, thus creating "stack compatible" subs that work well with sigma reject algorithms.
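     A tiny numpy illustration of the point about averaging (not related to the actual subs in this thread): the mean of subs with different linear gradients is itself still a perfect plane, so a plain average stack keeps the background removable.

         import numpy as np

         y, x = np.mgrid[0:100, 0:100]
         sub1 = 100 + 0.5 * x              # gradient running "to the right"
         sub2 = 100 - 0.3 * x + 0.2 * y    # gradient running the other way and slightly up

         average = (sub1 + sub2) / 2
         expected_plane = 100 + 0.1 * x + 0.1 * y
         print(np.allclose(average, expected_plane))   # True - still a linear gradient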
  20. At some point the level of detail will start to match. There are two things at play here:

     1. the sampling rate needed to capture some level of detail
     2. the level of detail that can be recorded at some sampling rate

     I know this will sound funny - but those are two distinct things. I'll try to explain how and why this is so and what is really going on, in the frequency domain. Maybe it is better if we consider it in 1D and think of it as sound (that will be more familiar to most?).

     Picture the signal in the frequency domain: on the X axis we have frequency, while Y represents the "strength" of the signal at that frequency. There is some highest frequency - the cut-off frequency. According to the Nyquist sampling theorem the sampling rate needs to be twice that frequency, but for simplicity let's just speak of the sampling rate / cut-off frequency (keeping in mind that the two are related by a factor of x2). You need to sample at said sampling rate if you want to capture all there is, but "all there is" might not fill the whole box up to the cut-off - there could be more energy in those high frequencies. This is the reason the Hubble reference image is that much more detailed - it has energy in those higher frequencies compared to the other image. That is because Hubble has a much larger aperture and its cut-off frequency is much further to the right. We artificially cut off that data when we resampled it to this sampling rate.

     The thing is - no matter the telescope or sky conditions, the curve always looks roughly the same: it starts at 1 and then smoothly falls to 0. It is just the place where it hits zero that counts. If we take the two curves - one from Hubble, the other from an 8" SCT - and constrain them to the same region, two things happen: a given sampling rate will capture all the data from the SCT, but at the same time, in that region, the Hubble image will have much more energy at high frequencies - and will show a sharper image.

     In the end, it is very important to understand one thing: when we sample at the critical sampling rate we will not have a sharp image to start with, but the mathematics says that you captured all the data in the box and you can restore those higher frequencies. How do we do that? This is what sharpening does to our images. Sharpening is restoring energy at those higher frequencies. You won't be able to sharpen above the cut-off frequency - even if you over sample - as the energy at those frequencies is 0 and you can't restore something from zero (restoring in this context is multiplication / division - the high frequencies were multiplied by a value less than 1, and to restore them we need to divide by the same value; the value depends on the shape of the filter, i.e. the PSF). You also need high SNR to restore frequencies, as noise is spread over frequencies too, and boosting a particular frequency also boosts the noise at that frequency. Sharpening brings out the noise as well. This is how, in planetary imaging, we can sharpen things to show more than can be seen at the eyepiece.

     Displaying a thing is one matter - telescopes display very small things without any issues. Think about it - even the closest largest stars are a fraction of a mas (milliarcsecond) - yet we see them in a telescope. What the telescope and aperture do is simply "blow up" any small feature. You can zoom in to see the Airy disk of a star in a telescope - does this mean that the star is the size of the Airy disk? No - it just means that you can't resolve past that size. With imaging it is about how many samples you need to capture the image, not how tiny the thing you are looking at really is. In order to understand how many samples you need, you need to understand the cut-off frequency. Simple as that.

     All the things that you mentioned (and some that you did not) add in a particular way:

     1. Airy disk
     2. Seeing blur
     3. Mount tracking performance (a sort of motion blur created by the mount "shaking" as it tracks, because it is not tracking perfectly - in essence the guide RMS error, if you will)

     Those are three types of blur that are applied successively to the image, and the resulting blur is the combination of the three. Make any one of them smaller and the resulting total blur will be smaller. But they don't add "normally" - they add "in quadrature" - which has an interesting property (this is how linearly independent vectors add, or how noise adds): if you add two values that are very different, the result tends to be very close to the larger component. Say you add 10 + 1 normally - you get 11 as you expect, but if you add them like linearly independent vectors (square root of the sum of squares), look what happens: sqrt(10*10 + 1*1) = sqrt(101) = ~10.05. Only 0.05 difference between 10 and the result!

     If one of the three components is very large, the other two won't make much difference. If seeing is very poor you don't really care if your guiding is poor or what the size of your aperture is, but if you are shooting in good seeing with a good mount, then aperture size actually makes a difference. That is in part what we are seeing above - yes, an 8" will out-resolve a 5" in good seeing in long exposure astrophotography.
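     A short numpy sketch of the "sharpening = restoring attenuated frequencies" idea (a toy 1D example, not actual processing from this thread): blurring multiplies each frequency by a value between 1 and 0, and restoration divides by that same value wherever it has not been driven effectively to zero.

         import numpy as np

         n = 256
         signal = np.zeros(n)
         signal[100] = signal[110] = 1.0                  # two point sources

         x = np.arange(n)
         psf = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)   # Gaussian blur kernel (the PSF)
         psf /= psf.sum()

         H = np.fft.rfft(np.fft.ifftshift(psf))           # blur filter in the frequency domain
         blurred = np.fft.irfft(np.fft.rfft(signal) * H, n)

         usable = np.abs(H) > 1e-3                        # frequencies not yet at ~zero
         H_safe = np.where(usable, H, 1.0)
         restored = np.fft.irfft(np.where(usable, np.fft.rfft(blurred) / H_safe, 0.0), n)
         print(sorted(np.argsort(restored)[-2:].tolist()))   # [100, 110] - both sources recovered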
  21. To be honest, I have no idea if it is different, since I don't know the implementation details of the algorithms you mentioned. My algorithm works on:

     1) linear data
     2) almost no user input (no selection points or things like that). It has 2 parameters, which almost always stay at their defaults when I apply the algorithm.

     It removes the background light level and any linear gradient that is due to light pollution. It does not deal with (nor intend to deal with) more complex background gradients that are formed by poor stacking, high altitude clouds or failed flat calibration. It is not a background flattening algorithm - it expects proper data, calibrated and clear of artifacts. Here is the result of applying the algorithm to that data: Camelopardalis-IFN-background-removed.fits

     It would be nice if you could do a small side by side comparison with the other methods you used - just a simple stretch after removing the background.

     Not sure what you are asking, but here is how I classify things:

     1. Regular sub calibration (darks, flats, that lot)
     2. Stacking
     3. Color calibration - this step is meant to transform (L)RGB data (irrespective of whether it's coming from mono or OSC) to the XYZ color space (which can be thought of as a "standardized" sensor raw - but it is more than that, as it is aligned with our visual system; Y corresponds to our perception of luminance).
     4. Gradient removal. This step is actually "optional". In daytime photography, the closest thing to it is "red eye removal". It is an alteration of the data to account for a well understood phenomenon, in order to more faithfully render a target that has been tampered with by environmental factors (flash and retina reflection for red eye, light pollution for astrophotography). Step 4 does not really depend on step 3 - you can remove gradients both pre and post color calibration. This is because light adds linearly and a color space transform is also a linear operation. The result of 3 then 4 will be the same as 4 then 3.
     5. Stretch ... and the rest of it.
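     For illustration only (this is not vlaiv's actual algorithm, whose implementation details aren't given in the thread) - the general idea of removing a background level plus linear gradient from linear data can be sketched as a least-squares plane fit to background pixels:

         import numpy as np

         def remove_linear_gradient(img, background_mask):
             # Fit offset + linear gradient (a plane) to pixels flagged as background
             # in the *linear* image, then subtract that plane from the whole frame.
             y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
             A = np.column_stack([np.ones(background_mask.sum()),
                                  x[background_mask], y[background_mask]])
             coeffs, *_ = np.linalg.lstsq(A, img[background_mask], rcond=None)
             plane = coeffs[0] + coeffs[1] * x + coeffs[2] * y
             return img - plane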
  22. @ONIKKINEN Here is the result of "automatic background removal" on an image that has already featured in this thread - M101 by @Pitch Black Skies. Left is the image after removal, second is what the algorithm identified as foreground and background (white and black), and third is the removed gradient. I guess an extended nebula filling the FOV would probably be an issue for the algorithm - but I don't have any test data to run it against (I did run it against data provided by the IKI observatory and it did fine, in spite of the background not being very well defined in some cases).
  23. The biggest problem is trying to convey what the maximum spatial frequency looks like and what the resolving power of an optical system is in "layman's" terms.

     Both star separation and the detail resolvable in extended, fainter objects have one thing in common - the PSF. We are in fact lucky, because we have stars and we can treat them, for all intents and purposes, as point sources. As such, their image formed by the optical system is in fact the point spread function, as it describes how a single point of light spreads under the blur. The PSF is all we need to know to know everything about the blur, as the blur is a direct consequence of the PSF. The PSF acts on a single star, it acts on two stars and it acts on an extended object in the same way - as if the object were (and it often is) composed of countless stars next to each other; the PSF acts on all of them in the same way (it "spreads" each "point" in the same manner - hence point spread function).

     Having said that, I strongly object to relating any sort of visual resolution standard that is based on star separation to pixel size - like mentioning the Dawes limit in the context of imaging. But I don't mind the other direction at all. If you already know the characteristics of the PSF, its frequency response and the optimum sampling rate, then it is easy to say whether two stars will or will not be resolved. This is simply due to the way the PSF and convolution work.
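     A tiny illustration of that "the PSF treats everything alike" point (a toy numpy/scipy example, nothing from the thread):

         import numpy as np
         from scipy.ndimage import gaussian_filter

         scene = np.zeros((64, 64))
         scene[16, 16] = 1.0                  # single star
         scene[16, 40] = scene[16, 44] = 1.0  # close double star
         scene[40:48, 20:44] = 0.1            # extended "nebula"

         # One PSF (here a Gaussian) blurs all of them in exactly the same way;
         # whether the double stays split depends only on PSF width vs separation.
         blurred = gaussian_filter(scene, sigma=1.5)
         print(blurred[16, 38:47].round(3))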
  24. No - using a 23mm diagonal sensor and having no vignetting means that the fully illuminated circle (different from the imaging circle) is at least 23mm - vignetting could start at 12mm away from the center (making the diagonal of that circle 24mm, i.e. 1mm larger than what you checked). The diagonal equates to a diameter, not a radius.
  25. Yes - exactly like that. Use the diagonal instead of the horizontal. With the projection method you are using your eye to notice the difference in brightness (with an error of about 7%). With a sensor you can be more precise. The only problem is whether you have a big enough sensor. If your fully illuminated circle is larger than, say, 30mm and you have an APS-C sensor, you won't capture the vignetting drop-off properly (~28mm diagonal, depending on the model).
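     If it helps, a rough numpy sketch of the sensor-based approach (illustrative only; `flat` is assumed to be a 2D array from an evenly illuminated test frame): sample the brightness along the sensor diagonal, relative to the centre and in mm from the centre, to see where the drop-off begins.

         import numpy as np

         def diagonal_profile(flat, pixel_size_um):
             h, w = flat.shape
             n = min(h, w)
             rows = np.linspace(0, h - 1, n).astype(int)
             cols = np.linspace(0, w - 1, n).astype(int)
             profile = flat[rows, cols].astype(float)
             centre = profile[n // 2 - 5:n // 2 + 5].mean()
             step_mm = np.hypot((h / n) * pixel_size_um, (w / n) * pixel_size_um) / 1000.0
             dist_mm = (np.arange(n) - n // 2) * step_mm
             return dist_mm, profile / centre   # distance from centre (mm), relative illumination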