Everything posted by vlaiv

  1. Indeed - the 32mm Plossl is a budget but very good eyepiece that will give you nice wide field views. I also agree that you should limit yourself to around x100 power because you have a semi-fast achromat (not as fast as F/5, but still rather fast at F/6.5). Above that magnification it will show quite a bit of false color, and depending on your eyesight, about x1 per mm of aperture is a decent level of magnification. This means an eyepiece in the 6-7mm range. BSTs are nice budget eyepieces, but the closest focal lengths they come in are 5mm and 8mm. Out of the two, I'd probably go with the 5mm as the high power EP.
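A minimal sketch of the arithmetic behind this, assuming (hypothetically) a 100mm F/6.5 achromat:

```python
# Hypothetical numbers: a 100mm F/6.5 achromat, i.e. 650mm focal length.
# Magnification = telescope focal length / eyepiece focal length.
aperture_mm = 100
focal_length_mm = aperture_mm * 6.5   # 650mm

max_power = aperture_mm * 1.0         # the "x1 per mm of aperture" rule
print(focal_length_mm / max_power)    # ~6.5mm eyepiece needed for ~x100
print(focal_length_mm / 32)           # 32mm Plossl gives ~x20 wide field view
```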
  2. I think that the OSC vs mono debate should be settled by asking the following question: do you intend to do anything other than pure color imaging? If you are thinking of doing:
     - Narrowband
     - Spectroscopy
     - Photometry
     and really need mono - then yes, mono is the logical choice. If you are thinking solely in terms of imaging, then: yes, mono will be faster - but only if you make it faster. For mono to actually be faster and produce a deeper image in the same imaging time, you need to be careful about how you shoot and process your data. Simple LRGB 1:1:1:1 vs regular OSC for the same total imaging time will make minimal difference in quality. In fact, accurate color is easier to get with an OSC camera than with mono + interference filters. For mono to be significantly faster, you need to shoot more luminance, bin your color data and then compose the data in a particular way.
     I would not worry about the sampling rate of either camera - both will over sample, but both are CMOS cameras and can be binned in software to match the actual resolution. What you need to consider is:
     1. Is your scope capable of illuminating an APS-C sized sensor properly, and
     2. Is your scope corrected properly for an APS-C sized sensor?
     That of course depends on how good you want your images to be. If you don't mind imperfect stars in the corners and less SNR - or if you crop your data anyway - then get the APS-C sensor. Otherwise, I think that 4/3 is a better match for a Newtonian scope up to 8".
  3. Not quite so. Take a simple example: subs of 5, 5, 4 on one night and 6, 5 on another.
     average of 5, 5, 4 = 14/3
     average of 6, 5 = 11/2
     Average of those two stacks is: (14/3 + 11/2) / 2 = ((28 + 33) / 6) / 2 = 61/12 = 5.08333
     But if you stack all subs together: (5 + 5 + 4 + 6 + 5) / 5 = 5
     It works out the same only if you average the same number of subs on each night; with different numbers of subs per night, stacking the stacks gives a different result than stacking every sub together.
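A quick numeric demo of the difference (stack of stacks vs all subs together, plus the sub-count weighting that fixes it):

```python
import numpy as np

night1 = np.array([5.0, 5.0, 4.0])
night2 = np.array([6.0, 5.0])

# Unweighted average of the two nightly stacks - biased toward night 2
print((night1.mean() + night2.mean()) / 2)          # 5.0833...

# All subs stacked together
print(np.concatenate([night1, night2]).mean())      # 5.0

# Averaging the stacks weighted by sub count recovers the correct value
print((night1.mean() * 3 + night2.mean() * 2) / 5)  # 5.0
```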
  4. Dithering is beneficial regardless of the type of calibration you are performing. It is beneficial for more than combating walking noise - it reduces noise in general. As far as calibration goes - try to do proper/correct calibration. Bias is not needed unless you want to scale darks or have some other particular reason to use it. Use just darks, flats and flat darks for your calibration.
  5. You'll have quite a few issues using an OAG then. Coma in a Newtonian telescope grows with distance from the center of the frame. By the time it reaches the edge of an APS-C sized sensor (usually found in a DSLR) it will be too large and stars will be very deformed. A small level of deformation usually does not impact guiding - but a large one does, as it lowers the amount of light available in the bulk of the star that is used for guiding. [Image comparison: stars affected by coma vs. the same field with a good coma corrector.] You won't be able to guide on most stars in the coma-affected image, but you should be able to guide on almost any star from the corrected one (they are well formed and the software can tell the difference between star and background).
  6. I've done that test with a couple of ZWO cameras (different models) and it seemed that the published figures are always a bit optimistic. Not by much - for example, the ASI1600, which is quoted at 1.7e read noise at unity gain, measured closer to 1.8e, and so on.
  7. I'd take published read noise results with a grain of salt. It is best to measure it yourself rather than rely on published data. It is a rather simple procedure: take two bias subs, use the e/ADU factor to convert pixel values from ADU to electrons, subtract one sub from the other, measure the standard deviation and then divide that value by the square root of two (subtracting two subs has the same effect on noise as adding them - the noise adds in quadrature - but subtraction cancels out the bias signal).
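A minimal sketch of that procedure, assuming two bias frames saved as FITS (the file names and e/ADU value are placeholders):

```python
import numpy as np
from astropy.io import fits

E_PER_ADU = 1.0  # placeholder - use your camera's e/ADU at the chosen gain

bias1 = fits.getdata("bias1.fits").astype(np.float64) * E_PER_ADU
bias2 = fits.getdata("bias2.fits").astype(np.float64) * E_PER_ADU

# Subtraction cancels the fixed bias signal; the random read noise of the
# two frames adds in quadrature, hence the division by sqrt(2).
diff = bias1 - bias2
print(f"read noise: {np.std(diff) / np.sqrt(2):.2f} e")
```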
  8. I have two concerns.
     First is back focus and use of a coma corrector. Most coma correctors have a working distance of 55mm, while the Canon flange distance is 44mm. Usually people use an EF/EF-S to T2 adapter that adds 11mm of optical path, which gives them the 55mm required by the coma corrector. Ideally, you want the OAG placed between the CC and the camera - and there is no room for that in the standard setup. One solution is to use a dedicated OAG for DSLR cameras that is a T2 adapter and OAG in one unit: https://www.teleskop-express.de/shop/product_info.php/info/p2722_TS-Optics-Off-Axis-Guider-for-Canon-EOS-cameras---replaces-the-T-ring.html
     Second concern is how much light is going to be gathered by the OAG prism. When used with fast systems, the OAG needs to be placed very close to the main imaging sensor because of the limited size of the pick-off prism. With a DSLR, you have no choice but to place it about 50mm away from the sensor. To fully exploit an F/5 system at that distance you would need a pick-off prism larger than 10mm, but most models are built with a standard prism that is 8mm on its side. That means the OAG will operate at more than F/6 - probably at F/7 or so (see the sketch below). That is still pretty good performance and you should not have any issues guiding (I guide with an OAG at F/8, since I use it on an F/8 scope, and it works fine - but my scope is 8" and gathers quite a bit of light). One way of improving OAG performance is to bin your guide camera, which makes it more sensitive. Since you'll be guiding at 750mm of focal length, binning x2 will still give you more than enough guide precision.
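A small sketch of the prism geometry assumed above: an F/N light cone is d/N wide at distance d from focus, so a prism of side s intercepts roughly an effective f-ratio of d/s.

```python
def cone_width_mm(distance_mm: float, f_ratio: float) -> float:
    """Diameter of the converging light cone at a given distance from focus."""
    return distance_mm / f_ratio

def effective_f_ratio(distance_mm: float, prism_mm: float) -> float:
    """Approximate f-ratio at which the pick-off prism operates."""
    return distance_mm / prism_mm

print(cone_width_mm(50, 5))         # F/5 cone is 10mm wide at 50mm from focus
print(effective_f_ratio(50, 8))     # an 8mm prism there works at ~F/6.25
```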
  9. Here is what I believe is wrong. Millisecond exposures help freeze the seeing. The atmosphere is turbulent and constantly changing. Depending on aperture size, there is a small window in which it is for all intents and purposes "stationary". For amateur sized telescopes in better/good seeing that is about 5-6ms (maybe even up to 10ms on nights of very good/excellent seeing). Any longer than this and things start to average out - "motion blur" starts to happen on the PSF. It is no coincidence that the measure used to describe the seeing is "FWHM of a two second integration". Two seconds is enough for those millisecond changes to average out and provide a steady seeing figure that won't change much between successive exposures.
     While DSO lucky imaging works, it is still limited by this averaging. In planetary imaging (if done properly and exposure length kept short) we don't have this averaging - in fact, we keep only a very small subset of valid frames, and even with them we can choose to stack only part of each frame. This is because the seeing PSF varies strongly even over short distances (I think about 10 arc seconds away you can have an entirely different PSF - one blurred enough not to be included in the stack, while the other is perfectly "calm"). In a 2 second exposure we can expect the FWHM of all stars in the image to have very similar values, because all of them have been averaged out, versus short exposures.
     I think this would be an interesting place to put some real data into this discussion. I'm currently very busy - I'm finishing a new house (finishing is a rather strange word - a better expression would be finishing just enough so I can move in) and am about to move in the next 2-3 weeks, so I can't be of much help there - but I suspect you already have the needed data? How about taking one of your lucky sessions that has 1-2 second exposures and doing stats on the FWHM? Mean value, standard deviation of measured FWHMs - maybe even a graph of how it changes over time? That would give people a good idea of what they can expect from lucky imaging, how many frames they can expect to keep, and a sense of FWHM values in general. A quick sketch of such stats follows below.
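A minimal sketch of those stats, assuming per-sub FWHM measurements exported to a text file (file name and threshold are placeholders):

```python
import numpy as np

fwhm = np.loadtxt("fwhm_per_sub.txt")   # one FWHM value (arcsec) per sub

threshold = 2.0                          # keep subs sharper than this
print(f'mean FWHM: {fwhm.mean():.2f}"')
print(f'std dev:   {fwhm.std():.2f}"')
print(f"kept:      {np.mean(fwhm < threshold) * 100:.0f}% of subs")
```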
  10. Don't worry about that. I'm sorry if any of my comments sounded malicious in any way. Indeed, I was trying to point out some of the things people involved in the project need to be aware of in order for the project to work to its full potential.
     I know lucky DSO imaging works. There is no question about it really. However, I'd like to point out that the results I've seen come from rather large telescopes. Take for example Emil's work: https://www.astrokraai.nl/viewimages.php?id=266&cd=7
     There is a simple explanation for this. On one hand, achieved resolution depends on aperture size. I'm not talking here about planetary critical sampling, which is probably x3-4 higher than deep sky sampling. Even with DSO imaging, aperture is important for resolution. This is because different PSFs convolve the image with compounding effects. One PSF is mount tracking / guiding performance. If you use short exposures you'll minimize its impact, as there is less chance for the mount to drift in a short time. The other is seeing - that is addressed by selecting the subs where seeing impact is least. The third is aperture. Here you don't really have any options except to change the scope. It's safe to assume people won't be doing that for the purpose of this project, so they are stuck with the PSF of the scope they have. A 4" scope will have double the aperture blur of an 8" scope, which in turn has double the blur of a 16" scope. The better the seeing, or the stricter the sub selection, the more impact the above has on final resolution. For telescope sizes we usually use in imaging, aperture has a rather small effect compared to tracking and seeing - but once you minimize those two, as in lucky imaging, aperture suddenly becomes important.
     The second important point with large aperture telescopes is that once you set your working resolution / sampling rate, the speed of your system depends solely on aperture size. This enables you to get a large signal in a short amount of time - beneficial both for accurate stacking and for overcoming read noise.
     In the end, here is an interesting resource that people should examine: https://www.meteoblue.com/en/weather/outdoorsports/seeing/london_united-kingdom_2643743
     This website gives a forecast of seeing in arc seconds FWHM. It would be good to check forecast values against actual measured values. Care must be taken to subtract the part that is due to aperture size. If the forecast is reliable (and I think it is - at least from a planetary imaging point of view, and I don't see why it would be different here) then it will be a good indicator of the results that can be expected. You can't really expect to get 2" FWHM if the forecast shows poor seeing [seeing forecast screenshot] and you are using a 4" scope (which itself has a ~1.12" FWHM contribution due to aperture). By the way, aperture FWHM can be calculated using the expression 2.355 * 0.42 * lambda / aperture (which gives radians if you use the same units for aperture and wavelength - you can put 550nm for wavelength), and you can combine the two by adding them in quadrature: sqrt(FWHM_seeing^2 + FWHM_aperture^2). Here we are omitting the contribution due to the mount and assuming a perfect aperture. A small calculation sketch follows below.
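The aperture FWHM expression and the quadrature combination, as a small sketch (550nm assumed):

```python
import math

ARCSEC_PER_RAD = 206264.8

def aperture_fwhm(aperture_mm: float, wavelength_nm: float = 550.0) -> float:
    """FWHM contribution of a perfect aperture: 2.355 * 0.42 * lambda / D."""
    radians = 2.355 * 0.42 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return radians * ARCSEC_PER_RAD

def combined_fwhm(seeing_arcsec: float, aperture_mm: float) -> float:
    """Seeing and aperture blur added in quadrature (mount contribution omitted)."""
    return math.sqrt(seeing_arcsec**2 + aperture_fwhm(aperture_mm)**2)

print(aperture_fwhm(100))       # 4" scope -> ~1.12" FWHM from aperture alone
print(combined_fwhm(2.0, 100))  # 2" seeing on a 4" scope -> ~2.29" total
```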
  11. Well, there seems to be some discrepancy in the calculators used.
     First - don't set 90% QE, as that is the peak QE for your sensor and it won't have such QE over the whole 400-700nm range. ~70% QE is a better approximation, and maybe use even less because you did not account for system losses. Depending on the scope, you can have 12 or more glass/air surfaces - an air spaced triplet will have 6, a reducer/flattener will have at least 4, and a UV/IR cut filter will have 2. There is also the camera cover window - that makes 14 air/glass surfaces, and even with the best coatings that transmit 99.5% of light per surface, that totals 93.22% for the air/glass surfaces alone (see the quick check below).
     Second - I prefer using x5 rather than x3.1622 (square root of 10) as the factor between read noise and LP noise. With x3.1622 you'll get ~5% noise increase over a read-noise-free sub, but with x5 you'll get only a 2% increase.
     Third - I ran the above calculator and compared the result to my calculator, and the above seems to give a x3.5 higher electron count for the same parameters. Not sure why that is - possibly a different source of mag 0 flux used?
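A quick check of the transmission figure quoted above:

```python
# 6 (triplet) + 4 (reducer/flattener) + 2 (UV/IR filter) + 2 (camera window)
surfaces = 6 + 4 + 2 + 2
per_surface = 0.995              # best-case coating transmission per surface
print(per_surface ** surfaces)   # 0.9322 -> ~93.22% through air/glass alone
```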
  12. I use ImageJ and its registration plugins - these can just transform (rotate / scale) or even export the transformation matrix.
  13. Are you sure about this? A back of the envelope calculation suggests that you'll be getting about 0.4e/px/s from the sky. In a 5s exposure that means about 2e of background signal, or about 1.41e of LP noise. Hardly swamping 1e of read noise. You need about 1 minute of exposure to truly swamp the read noise (5:1, LP noise to read noise) - see the numbers below.
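The numbers behind that, as a short sketch (0.4e/px/s sky rate and 1e read noise, as above):

```python
import math

sky_rate = 0.4     # e/px/s from sky
read_noise = 1.0   # e

print(math.sqrt(sky_rate * 5))           # 5s sub -> ~1.41e of LP noise

# Exposure needed for LP noise to reach 5x the read noise
print((5 * read_noise) ** 2 / sky_rate)  # 62.5s -> about a minute
```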
  14. I'd use a different approach - the more you can do locally, the better. A distributed computing approach - like a network of nodes. People allocate space on a local node for a target and put their files in the designated space, and anyone can at any time request a stack of, say, 2" FWHM from anyone else on the network. Local nodes then examine subs, select them, do a local stack and upload only one file - the stack. The requester then stacks the submitted data locally.
     At this point, I'd like to point out a few other important things. An algorithm is needed for effective stacking of incompatible data:
     - Some of the data will contain diffraction spikes and some won't. Diffraction spikes will not be oriented the same in each data set (rotation of reflector OTA in rings).
     - Sensor QE and filter response curves will be different. Even luminance will be different. This makes color management next to impossible unless each contributor submits color calibration data along with the regular data - or at least data that has been converted to a common color space like XYZ.
     - Vastly different SNR. A simple weighted approach will not give optimum results. There is no single SNR per image - every pixel has its own SNR. The stacking algorithm needs to weigh each pixel value accordingly when stacking mismatched SNR data (one possible scheme is sketched after this list).
     - Different sampling rates. This is by far the easiest thing to deal with - the requester specifies FOV and sampling rate along with a threshold FWHM and receives already plate solved and aligned data to be combined.
     The above is true for any collaboration project - regardless of whether it aims to provide high resolution data or not.
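One possible take on that per-pixel weighting - inverse-variance weighting, sketched here under the assumption that each contributor supplies a per-pixel variance map alongside their stack (an assumed convention, not an existing protocol):

```python
import numpy as np

def combine(stacks: list[np.ndarray], variances: list[np.ndarray]) -> np.ndarray:
    """Per-pixel inverse-variance weighted mean of aligned stacks."""
    weights = [1.0 / v for v in variances]   # low variance -> high weight
    numerator = sum(s * w for s, w in zip(stacks, weights))
    return numerator / sum(weights)
```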
  15. Yes. They appear later in the year each year because they move a small amount along their respective orbits while we make one full revolution around the Sun. In fact, each successive night they appear earlier - but they also move in the sky a bit. One effect is due to our orbit around the Sun and the other is due to their own orbits. You can download and install free software called Stellarium - planetarium software that displays the position of stars and constellations for a selected time and your location. It has a clock that you can adjust to see how the planets move over days, months or years.
  16. The problem with such short exposures is the amount of light captured. In order to properly stack data, we need at least a few stars in the image - the more stars there are, the better alignment we can get. Poor alignment reduces the sharpness of the image. This again calls for a larger telescope, as the SNR achieved is determined by aperture at a given working resolution (another reason why over sampling is bad - it reduces star SNR, as light from the star gets spread over too many pixels).
  17. The term magnification makes no sense when used in an imaging context. Magnification means that you magnify something - with an eyepiece and telescope, it is angles. The Moon is half a degree when viewed with the naked eye, and if you use x100 magnification it will look 50° wide (0.5° * 100). Take an image of that same Moon - how magnified is it? Well, that depends on what sort of screen you are viewing the image on and how far away you stand. Imaging is projection rather than magnification. You project an image and turn angular size into physical size on the sensor - and that in turn changes the physical size of the image on the sensor into a number of pixels. There are two measures of this projection. One is FOV - how much of the sky in angular units will fit onto the sensor. The other is how many pixels cover a given angular patch of the sky - arc seconds per pixel (see the snippet below).
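The standard expression for that second measure (the usual formula, not quoted in the post): arc seconds per pixel = 206.265 * pixel size in microns / focal length in mm.

```python
def arcsec_per_pixel(pixel_um: float, focal_length_mm: float) -> float:
    """Angular size of one pixel for a given focal length."""
    return 206.265 * pixel_um / focal_length_mm

print(arcsec_per_pixel(3.76, 650))   # example numbers -> ~1.19"/px
```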
  18. The way I see it, a couple of seconds of exposure is the threshold for lucky type DSO imaging. Just a couple of seconds is needed for the seeing to average out in a particular time frame to give a good indicator of seeing induced FWHM. Guiding works on time scales of a couple of seconds - so we can assume that most mounts will track that long without significant drift. Lucky DSO imaging is similar to planetary type lucky imaging only in name and in the fact that we rely on luck. With planetary type lucky imaging we hope that the frozen PSF will not be severely distorted. With the DSO lucky type we hope that the averaged seeing PSF will have a low enough FWHM. Seeing changes from minute to minute and constantly over the course of the evening. This means that, for example, it will be 1.6" FWHM on average, with moments at 1.2" and also moments at 2". We keep those subs that have FWHM below a certain threshold - but it is still an averaged seeing effect and far from planetary resolution. There are algorithms that can deal with different types of optical aberrations - but that is probably outside the scope of a project such as this, as it involves extensive modelling of the telescope system (measuring optical aberrations over the imaging field and then doing dynamic PSF deconvolution on subs).
  19. Please don't feel bad. Everything is fine. I'm looking forward to seeing the results of this collaboration.
  20. I was trying to be helpful and meant no offense - but sure, no need to discuss it further if you don't feel like it.
  21. You should be able to produce accurate star color from only two filters with some math. Stars emit light that is close to a black body at the same effective temperature. You only need good calibration data for your filters and you should be able to produce accurate star color from just two of them. This is similar to the B-V color index in astronomy that specifies a stellar "color index" (not to be confused with the actual color that we see) - a rough sketch of the idea follows below. Try wavelet sharpening on the linear image - that should help you split stars that are at the edge of a visual split.
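A hedged sketch of the two-filter idea: the ratio of fluxes through two calibrated filters pins down a blackbody temperature, from which star color follows. Filters are idealized here as single effective wavelengths - real calibration would integrate over the filter curves.

```python
import math

H, C, K = 6.626e-34, 3.0e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck(wl_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance, up to a constant factor."""
    return 1.0 / (wl_m**5 * (math.exp(H * C / (wl_m * K * temp_k)) - 1.0))

def temp_from_ratio(blue_over_green: float, wl_b=450e-9, wl_g=550e-9) -> float:
    """Invert the blue/green flux ratio by bisection (hotter = bluer)."""
    lo, hi = 2000.0, 30000.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if planck(wl_b, mid) / planck(wl_g, mid) < blue_over_green:
            lo = mid   # ratio too red -> temperature must be higher
        else:
            hi = mid
    return (lo + hi) / 2

print(temp_from_ratio(1.0))   # ~5800K in these bands - roughly a solar-type star
```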
  22. Indeed - few people manage it, but I suspect it comes down to a few things:
     1. Optics. For that kind of resolution - namely 2.5" FWHM - you need good optics with enough aperture. Using an 80mm ED doublet and expecting 2.5" FWHM stars in an OSC image is not really a reasonable thing to do. Similarly, a 6" Newtonian that is not properly collimated, or whose CC spacing is not perfectly dialed in (or a CC that produces spherical aberration), is not going to do it.
     2. Mount / guiding. Most people don't have enough guide resolution to precisely measure their total RMS because they are using small 50mm class guide scopes. These are fine for about 1-1.5" RMS guiding, but you need sub 1" RMS for 2.5" FWHM stars. The mount must be able to do it and the guide resolution must be able to measure it properly.
     3. Seeing, of course. It is only doable on a night of good seeing - like <1.5" FWHM seeing (two second exposure).
     Here is a sort of guideline: 150mm of aperture with 0.8" total RMS and 1.5" FWHM seeing will produce 2.5" FWHM stars if the optics are diffraction limited.
     As for sampling rate - well, you can choose any sampling rate you want, but if you aim for say 2.5" FWHM stars then you don't need to sample below 1.5"/px, as you'll be over sampled. Many people sample at higher resolutions than that - and that wastes SNR. There is a simple relationship between the two (approximate, as we approximate the PSF with a Gaussian of certain FWHM): sampling_rate = star_FWHM / 1.6. When someone is using say 0.86"/px - well, if their stars are not 1.376" FWHM, then they are over sampling (see the snippet below).
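The quoted relationship, in both directions:

```python
def coarsest_sampling(star_fwhm: float) -> float:
    """Coarsest sampling rate (arcsec/px) that still captures a given star FWHM."""
    return star_fwhm / 1.6

def expected_fwhm(arcsec_per_px: float) -> float:
    """Star FWHM that a given sampling rate is matched to."""
    return arcsec_per_px * 1.6

print(coarsest_sampling(2.5))   # 2.5" FWHM stars -> ~1.56"/px is enough
print(expected_fwhm(0.86))      # 0.86"/px implies 1.376" FWHM stars
```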
  23. So between about 3"/px and 1.5"/px as far as sampling rate goes? That should be easily attainable with a 6" scope, a good mount and decent seeing, without lucky imaging.
  24. Probably the best scope for planetary AP is a very slow classical Cassegrain. The StellaLyra one is rather fast. You really want something like F/15 or slower (I've seen F/24 models with a tiny secondary), but those are very expensive instruments (not because of the F/ratio, but simply because they are made by companies that produce expensive instruments). For example: https://cfftelescopes.eu/product/classic-cassegrain-250mm
     An 8" F/8 Newtonian with a secondary optimized for planetary/lunar will probably be a better instrument - but very cumbersome and not easy to mount and use. The above StellaLyra 8" is really the best choice / compromise in terms of cost, size and performance.