Everything posted by vlaiv

  1. Hm, I think I have a solution for your problem - at least, that is what I would do now. I have an ASI1600 which I got when it first came out (the V2 model rather than the later Pro), but if this sensor had been available back then, I'd have gone for it instead: the ASI294mm-pro. It has better QE than the ASI1600 by quite a large margin. It has a 2.3µm native pixel size, although the camera first appeared as a 4.63µm pixel camera - it was "automatically" binned 2x2 by design, but in a later model they decided to unlock the small pixel size. This is actually a very good thing, as you can now image at 2.3µm and then software bin either 2x2 or 3x3 - depending on how you judge the quality of your sky on a particular night. With ~700mm of FL that gives you 1.36"/px (still a bit on the high side, but better than 1.12"/px) and ~2"/px, which I think you'll use most of the time. The sensor is a bit larger than the ASI1600's - ~23mm vs ~21mm - and I think it does not suffer from the micro-lensing issue with narrowband filters. The ASI1600 does suffer from that on bright stars in some configurations - it depends on the actual setup and optical elements involved; sometimes it is stronger and sometimes weaker, and many people don't like it. Here it is on OIII data from my RC8" (note that this was taken at 1"/px and is oversampled, although it was a very good night):
  2. I probably did not express myself well, so I'll expand on your example. Say you are imaging with an ASI1600 and 700mm focal length. You'll be sampling at 1.12"/px, which is a very high resolution to work with. In fact, I would say that 1"/px is the upper limit of what amateurs can achieve. One can certainly image at higher sampling rates (resolutions - btw, the word "resolution" is used in so many contexts that it is sometimes confusing), and many people do, but they are oversampling at those rates: their effective pixel size is smaller than it needs to be to capture all the available detail, and they are settling for lower SNR because of it. I would advise against such a high sampling rate for a number of reasons: 1. The oversampling already mentioned - you'll need very steady skies and excellent mount performance to achieve detail that requires 1.12"/px (which equates to ~1.8" FWHM stars; seeing alone is often higher than that at 2" FWHM, not to mention guiding error and telescope PSF adding on top). 2. 4656px x 1.12" gives you about a degree and a half. There are quite a few interesting narrowband targets larger than the FOV provided at 700mm of focal length - but also quite a few that are just right. Maybe you'd be better served with 1.5"/px, for example? I'm rather happy with 2"/px for wider-field imaging. For 1.5"/px you'll need around 500mm of focal length. The alternative to going to a shorter focal length is to bin your pixels. That way you can get 1.5"/px from 1000mm of FL with a 2x2 bin. CMOS cameras bin in software - which means you can bin after you finish imaging and calibrate your subs. The downside is that the FOV is left unchanged - you'll get the small FOV of 1000mm. You could use a 700mm scope and bin 2x2 - that gives a better FOV than 1000mm (not as good as 500mm) at 2.24"/px, which is a well-suited sampling rate for NB imaging of larger targets.
The other downside is that you no longer produce 4656 x 3520 subs but 2328 x 1760 (if that matters to you - e.g. for printing). Hope this is helpful. Bottom line: aim for 1.5"/px - 2"/px, whether you opt for shorter FL or decide to bin your data.
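The figures above follow from one standard formula; here is a quick back-of-envelope sketch (the only assumption beyond the post is the ASI1600's 3.8µm pixel size, which matches the quoted 1.12"/px at 700mm):

```python
# Sampling rate in arcsec/px from pixel size and focal length:
#   arcsec/px = 206.265 * pixel_size_um / focal_length_mm
# Software binning NxN acts like an N-times-larger pixel.

def sampling_rate(pixel_um: float, focal_mm: float, bin_factor: int = 1) -> float:
    """Return sampling rate in arcsec/px for given pixel size, FL and bin."""
    return 206.265 * pixel_um * bin_factor / focal_mm

# ASI1600 (3.8 um pixels) at 700 mm FL:
print(round(sampling_rate(3.8, 700), 2))     # ~1.12 "/px
# Same camera binned 2x2 at 700 mm:
print(round(sampling_rate(3.8, 700, 2), 2))  # ~2.24 "/px
# Focal length needed for 1.5 "/px unbinned:
print(round(206.265 * 3.8 / 1.5))            # ~523 mm, i.e. "around 500mm"
```

The same function reproduces the ASI294 numbers from the earlier post (2.3µm binned 2x2 at 700mm gives ~1.36"/px).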
  3. What will be your working resolution, do you have any idea?
  4. Now that you have mentioned that ES refractor - there are a couple of fast refractors that could be of suitable quality for decent NB imaging, and the ES is one example. I've read good things about this one as well: https://www.teleskop-express.de/shop/product_info.php/info/p2229_TS-Optics-6--f-5-9-Refractor---2-5--R-P-Focuser---Ohara--Japan--Objective.html However, I'm not sure how well they would do on NB imaging. For visual they are quite good and do show some chromatic aberration - but they are also very sharp scopes according to reports. This second TS scope has an excellent focuser as well. A SW120 ED + matching flattener is going to be an excellent scope for NB imaging and a very good match for the ASI1600.
  5. Could not agree more - 0.4" RMS is just within reach. You need to get out and spend more time under the stars looking at those PHD2 graphs - it's the only way to improve one's guiding.
  6. Not sure what 1/30th wave refers to (surface or wavefront) - but your reflected wavefront is 1/26th wave PV, which makes the surface quality twice as good: with a flat mirror, the reflected wave will have twice the peak-to-valley difference of the surface, since light covers the surface error twice - once on the way "in" and once on the way "out". You can see that from the top-left diagram where it says PV 0.038; the reciprocal of that is ~26.32, so it is 1/26.32 wave PV. The paper does not say what wavelength of light was used for the measurement, so I would assume around 600nm. Surface roughness in nanometers depends on the wavelength of light, as one wave can be 600nm or 400nm depending on the light used to measure (and it is customary to include that in the report).
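The wavefront-versus-surface arithmetic above can be checked in two lines (a sketch of the standard convention, using only the report's PV 0.038 figure):

```python
# A flat mirror's surface error appears doubled on the reflected wavefront,
# so surface PV = wavefront PV / 2.

pv_wavefront = 0.038                 # PV of the reflected wavefront, in waves
print(round(1 / pv_wavefront, 2))    # ~26.32 -> "1/26.3 wave" wavefront
pv_surface = pv_wavefront / 2        # surface error is half the wavefront error
print(round(1 / pv_surface, 2))      # ~52.63 -> "1/52.6 wave" surface
```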
  7. I agree that this method is useful for dust shadows only and not particularly useful for reflections. It could be modified to be useful for reflections as well - but I suspect the needed data is not going to be available to most people, or they may get confused by it. If we knew the exact positions of all elements, we could do a brute-force search to find which two surfaces would produce the exact distance, but I don't think people will know the needed distances.
  8. Yes, I understand that, but suppose you have a sensor with a sensor cover window, chamber window, filter and field flattener. All of those are reflective surfaces and all of them are fairly close to each other. Which two produced the reflection?
  9. While you can shoot galaxies with something like the ASI120mc, you don't really want to do that. If you are considering the ASI120 because of budget constraints, look at a used Canon DSLR instead. When I was beginning I had a QHY5LIIc - which has the same sensor as the ASI120mc, I think - and here are a few images of galaxies that I took with it: M82 Cigar: C30 / NGC7331: M51 - this last one was taken with an upgraded camera, the ASI185mc - a bit larger sensor, more sensitive and with less noise. But none of these were taken with a small scope - they were taken with an 8" newtonian.
  10. How does it work on reflections? In order for unfocused light to reach the sensor, you need two reflections - one reflecting light back and another reflecting it again towards the sensor. The size of the reflection on the sensor can only give you the total additional light travel; it can't tell you how far away the reflecting surfaces are.
  11. Fair point. I would not normally use setting circles for a finder - but for educational purposes, why not.
  12. You'll probably find it difficult to use setting circles accurately on an 8" F/6 scope. The true field of view is simply too narrow. The maximum TFOV you can get out of this scope is 2.2 degrees, and you need a 2" eyepiece with a 46mm field stop to do so. I doubt that a phone-based compass is accurate to 1°, and setting circles are going to be difficult to use to that precision (you need a very large setting circle to make room for 1° marks on it: a whole circle has 360°, and if the dob base is 50cm in diameter, that is ~157cm in circumference, so a single degree will be only ~4mm wide - and you need to read that accurately in the dark). If you use a simpler eyepiece like a 1.25" 32mm plossl, you'll be limited to only ~1.3° TFOV. Miss by 0.65° and the target is no longer in the eyepiece.
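The numbers in that post come from two textbook formulas - TFOV from the eyepiece field stop, and marks-per-degree from the base circumference. A quick sketch (the ~27mm field stop for a 32mm Plossl is my assumption, typical for that design):

```python
import math

# TFOV (degrees) = field_stop / focal_length, converted from radians.
focal_mm = 8 * 25.4 * 6                    # 8" F/6 -> ~1219 mm focal length
tfov = math.degrees(46 / focal_mm)         # 2" EP with 46 mm field stop
print(round(tfov, 1))                      # ~2.2 degrees max TFOV

# Width of one degree mark on a 50 cm diameter Dob base:
circumference_cm = math.pi * 50
mm_per_degree = circumference_cm * 10 / 360
print(round(mm_per_degree, 1))             # ~4.4 mm per degree

# 1.25" 32 mm Plossl, ~27 mm field stop (assumed, typical value):
tfov_plossl = math.degrees(27 / focal_mm)
print(round(tfov_plossl, 1))               # ~1.3 degrees
```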
  13. astrometry.net is open source and can be installed locally https://astrometry.net/use.html
  14. Not sure if this is correct in general - it is only correct for stationary states, for example energy eigenstates (conservation of energy); but if you measure the position of an electron (and thus "prepare" it at a certain position), evolution in time will spread its position all over the place. As for the rest - again we agree on what you've said, but you don't address what I've asked. Let me try again, this time with the double slit experiment. Imagine the vacuum case: electrons are shot at a screen and detected on the screen. Let's examine an individual "hit" of an electron. Just before the hit, the electron's wavefunction is such that it could be detected on any of the fringes - but it will be detected on just one. I'm saying: what if the position on the screen depends on the QM state of the screen? The QM state of the electron is the same each time, so it can't be what determines where it will land. We can now say either: 1) the electron randomly interacts with the screen at one particular place, or 2) the electron deterministically interacts with the screen at one particular place, because that place was "the most suitable" for the combined evolution of the QM states of electron and screen. The QM state of the electron is rather simple and can be described fairly easily. The QM state of the screen is so complex that there is no practical way to record or know it. This complexity hides the determinism in the process, and the place selection seems random - but it is not. It is decoherence of the electron's QM state against the complex state of the screen, flowing deterministically.
  15. We agree on almost everything except the measurement part (the root of all evil in QM?). I'll try to explain what I mean and, if you know one, you'll provide an answer to the question I'll be posing. I don't know the answer myself - I just suspect it - so this is not a classical disagreement in the sense that I'm firmly standing behind what I'm saying. I agree that if we prepare an electron in an up+down state and measure it, it will randomly be detected as up 50% of the time and down 50% of the time. There is no arguing with that. What I want to know is: 1) is the process truly random, and 2) if the answer to 1) is no, does that contradict Bell's theorem? Now I shall proceed to explain how it could be deterministic. If we take a particle that is in a superposition of states and let it interact with another single particle, we can easily evolve their joint state in a deterministic way. It is only when a particle interacts with a complex system in thermal chaos that we see the rise of "random" - I'll use that expression but don't subscribe to the associated interpretation - wavefunction collapse. What if in reality the following happens: 1/√2 * |up> + 1/√2 * |down> evolves in a deterministic way, by entangling itself with the environment on so many levels that we simply can't calculate it, to produce one of two resulting states: a * |up> + b * |down> or c * |up> + d * |down>, where either a >> b or d >> c - which one depending on the configuration of the macroscopic instrument's particles at the time of measurement. In fact I believe decoherence is actually telling us that entanglement with the environment puts things out of phase so there can no longer be superposition - I'm just extending this to say: what if it also gives priority to a certain coefficient? This would explain both why we perceive things as random - too much information, so it is in essence pseudorandom - and why there is a notion of wavefunction collapse - because there will be a definite QM state that is "aligned" to an eigenstate almost 100%.
I believe the above does not contradict Bell's theorem, as it does not contain hidden variables - it is still all the same QM, just too complex to follow when macroscopic systems are involved, and hence it appears random.
  16. The outcome of measurements can only be given as probabilities, but that does not mean that a quantum state in a superposition of states - say an electron in a superposition of spin up and spin down - is not an exact quantum state. There is nothing probabilistic about that state. Being in a superposition of spin up and spin down does not mean "maybe it's spin up and maybe it's spin down", nor does it mean "the spin state is not determined / it is 'fuzzy' until we measure it". It means that the electron is in exactly that state - a linear combination of two eigenstates, a*|up> + b*|down>. It is due to decoherence that the above state evolves into either the up or the down state once a measurement is made. If I'm not mistaken, Bell's inequality does not imply that such evolution can't be deterministic. It implies that there is a measurable difference between the electron being in a*|up> + b*|down> and it being in either up or down with us simply not knowing which until we measure. I've always found the glove explanation with a pair of entangled electrons to be a good representation of all of this. "Classical" thinking would be: after we separate two electrons, one of them is carrying the left glove and the other the right glove, but we simply can't tell which has which until we do the actual measurement - and then, lo and behold, we get anticorrelated measurements on them, and it does not really matter which measurement is made first; in all reference frames it will make sense. This is the hidden-variable part - the electron's glove handedness is hidden from us until we measure it (no way of determining it apart from directly measuring it). Bell's inequality just says: no, the actual state of both electrons is spin up and spin down at the same time, and it will not "materialize" into being either up or down until we do the measurement - and there is an actual numerical difference between the predictions of these two lines of reasoning.
  17. Why do you think that Bell inequalities being violated rules this out? I think that Bell inequalities are compatible with a deterministic universe. They just state that simple logic does not work for QM phenomena. To us, the following statements are nonsense: "There is a ball in the box and at the same time there are no balls in the box." Or this one: "I'm moving forward with respect to you and at the same time I'm moving backward with respect to you." This does not mean indeterminacy - it is not that what is happening is undetermined and both things are possibilities - it means that it is precisely determined what is happening, although incomprehensible to us: how can something move both forward and backward at the same time, or be in the box and not in the box at the same time?
  18. Get at least an ED doublet for NB imaging. There are a few reasons why you would want to avoid achromat scopes for this: 1. There is not much of a price difference between a good achromat with all the extras and an ED doublet. If you try to image with, say, an F/5 wide-field achromat, the first thing you'll notice is the poor focuser. That will cause tilt issues and poor corner stars. If you factor in the price of a focuser upgrade, you'll see how quickly you reach ED doublet prices. 2. Although you are right about the basic principle of operation for narrowband imaging - you could even use a well-figured singlet lens (btw, a proper alignment routine has no trouble dealing with scaling from different FLs) - the problem is that the optical performance of mass-produced doublet scopes is not that good. They are produced to be fast wide-field visual scopes. There is a significant amount of spherochromatism and often astigmatism. I owned an ST102 - the Skywatcher 102mm F/5 wide-field scope - and it showed quite a bit of astigmatism in the red part of the spectrum (eggy red component). In the overall price of the setup, the scope really does not take up much of the budget, so getting an ED doublet with a good reputation simply makes more sense. Say you want a 100mm scope for imaging. You have a couple of options: https://www.firstlightoptics.com/startravel/skywatcher-startravel-102t-ota.html for about £200 (currently out of stock), versus something like this: https://www.altairastro.com/starwave-ascent-102ed-f7-refractor-telescope-geared-focuser-468-p.asp (an ED doublet with FPL-51 glass that shows some color even in visual) at £495. The second scope has better fit and finish, a rather decent dual-speed 2.5" R&P focuser, a retractable dew shield for ease of transport, etc., and costs only £300 more - which will be less than 10% of the overall budget (scope, mount, mono camera, filter wheel and NB filters).
Fitting a dual-speed focuser to the first scope will set you back £170 (https://www.firstlightoptics.com/skywatcher-focusers/dual-speed-2-crayford-focuser-for-sky-watcher-refractors.html), which makes the price difference even smaller.
  19. Depends on the sensor you are planning on using. For a smaller sensor, it actually makes sense to use a focal reducer on the RC8. Most currently available cameras have small enough pixels that you'll have to bin to get to a good sampling rate with such a long-FL scope, so a reducer will not help you there - but it will help you make the most of the telescope. The RC8 has about 30mm of usable field. I think it is a bit less - maybe APS-C size, so say about 28mm - but in either case it is both big and small, depending on how you look at it. It is much larger than 4/3" or smaller sensors, but not big enough to cover a full-frame sensor. Putting a camera like the ASI183, which has a 16mm diagonal, on it is a bit of a waste of the telescope's good field. Even the ASI1600 / ASI294, with diagonals of about 22-23mm, are smaller than the usable field of the RC8. In all of these cases it makes sense to add a focal reducer to exploit the whole usable field - and not only that: a camera with a smaller sensor plus a focal reducer costs less than the alternative, a dedicated astro camera with a larger sensor. Take for example the ASI294mc, which has a 23.1mm diagonal and costs £980, and an expensive reducer like the SharpStar 2.5" x0.8 reducer for RC scopes at £400 - you'll cover the same amount of sky (23.1 / 0.8 = 28.88mm) as the APS-C sized sensor of an ASI2600MC that costs £1900, or something older like the ASI071 at £1440. Not to mention that you can get a CCDT67 or CCD47 to do the same job for half the price of the SharpStar reducer.
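The coverage/cost comparison above is simple arithmetic; a sketch with the prices and diagonals as quoted in the post:

```python
# A x0.8 reducer scales the sky coverage of a sensor by 1/0.8:
# a 23.1 mm diagonal then covers the field of a 23.1 / 0.8 = 28.9 mm one.

sensor_diag_mm = 23.1        # ASI294mc diagonal, as quoted
reducer = 0.8                # SharpStar 2.5" x0.8 reducer
effective_diag = sensor_diag_mm / reducer
print(round(effective_diag, 2))   # ~28.88 mm - close to APS-C (~28 mm)

combo_cost = 980 + 400       # camera + reducer, GBP, as quoted
print(combo_cost)            # 1380 - versus 1900 (ASI2600MC) or 1440 (ASI071)
```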
  20. It's not a term generally used for this purpose - but it is a term from signal processing in general. It refers to voltage levels and stands for "direct current". If a signal is encoded in voltage - say a sound signal in analog circuits - then the sound itself is not changed if we change the reference point or add a direct-current voltage to it (as long as what we add is not AC - alternating current - or some other varying voltage). Sometimes it is used in the FT of an image: there is a single bright dot in the image, and it represents the infinite-wavelength component - the DC component of the FT. For some reason I adopted that term to mean "offset", but since offset has the exact meaning of a constant value added to each pixel by the sensor, and I'm talking about any offset in the image - any constant value added by, say, LP or, in this case, an unknown cause - I use the term DC offset (in my head that DC means: applied to each pixel in the image). If you want to try this in PI - after you create the master flat (stack the flats and subtract the master flat dark), use PixelMath to add 25000 ADU to each pixel in the image. As for exposure length - I guess that depends on the sensor? I happily use millisecond flat exposures on my ASI1600 and don't have issues with those (my lum flats, for example, are just a couple of milliseconds long due to the strength of my flat panel).
  21. I advocate ASCOM drivers for two reasons: 1. I noticed a long time ago that darks taken with the ASCOM driver and ones taken with the native driver are not compatible. I had issues with calibrating subs taken with native drivers, and at the time there was no way to set offset with native drivers. I was using SharpCap at the time. 2. ASCOM drivers are specifically designed with long exposure in mind, and I concluded that they are the safest bet. In any case - if you prefer native drivers or want to continue using the auto flats aid, by all means experiment with settings to find which ones work for you.
  22. Good point, forgot about visual - yep, a "do-it-almost-all" scope then.
  23. Actually, there is a simple way to turn a 6" F/4 scope into a 2" F/4 "lens" for wide field. In fact, any scope can be turned into a shorter-FL scope of the same speed, where the shorter focal lengths are "harmonics" of the original FL - 1/2, 1/3, 1/4, etc. - with this trick. The recipe is as follows: - shoot a mosaic with the number of panels per side equal to the FL reduction factor, i.e. for 1/3 of the FL, make a 3x3 mosaic - spend on each panel the total time divided by the reduction factor squared - in the example case, 1/9th of the time per panel - bin each panel by the reduction factor - in our example, 3x3. You will get the same result as if using a 1/3 focal length instrument of the same speed. Here is the reasoning: you spend the same total amount of time as you would with the smaller instrument. You use the same camera (same pixel size), so the per-pixel speed of F/4 versus F/4 is the same. Each panel will have lower SNR by a factor of N because we spent 1/N^2 of the time on it, but if we bin NxN we increase SNR by a factor of N - so each resulting panel has exactly the same SNR as with the other instrument. We reduced the FL by a factor of N, but we binned pixels by a factor of N - so the sampling rate / resolution stays the same as with the smaller instrument (we get the same number of pixels in the final image). The only difference is the space lost to overlap. In other words - with some skill and software support (plate solving, mosaic making, stacking that supports panel stitching) a C6 with x0.4 can also do wide-field shots.
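The SNR bookkeeping in that recipe can be checked numerically. This sketch models SNR as proportional to the square root of integration time, and takes NxN binning to multiply SNR by N (the approximation used in the post); all numbers are arbitrary illustration values:

```python
import math

# N x N mosaic, 1/N^2 of the time per panel, then N x N software binning,
# compared against a 1/N focal-length scope of the same f-ratio.

N = 3                  # FL reduction factor -> 3x3 mosaic
total_time = 9.0       # hours available (arbitrary)
snr_unit = 10.0        # SNR one frame reaches in 'total_time' (arbitrary units)

# Short-FL scope: all time on one frame
snr_short_fl = snr_unit * math.sqrt(total_time / total_time)

# Mosaic: each panel gets total_time / N^2 -> SNR drops by factor N,
# then NxN binning multiplies SNR back up by N.
snr_panel = snr_unit * math.sqrt((total_time / N**2) / total_time)
snr_binned = snr_panel * N

print(snr_short_fl, snr_binned)   # both ~10 - the mosaic matches the short-FL scope
assert math.isclose(snr_short_fl, snr_binned)
```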
  24. I'm not sure I can help with that. I don't remember what sort of bearings I used - I just asked for the best for the given application and got some SKF ones that I used to replace the existing ones. I'm happy with them so far.
  25. I'm really looking forward to seeing what sort of results you get from it with the ASI183. I think it could really be the solution to the old problem - "a scope that can do it all". I mean a 5" or 6" Cat and this reducer. Such a scope would be good for visual DSO and planetary, as well as imaging DSO and planetary.