Everything posted by vlaiv

  1. Seems that many people rely on this tool, and given that it's hosted / maintained by @FLO I think it would be wise to revisit the validity of the information it presents. I've been involved several times in discussions where people refer to the above tool and either accept very flawed advice offered by it, or question an otherwise sound setup because it is not in the "green zone" according to the tool. There are also several statements made on that page that are simply false and should be corrected.
  2. Thank you. I have a slightly different view on this: if in practice you are not seeing the results predicted by theory, then:
     - you are applying the theory in the wrong way
     - you are applying the theory outside its domain of validity
     - or, the least likely scenario - the theory is wrong and should be corrected or replaced (how often does that happen?)
  3. Because of sensor size. Sensor size is equal to speed (if properly handled and paired with the right optics). The 8L is an APS-C sized sensor - it has 442.764mm² of surface, while the ASI294 is a 4/3 sensor and has only 248.3mm² - that is only, what, 60% or so of the size. If you have an idea of what sort of FOV you want to image - the 8L can be paired with a bigger scope (and hence a bigger aperture that lets in more light) for the same FOV.
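A quick back-of-the-envelope check of the figures above (areas as quoted in the post; the "bigger sensor = faster" reasoning assumes the same FOV and the same f-ratio):

```python
# Quick check of the sensor-area argument (areas taken from the post, in mm^2).
area_aps_c = 442.764       # "8L", APS-C sized sensor
area_four_thirds = 248.3   # ASI294, 4/3 sensor

print(f"4/3 sensor has {area_four_thirds / area_aps_c:.0%} of the APS-C area")
# For the same FOV, focal length (and aperture at a fixed f-ratio) scales with
# sensor size, so the total light collected scales with sensor area.
```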
  4. Depends what you mean by optimal. That is a very decent combination that will give you good results. It can be considered a low resolution / wide field setup that is very suitable for a range of targets - but not so much for others. Here is an example - if you attempt to image, say, the M51 galaxy - it will be very small in the FOV, and if you zoom to 100% on the galaxy - it will look something like this: It will be small. Similarly - this is what M13 will look like when zoomed in to 100% - in the FOV it will be equally small. But objects like M31 will be nicely framed, and so will other larger objects - mostly nebulae.
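For framing questions like this, a generic FOV estimate helps. The sketch below uses the usual small-angle formula; the example sensor size and focal length are assumptions purely for illustration, not the OP's actual setup:

```python
# Generic field-of-view estimate in arcminutes.
def fov_arcmin(sensor_mm, focal_length_mm):
    return 3438.0 * sensor_mm / focal_length_mm   # 3438 ~ arcminutes per radian

# Assumed example: APS-C sized sensor (23.5 x 15.7 mm) at 400 mm focal length.
print(f"{fov_arcmin(23.5, 400):.0f}' x {fov_arcmin(15.7, 400):.0f}'")
# M51 is only ~11' across while M31 spans ~3 degrees, hence one looks tiny and
# the other fills the frame.
```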
  5. Hold on. I just had an idea - what about a massive flywheel and rail gun? With a massive flywheel we can build up and store a great amount of energy slowly, and (this is a wild guess) with superconducting coils we can extract it fairly quickly. No rotation needed, smaller volume to produce vacuum in, fewer moving parts, but essentially the same concept. Dig a few hundred meters of launch tunnel into the ground, use magnets for some crazy acceleration thing, and there you go.
  6. They bin as well as CCDs do. The difference is that with CCDs it happens before readout and with CMOS it happens after readout, in software. The only difference between the two is when read noise happens and how much of it there will be (which you should consider in the light of the above argument about sub duration). With CCDs - when you bin - you still get the same read noise - one "dose" of it per binned pixel. With CMOS - when you bin - each pixel has already received one "dose" of read noise before binning, and when you bin them that noise adds up, which is like having a bigger pixel but also a higher read noise camera. There is a simple "law" to calculate the resulting read noise. Say you have a CCD with 8e of read noise and 7µm pixels and you bin that camera x2 - it will be like having a 14µm pixel camera with 8e of read noise. If you have a CMOS with, say, 2e of read noise and 4µm pixels and you bin that x2 - it will be like having 8µm pixels but with 4e of read noise this time. With CMOS sensors, equivalent read noise increases by the bin factor - so if you bin 2x2 you'll get double the read noise, and if you bin 3x3 you'll get triple the read noise. What does that mean in terms of sub duration? Well, nothing. If you set the sub duration given the read noise of unbinned data, then it will be sufficient for binned data as well, so you don't have to change anything and you can just bin your data in software.
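A minimal sketch of that read-noise scaling, assuming simple summed software binning of equal, uncorrelated read noise per pixel:

```python
import math

# Software (CMOS) binning sums N x N pixels that each already carry read noise,
# so the noises add in quadrature.
def software_binned_read_noise(read_noise_e, bin_factor):
    n_pixels = bin_factor ** 2                  # e.g. 4 pixels for 2x2 binning
    return read_noise_e * math.sqrt(n_pixels)   # quadrature sum = read_noise * bin_factor

print(software_binned_read_noise(2.0, 2))   # 2e CMOS binned 2x2 -> 4e equivalent
print(software_binned_read_noise(2.0, 3))   # binned 3x3 -> 6e equivalent
# A CCD binned in hardware keeps its single "dose" of read noise (e.g. 8e) per
# binned super-pixel, regardless of the bin factor.
```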
  7. Ok, so here is the thing with shorter exposures and all of that. Ideal sub length depends on the ratio of read noise to some other noise source in the system. Usually that is LP noise, since we have cooled cameras and thermal noise is very small. Targets are faint and target shot noise is also small. LP is by far the biggest source of noise with most setups. For a given setup, the difference between CCD and CMOS is that read noise - and that is smaller by a factor of about x4-x5 between the two. CCDs were usually around the 8e mark for read noise (some more, some less - like Sony sensors that could go down to 4-5e of read noise), while CMOS sensors often have 1.5-2e of read noise. This really means that CMOS sensors can utilize x4-x5 shorter exposures than CCDs (all other things being equal). People used to image with 5-10 minute exposures with CCDs - so a direct comparison would be 1-2 minutes. That part is right. However, not all things are equal, and setups are different:
     - If you image in darker skies - you'll be better off with longer exposures, since your LP levels are lower and LP noise is lower
     - If you image at higher resolution - you'll be better off with longer exposures, since that LP is spread over more pixels and each pixel gets less signal, so the associated noise is also lower
     This holds for both CCD and CMOS equally and has nothing to do with the fact that CCDs have x4-x5 higher read noise than CMOS. If you need to image for 5 minutes with your setup and a CMOS (with say 1.5e of read noise), then you'd need to image for 25 minutes per sub with a CCD given the same circumstances (resolution and LP levels).
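A small sketch of the read noise vs LP noise trade-off described above; the sky rate used is an arbitrary assumption for illustration, so substitute your own measured value:

```python
import math

sky_rate = 0.5   # assumed light pollution signal, e-/pixel/second (illustrative only)

def read_noise_penalty(sub_seconds, read_noise_e):
    lp_noise = math.sqrt(sky_rate * sub_seconds)            # sky shot noise per sub
    total = math.sqrt(lp_noise ** 2 + read_noise_e ** 2)    # noises add in quadrature
    return 100 * (total / lp_noise - 1)                     # % noise added by read noise

for label, rn in (("CCD 8e", 8.0), ("CMOS 1.5e", 1.5)):
    for sub in (60, 300, 600, 1500):
        print(f"{label:9s} {sub:5d}s sub: read noise adds {read_noise_penalty(sub, rn):5.1f}%")
```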
  8. I do understand the question and here is a rather simple answer. If you take 4 images and stack them - it improves SNR, right? It does so regardless of whether you add or average them. The difference between adding and averaging is just a division by four, as adding them is a+b+c+d and averaging them is (a+b+c+d)/4. SNR does not change when you divide the whole thing by a constant - so if we have S/N then it is the same as (S/4) / (N/4), as those fours cancel each other out. Adding them is exactly the same thing as taking one exposure that is x4 as long (think of pouring water into a bucket - it does not matter if you pour into one bucket for 4 minutes, or take 4 buckets, pour for 1 minute each and then add all that water up). The same thing happens with adjacent pixels (they have to be adjacent so that the signal is almost the same). Binning is just adding pixel values up - and adding 4 pixel values is like either integrating for x4 longer with the same pixel size, or having a pixel that is x4 larger and integrating the same. In either case, SNR for that pixel goes up the same way as with stacking, and SNR per unit time really represents "sensitivity". Here is another way to look at it - more signal, better SNR. Adding 4 pixel values puts all that signal into a "single bucket" - more signal, better SNR (or simply - pixels are more sensitive as they gathered more signal in the same amount of time).
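A quick Monte-Carlo check of that SNR argument (synthetic numbers, nothing camera specific):

```python
import numpy as np

# Averaging 4 noisy measurements (4 subs, or 4 adjacent pixels with nearly the
# same signal) improves SNR by a factor of ~2; summing gives exactly the same SNR.
rng = np.random.default_rng(0)
signal, noise_sigma, n = 100.0, 20.0, 100_000

single = signal + rng.normal(0, noise_sigma, n)
avg_of_4 = signal + rng.normal(0, noise_sigma, (4, n)).mean(axis=0)

print("SNR single:       ", signal / single.std())
print("SNR of 4 averaged:", signal / avg_of_4.std())   # ~2x better
```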
  9. That is a tough one for me to answer. Ideally you want to bin x2 - but with the OSC consideration. Not sure what software does that, so you can do the following, which will be sort of equivalent: bin your linear stack x4 and then enlarge it by upsampling x2. That is the closest operation to binning non-debayered data in a special way and then debayering to get the color.
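A rough numpy sketch of the bin-x4-then-upsample-x2 workaround; the helper functions are made up for the example, and any imaging software with binning and resampling would do the same job:

```python
import numpy as np

def bin_image(img, factor):
    """Average-bin a 2D image by an integer factor (cropping to a multiple first)."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample_nearest(img, factor):
    """Simple nearest-neighbour enlargement."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

stack = np.random.rand(1024, 1024).astype(np.float32)   # stand-in for a linear stack
result = upsample_nearest(bin_image(stack, 4), 2)        # bin x4, then enlarge x2
print(result.shape)                                      # half of the original size
```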
  10. Large backlash can also act as elasticity in the setup, and balance that is slightly off can compensate for that by keeping the gears "pressed" to one side so that they don't swing side to side (graph all over the place). Perfect balance is only good if you have minimal backlash. That really should not happen. You should need considerable force to move the scope with the clutches engaged (if it can be moved at all). The mount probably needs a bit of mechanical tuning - both for backlash and in general.
  11. Would you be happy with 10 x 5 minute subs from your old camera if it produced an image like the one above? Using x2 smaller pixels - 3.75µm vs 7.8µm (actually by surface that is x4.32, so a bit more than x4) on the same scope is effectively like shooting for 1/4 of the time. If you want to match the performance of the CCD camera (or surpass it) - then use the same resolution you did before - bin your pixels to match the size of the old ones!
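The pixel-area arithmetic, for anyone who wants to check it:

```python
# Pixel sizes as quoted in the post.
old_px, new_px = 7.8, 3.75   # µm
area_ratio = (old_px / new_px) ** 2
print(f"Each old pixel collects ~{area_ratio:.2f}x the light of a new one")  # ~4.33
# So, unbinned on the same scope, the new camera is roughly like exposing for
# 1/4 of the time per pixel - which is why binning back to the old scale helps.
```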
  12. I think that we see things a bit differently. This post then goes on to explain: All of which is right - the SA and Sky Guider Pro are indeed star trackers and not mounts. For 500 euros you really can't get even an EQ3-2 with goto. Most people associate an AP mount with goto capability as it simplifies things so much. Out of all the other posts in that thread - only one person insisted on the HEQ5 due to their experience - which they emphasized: I added the bold part. All the others put significant effort into presenting possible alternatives to the OP.
  13. I think this could be a mechanical issue rather than the mount. If it takes a long time to settle - it is a bit like needing a long time to dampen vibrations after touching the focuser. I'd look at the scope-to-mount connection and how much weight is positioned at the far end of the OTA. How big is your dovetail and where are your rings? What is your clamp like? Maybe look into upgrading the clamp to a longer one? What is the weight of your camera and other bits, and do you have a guide scope attached to the finder shoe or something like that?
  14. I know what you are saying, but I still maintain that this is really not the case with the HEQ5 and imaging. I react similarly when people say things like "it is the only solution" when it is not, but I think - or at least that is how I remember things - that the HEQ5 was always recommended in a sensible way: as being the minimum given people's wants / needs / expectations. For this reason, I'd like to see an example of a post where the HEQ5 was blindly recommended as the sole option for a beginner. I'm not expecting anyone to go looking for it, but I was hoping that maybe the OP could provide one - or at least some of the people that often encounter such posts.
  15. Well, let's compare the two in that regard. Your old CCD: peak QE seems to be around 55%. Your new CMOS: this being a relative QE graph, with the peak estimated at 75%. (75 - 55) / 55 = 36.4% peak performance improvement. The Ha line is at about 30% absolute QE for the CCD, while it is at 0.9 * 0.75 = 0.675, i.e. 67.5%, with the CMOS. (67.5 - 30) / 30 = 125% Ha sensitivity improvement. I'd say that the above statement is correct - it is a much more sensitive camera as far as QE goes (the mono version probably more so).
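The same arithmetic in a couple of lines (QE values read off the graphs, so approximate):

```python
peak_ccd, peak_cmos = 0.55, 0.75
ha_ccd = 0.30
ha_cmos = 0.90 * peak_cmos   # ~90% relative QE at Ha on the CMOS graph -> 0.675

print(f"Peak QE improvement: {(peak_cmos - peak_ccd) / peak_ccd:.1%}")  # ~36.4%
print(f"Ha QE improvement:   {(ha_cmos - ha_ccd) / ha_ccd:.1%}")        # ~125%
```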
  16. Out of interest - can we link here to a post or two in a topic where it was said that the HEQ5 is the only way to go and that nothing else works for a beginner?
  17. Something is not right with this mount / setup. Did you forget to tighten the DEC clutch? It almost looks like the mount is not responding to DEC corrections. DEC should be fairly stationary on its own if everything is set up right. Although it looks like backlash - there is simply too much of it to be real backlash. I mean - look at this: that is a 15" P2P DEC excursion over 4 minutes that is not corrected properly despite a bunch of pulses. Cable snag? Strong wind? Very poor balance? Not even sure what to recommend as a first step. I'd say - check the mechanics of the mount and scope. Check balance, check if anything is loose to the touch - including the guide scope (maybe this is just massive differential flexure in DEC?) and the camera-to-guide-scope connection.
  18. @wuthton Are those "/px figures that people use, or what people can achieve? I've yet to see anyone actually achieve 1"/px; even 1.5"/px is going to be very challenging and needs at least 6-8" of aperture. I think that you need to "shift" your list one place:
     1"/px - almost unachievable on anything but a premium mount
     1.5"/px - 2.5"/px - HEQ5 / EQ6 / CEM70 class mount
     2.5"/px - 3.5"/px - EQ5
     and so on...
     When you get to lenses - you are no longer limited by the mount (at least from this list) - you are limited by the lens itself. Lenses are simply not diffraction limited and have much larger spot diagrams (just look at the MTF of a lens - it is quoted for 10 to 30 lp/mm, which is equivalent to 50-17µm pixel size, and modern cameras have about x5 smaller pixels than that upper limit).
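For reference, the image scale behind those "/px figures comes from the usual formula; the example pixel size and focal length below are assumptions for illustration:

```python
# Image scale in arcseconds per pixel.
def image_scale_arcsec_px(pixel_um, focal_length_mm):
    return 206.265 * pixel_um / focal_length_mm

# Assumed example: 3.75µm pixels at 800mm focal length.
print(f'{image_scale_arcsec_px(3.75, 800):.2f} "/px')   # ~0.97 "/px
```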
  19. Well, the interesting thing about mounts and tracking is that tracking error in pixels decreases with declination. It takes Polaris 24h to travel only about 4° of arc along its small circle (it is about 38.5 arc minutes from the NCP and the circumference of that circle is 2*r*pi, so about 240 arc minutes, or 4°). If you image at 8"/px, that is about 1800px of motion in 24h, or roughly 75px per hour, or 1px in about 48 seconds. Even if your mount does not move at all - at that declination and that resolution stars will only trail a few pixels in a 3 minute exposure. In order to really test your mount - you need to track at the meridian and then see how well it behaves (the same goes for guiding).
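The drift estimate, worked through (values as in the post):

```python
import math

# Drift of Polaris for a completely untracked mount.
polaris_offset_arcmin = 38.5     # distance from the NCP
scale_arcsec_px = 8.0            # image scale

circle_arcsec = 2 * math.pi * polaris_offset_arcmin * 60    # ~14,500" per day
px_per_day = circle_arcsec / scale_arcsec_px
print(f"{px_per_day:.0f} px/day, {px_per_day / 24:.0f} px/hour, "
      f"1 px every {86400 / px_per_day:.0f} s")
```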
  20. I thought that a rail gun was the better option for shooting things into space?
  21. Planetary imaging and DSO imaging are quite different things. If you want to first get into planetary imaging - well, you can do that on a very small budget. I've taken this image: with an extremely limited budget. The scope in question was a 130/900 Newtonian on an EQ2 mount that was tracking with a simple DC motor (not a stepper). The camera used was a Logitech C270 web camera - modified by removing the front lens and adding a piece of PVC pipe to act as a nosepiece. I agree with the above suggestion - first, give us an idea of what sort of equipment you already have and what your budget is, so we can advise accordingly.
  22. I think that slow climbing is also better than a fast one? Air resistance grows like a power of speed or something (if I'm not mistaken). From wiki:
  23. Neat idea! I can see it developing into an Olympic sport!
  24. Since you seem to be into languages - here is something that might help you grasp it a bit better. You know how some phrases can't really be translated to other languages and can only be understood "in the spirit of the language"? Physics is a bit like that - trying to translate something into common sense just fails, because they are two different languages. Some physics things you can only understand if you don't try to put them into common sense speak and just look at them in the "science / physics" language. Here is an example - you seem to have a problem with a photon having momentum - that is because you associate momentum with moving mass, as taught in Newtonian mechanics and "easily associated with, say, hitting a football and that same football smacking someone on the nose" kind of reasoning (age appropriate). You have a concept of mass being something solid - something that you can touch. The thing is - nothing really touches there. When you sit in your chair - none of your atoms is really "touching" the chair's atoms. It is the EM force that repels you, and photons are the carriers of that force and exchange momentum between you and the chair - and that keeps you from falling through the chair (if you look at the sizes of nuclei and the distances involved - it's mostly empty space and we should really just drop straight down through the chair because of gravity). All along - it has been photons. We just don't sense it that way, so our common sense is based on this false image our senses present to us. A totally different language than the language of what is really happening.
  25. On the contrary - I'm saying you should do all your processing in 32 bits per channel, floating point. The only time files "are allowed" to be in 16-bit fixed point / integer format is at the time of capture. As soon as you start calibrating them (as the first step of the processing workflow) - I advocate the use of 32-bit floating point precision, so one does not needlessly increase noise with rounding due to the use of a fixed point format and a low bit count. Actually - it is not. It is completely the same. Well - it depends on how one looks at imaging and processing. Some people learn by example - like a skill, while others have a method of learning by association and deeper understanding. The first approach is much easier / comes naturally - that is how we learn to ride a bike and similar things. The problem with that approach is that if circumstances change - things that we have learned seem not to work any more. For people that learn with the second method (or rather - when a person learns something with the second method - I don't think there is a clear distinction and we all learn some things with either method) - nothing really changed except the "problem setup / constraints". Solving the problem still follows the same rules, although the parameters are different. In that sense - for me, imaging is the same - calibration files do the same thing - remove unwanted signal. Stacking does the same thing. It is the same choice of exposure length depending on read noise and other noise sources. Gain is there to represent the conversion factor between electrons and ADUs, and so on... Once an image is stacked - can we really tell if it came from a CCD or a CMOS? It is just data - it can be handled properly or improperly. If it is properly handled, then again - processing can be done the same way regardless of the source. It is the "form of the data" that dictates how it will be processed and not where the data came from. You should. The fact that something is free does not mean it is "cheap" or less capable. You can always spend some money on it if you find it useful - there is almost always a donate button with such open source projects. I guess the difference between free and paid software in that regard is - with paid software you pay the amount that the developers think the software is worth to you, and with free software you can pay the amount you think the software is worth to you.
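A tiny illustration of the rounding argument, using a synthetic flat-field division as the calibration step (all numbers are made up for the example):

```python
import numpy as np

# The same calibration step done in 16-bit integers vs 32-bit float.
rng = np.random.default_rng(0)
raw = rng.normal(1000, 30, 100_000)   # stand-in sub, in ADU
flat = 0.93                           # stand-in flat correction factor

as_float32 = (raw / flat).astype(np.float32)            # keeps fractional values
as_uint16 = np.round(raw / flat).astype(np.uint16)      # quantised back to integers

extra = as_uint16.astype(np.float64) - as_float32.astype(np.float64)
print("extra RMS noise from 16-bit rounding:", extra.std())   # ~0.29 ADU per step
```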