Everything posted by vlaiv

  1. Skeptical or dismissive of what exactly? I don't think that anyone here is either skeptical or dismissive of the fact that people see things they can't explain at that particular moment (or, for that matter, things that escape explanation up until the present day). Yes, such things happen - you see something that you can't explain. Some lights, something in the sky. But who has seen an actual alien? I'm yet to hear of a credible source that claims to have seen an actual alien.
  2. Yes, I'd say it is indeed mount related. Mount errors broadly fall into two categories - those that affect RA axis motion and those that affect DEC axis motion. If your mount is solid, the telescope / mount connection is solid and there are no externally induced vibrations (like walking around the telescope while it sits on a flexible surface such as wood flooring - or perhaps heavy traffic nearby or whatever), then DEC error will be more or less linear for the duration of the frame - a nice drift in a straight line. RA error, however, will be "circular", or rather very periodic. This periodic error need not be, and most often is not, a perfect circle - sometimes the RA axis lags and sometimes it leads the true position, and it does this in a very "wavy" way. The combination of the two thus results in semi-circular features (RA moves back and forth while DEC moves in one direction only). Examine this animation made out of single exposures on an HEQ5 mount: motion from right to left is DEC motion, while the up/down motion is due to RA periodic error. You can even see how stars get distorted in some frames, because the speed at which the mount deviates from the true position is enough to accumulate over the course of a single frame. By the way, if you have a set of images, you can do the same to confirm the motion of the mount - just inspect the images one after another without doing the alignment you would normally do for stacking.
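The unaligned-frame inspection described above can also be done numerically. A minimal sketch (assuming the frames are already loaded as 2D numpy arrays; the function and parameter names are my own, not from any particular package): track the brightest star's centroid from frame to frame - plotted, the DEC error shows up as a straight drift and the RA periodic error as a back-and-forth wobble.

```python
import numpy as np

def centroid_track(frames, box=15):
    """Track the brightest star's centroid across unaligned frames.

    frames: list of 2D numpy arrays (single exposures, not aligned).
    Returns an (N, 2) array of (y, x) centroids; plotting these shows
    the mount's drift pattern (linear DEC drift + periodic RA wobble).
    """
    positions = []
    # locate the star once in the first frame, then follow it
    y0, x0 = np.unravel_index(np.argmax(frames[0]), frames[0].shape)
    for frame in frames:
        # cut a small box around the last known position
        ys = slice(max(0, y0 - box), y0 + box + 1)
        xs = slice(max(0, x0 - box), x0 + box + 1)
        patch = frame[ys, xs].astype(float)
        patch -= np.median(patch)          # crude background subtraction
        patch = np.clip(patch, 0, None)
        yy, xx = np.mgrid[ys, xs]
        total = patch.sum()
        cy = (yy * patch).sum() / total    # intensity-weighted centroid
        cx = (xx * patch).sum() / total
        positions.append((cy, cx))
        y0, x0 = int(round(cy)), int(round(cx))
    return np.array(positions)
```

Plotting the x column against frame number should reveal the periodic RA component, and the y column the linear DEC drift (or vice versa, depending on camera orientation).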
  3. It is enough to plate solve a properly calibrated image. In order to establish an SQM reading you need to do the following: 1. count the number of photons per unit area of the sky (arc second squared); 2. compare that with a known quantity that represents 0 on the magnitude scale. When you have a plate solved image, you know how many pixels there are in one arc second squared - because you have, for example, 1.5"/px. That is a linear measure, not an area - to get the area you need to calculate the area of a pixel (which is one side x one side): 1.5"/px * 1.5"/px => 2.25 arcsec² per pixel. From that you can say - let's measure ADUs per pixel in the background, and say you have 29 ADU per pixel. Now you can get ADU per arc second squared - because you know that you have 2.25 arc seconds squared per pixel - so ADU per arc second squared will be 29 ADU / 2.25 = 12.888... You might think it is a problem that you don't have actual photons - but it really isn't, because magnitude is the log of a ratio between two quantities. It does not matter whether the quantities are expressed in pounds or kilograms - object A will be x4 as heavy as object B regardless of the unit of measure you use (but you must use the same units on both objects). So in order to get the magnitude you do the following: select a star in your plate solved image. Since the image is plate solved, you have the coordinates of that star, and you can look it up in a star catalog and get its magnitude. You can also get the star's flux by measuring it from the image (just make sure you don't use a clipped star) - this flux will again be in ADUs, which is handy since you can now compare those two quantities. You calculate the ratio between the star's ADUs and the background ADUs, convert that into magnitudes, and add that magnitude to the magnitude of the star to get the magnitude of the background sky.
It does not matter if you've used some sort of filter to do this, because your reference in the star catalog will be V magnitude (that is what we'll use) - and the ratio of intensities in one filter will be roughly the same as the ratio of intensities in a different filter (they won't be identical, but if the spectral characteristics of both sources are fairly similar, the difference will be small). In any case - to avoid bias - you can use the average of many stars in the image, thus side-stepping any spectral bias that might arise from using an overly cold / red or overly hot / blue star.
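The recipe above fits in a few lines of code. A sketch under stated assumptions (the function name and arguments are illustrative; it assumes you have already measured the background level and an unsaturated reference star's total flux from a calibrated, plate-solved image):

```python
import math

def sky_brightness_sqm(sky_adu_per_px, pixel_scale, star_flux_adu, star_mag):
    """Estimate sky brightness (mag / arcsec^2) from a plate-solved image.

    sky_adu_per_px : median background level per pixel (ADU, calibrated)
    pixel_scale    : plate scale in arcsec/px (known from plate solving)
    star_flux_adu  : total flux of an unsaturated reference star (ADU)
    star_mag       : that star's catalog (V) magnitude
    """
    # one pixel covers pixel_scale^2 square arc seconds
    sky_adu_per_arcsec2 = sky_adu_per_px / pixel_scale**2
    # magnitude difference from the flux ratio, added to the star's magnitude
    return star_mag + 2.5 * math.log10(star_flux_adu / sky_adu_per_arcsec2)
```

Using the numbers from the post (29 ADU/px background at 1.5"/px, so 12.888... ADU/arcsec²), a mag 10 star measured at about 128,889 ADU would put the sky at roughly 20 mag/arcsec². In practice you would average this over many reference stars, as noted above.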
  4. The image is plate solved, and thus the plate scale (arc seconds per pixel) is known. That is all you need to calculate magnitude per arc second squared (that, and a comparison value - like the flux of a star in the image compared to that star's catalog magnitude).
  5. The problem with newtonians is size, and the issues that come from speed. You'll be hard pressed to find an imaging newtonian under 500mm focal length, while that is quite an easy thing to do with refractors. As soon as you try to make something like a 4" newtonian for imaging, you get a crazy large central obstruction in percentage terms - and you still have to couple that with a 1.25" focuser instead of a 2" one, because of draw tube size and issues with protrusion into the light path, as well as field illumination and the size of the secondary mirror. 5" is the minimum usable, and no wonder the 130PDS is such a popular imaging scope. It is still over 600mm in FL (and does have issues with the focuser draw tube). Speed brings in a whole set of additional issues - like the need for multi-element coma correctors that often don't have enough back focus for all the things one would like to use (say a filter wheel and OAG) and sensitivity to tilt.
  6. I still would not image with such a scope and a yellow filter without an aperture mask. Residual chromatic aberration is still present with fast scopes even with a yellow filter - but there is also the issue of spherochromatism. Only one wavelength will be free of spherical aberration - others will have more or less of it, and with fast achromats it is usually more. Take a look at the spherical aberration assessment of the ST102 F/5 achromat: green is almost free of spherical, but the other two colors suffer from under / over correction. In the end it is a tradeoff - one can image even with a fast achromatic scope, if one accepts to: 1. use a filter to remove part of the spectrum, thus losing some of the light; 2. use an aperture mask to reduce aberrations further - again losing some of the light; 3. reduce the sampling rate so that the residual blur does not impact image sharpness as much.
  7. Hahaha, this is great. Without looking at the legend to see which filter is which, and just by looking at the graphs - I said to myself: "I'd prefer the green one ... "
  8. While we are on the subject - take for example the Samyang 135mm lens - can someone quote a sub-500 list of gear that is needed to capture an image with said lens? I think that, since we are discussing the subject, we might as well give some sort of recommendation as an option for people to follow.
  9. I think that this depends on how "out of the box" you want to think and how much you are willing to pay in other ways (your time / effort rather than money). 500 pounds for a working rig is a tall order if one is starting from scratch, but here is what I'd do. Get a second hand EQ3 / EQ5 type mount (the latter is obviously better) - manual of course, no motors. Get a second hand old DSLR for cheap. DIY a stepper motor to drive the RA axis of the mount at the wanted rate (some micro controller programming required, as well as soldering and fashioning some sort of bracket for the motor). Get an OTA like this: https://www.firstlightoptics.com/evostar/sky-watcher-evostar-90-660-az-pronto.html (maybe even get it new and then sell all the bits except the OTA itself). Get a Wratten #8 yellow filter and you are ready to go imaging ....
  10. I once had the experience of seeing an extremely bright light in the sky that sort of seemed to "follow" me. It was not an airplane, as it would have had to be moving directly towards me, and this bright light lingered in the sky long enough that a normal airplane would simply have flown over. It started as just a tad-brighter star out of place - but then I noticed it moved, and its brightness increased to insane levels (I was out observing with a telescope at my friend's house). At the time I could not explain it, but later concluded that it must have been an Iridium flare (at the time I was not aware of their existence).
  11. Not sure if it is a rabbit hole. I do encourage everyone to side-step any sort of dogma when working with science - both the "well established" kind and the kind they might not even be aware of. Maybe the best approach is: go where the evidence leads you, but also that famous - "When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth." I do agree with you that we should keep things to the spirit of the thread, so take what I've written above in that context.
  12. Can we discuss faith? Surely, most of science relies on faith. I for one have faith that I'll wake up tomorrow, the earth will still be orbiting the Sun, and the laws of nature will still be the same. Heck, even if I don't wake up tomorrow, I have faith that the same will happen. To be honest, at a slightly higher age, I'm wary of such bold claims. And this is not a religious view - it is a purely scientific one.
  13. Now we know it's possible - but we don't know how probable it is
  14. Much easier to find on distant worlds than signs of unintelligent life (although the latter is much more probable).
  15. As far as I know, we have been unable to explain the occurrence of life so far. There have been numerous experiments - and we have been able to demonstrate that under the right conditions amino acids form from inorganic compounds - but we have never seen them assemble into proteins, nor do we understand how the first DNA formed. Take a look here for example: https://en.wikipedia.org/wiki/RNA_world (one of the hypotheses for how it happened)
  16. That is way too simplified math. It assumes that every planet that can support life will actually have life. The odds of life starting from inorganic compounds are much, much harder to calculate. Maybe we have indeed won the lottery, given that we are here. I really would not be surprised if it turns out that earth is the only place with life in the universe (but I would not be surprised either if the opposite were true and life were abundant in the universe - we simply don't know enough to make an informed guess).
  17. My first instinct would be to do this: make a parametric model of the gravitational system (which is just a volumetric mass/energy density) that sits somewhere along the light path. Then take the image of the object that produced the gravitationally lensed image and run some sort of optimization algorithm - to try to find the set of parameters that produces a match between the simulation and what was recorded. It is really not much different from any data fitting - you collect your data, you "propose" a function / model that produced that data, and you perform a fit. The only difference is in the degrees of freedom of your model (number of parameters) and the algorithm used for fitting.
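As a toy illustration of the fitting idea (my own drastic simplification, not a real lens-modelling pipeline): for a point-mass lens the two image positions follow directly from the source offset beta and the Einstein radius theta_E, so even a brute-force parameter search can recover the model from observed image positions.

```python
import numpy as np

def point_lens_images(beta, theta_e):
    """Image positions (arcsec) produced by a point-mass lens for a
    source offset beta: theta = (beta +/- sqrt(beta^2 + 4*theta_E^2)) / 2."""
    d = np.sqrt(beta**2 + 4 * theta_e**2)
    return (beta + d) / 2, (beta - d) / 2

def fit_lens(obs_plus, obs_minus, betas, theta_es):
    """Brute-force fit: find the (beta, theta_E) pair that minimizes the
    squared mismatch between model and observed image positions."""
    best, best_err = None, np.inf
    for b in betas:
        for te in theta_es:
            p, m = point_lens_images(b, te)
            err = (p - obs_plus)**2 + (m - obs_minus)**2
            if err < best_err:
                best, best_err = (b, te), err
    return best
```

A real lens model has far more parameters (an extended mass distribution, external shear, ...) and would use a proper optimizer instead of a grid search - but the principle of "propose a model, simulate, compare, adjust" is exactly the same.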
  18. I have one and I think it is worth having if you have achromatic refractor. There are several threads here on SGL where people discuss ways to reduce chromatic aberration and Contrast booster is often mentioned as one of the best solutions (next to aperture mask).
  19. Well, this raises an interesting question. I'm used to thinking about the image in terms of pixels and not in terms of FOV constraints. If one forgets about the pixels, then the same rules apply - except this time FOV is the governing factor in any calculated "magnification" - it would be the size of the display screen and the size of the FOV in angles (say 1°20' x 50' or something like that). However, I would advise all who think of doing astrophotography to think more in terms of individual pixels and pixel scale rather than FOV. The same FOV can be covered by 1200x900px, 2400x1800px or even 4800x3600px. This will inevitably lead to over sampling in some cases and poor image quality due to lack of enough exposure at such fine pixel scales (over sampling) - something people should really attempt to avoid in astrophotography.
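The pixel scale behind this advice comes straight from pixel size and focal length. A small helper (the 206.265 constant converts radians to arc seconds when pixel size is given in µm and focal length in mm; the function name is my own):

```python
def pixel_scale(pixel_size_um, focal_length_mm):
    """Sampling rate in arcsec/px for a given pixel size and focal length.

    206.265 = 206265 arcsec/radian, scaled for um / mm inputs.
    """
    return 206.265 * pixel_size_um / focal_length_mm
```

For example, 3.75 µm pixels at 650 mm give about 1.19"/px; doubling the focal length (or halving the pixel size) halves the pixel scale, which is exactly how the same FOV ends up spread over 2400x1800 instead of 1200x900 pixels - with correspondingly less signal per pixel.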
  20. If the pixel size is the same, the planet's size will be the same in both images when viewed 1:1 or at 100% zoom. It won't be a dot in one image and "normal" in the second. That only happens when you resize an image for display purposes - when you attempt to fit the whole FOV on screen or similar. That is also "magnification" - but it does not depend on telescope+camera, so I don't think it is relevant to the original question. You can perform that with any image, not just astro images.
  21. Here you are talking about software "zoom" - resizing the image to fit the display device and adding another (maybe unnecessary) layer of manipulation. The size of the planet in both images, when viewed at 1:1 or 100% (when 1 pixel on the display corresponds to 1 pixel of the image - or "native" display mode) will depend solely on pixel size if both sensors are used with the same telescope. Field of view / sensor size plays no part here.
  22. I would recommend the 150/750 to a beginner - but not the NEQ3-2, and certainly not with that scope. If you can get at least an EQ5, if not an HEQ5, for that scope, that would be great. The other option is to "downgrade" the scope to something lightweight.
  23. Magnification is meaningful with eyepieces because eyepieces produce light beams at angles that are magnified for our eyes to see. With an image, it depends not only on the device you are viewing it on, but also on your distance. Look at any image and then step back a few meters and look again - it will be smaller. Since telescope + sensor can be thought of as a projection device rather than a magnification device, the appropriate quantity to examine is the sampling rate - or pixel scale: how much of the sky is covered by a single pixel, expressed in arc seconds per pixel. If you then want to calculate "magnification", you further need the DPI/PPI rating of your display device and the viewer distance. From the pixels per inch number you can calculate the pixel pitch, or pixel size, and use that to calculate the angle that this pixel subtends given the observer's distance to the screen - magnification is then the ratio of these two angles: the one the pixel covers in the sky, and the one at which the pixel is viewed given the pixel size and observer distance.
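The calculation just described can be sketched directly (assuming the small-angle approximation and illustrative parameter names):

```python
import math

def image_magnification(sampling_arcsec_px, display_ppi, viewing_distance_m):
    """'Magnification' of a displayed image: the ratio of the angle one
    pixel subtends on screen (from the viewer's position) to the angle
    that same pixel covered on the sky."""
    pixel_pitch_m = 0.0254 / display_ppi                    # pixel size on screen
    viewed_angle_rad = pixel_pitch_m / viewing_distance_m   # small-angle approx
    viewed_angle_arcsec = math.degrees(viewed_angle_rad) * 3600
    return viewed_angle_arcsec / sampling_arcsec_px
```

For instance, an image sampled at 1"/px viewed on a 96 PPI monitor from 0.6 m comes out at roughly x91 - and simply stepping back to 1.2 m halves it, which is exactly why "magnification" is ill-defined for images.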
  24. Well, to be honest, I never saw a spot diagram of an eyepiece alone - without primary optics - but if you look at it that way, it is possible to get that small a scatter. Most of the scatter of the spots comes from the primary optics rather than from the eyepiece, or from the "interplay" between eyepiece and primary optics. We could say that the above spot diagram is a marketing trick because of that - but I do think it is at least somewhat informative.