Everything posted by vlaiv

  1. Yes, ImageJ has very strange handling of color files and it's best not to treat them as color images - but rather as separate channel images. It has the concepts of:
     - stack
     - hyperstack
     - color stack
     - composite image
     and I have no idea how it automatically transforms between these. If you for example want to make an RGB image from channels, you need to make a stack of 3 images - R, G and B - and then convert that to 16-bit mode (I do all my processing in 32-bit floating point - that way you won't introduce any rounding errors that are significant). For the conversion it uses the display brightness / contrast controls - which are otherwise non-destructive and just for display purposes. Once you are in either 8-bit or 16-bit mode - then you can use the Stack to RGB command. In any case, the workflow that I would recommend would be:
     1. use software like FitsWork to convert DSLR raw files to fits. The resulting fits will be in 16-bit format
     2. use ImageJ to convert those 16-bit fits to 32-bit and then calibrate your images there (make stacks, subtract, divide - the regular calibration workflow)
     3. use a debayer plugin to debayer the images. The one that I've found online works for 16-bit images, but I have a few that work with 32-bit images - so if you need them just let me know and I'll attach them here (you can either do interpolation or split the bayer matrix - whichever you like)
     3.1 alternatively, if you don't want to deal with debayering - you can just bin your raw images 2x2 - that will add 4 bayer pixels together and you'll get a photometric result over the whole spectrum (see the sketch below)
     4. open the resulting images in AstroImageJ and measure things ....
     If you need any help with ImageJ / Fiji (how to calibrate, the debayer plugin or whatever) - let me know, I'll be happy to help.
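     For step 3.1, here is a minimal numpy sketch of what the 2x2 bin does (assuming an RGGB pattern; the function name is mine, not from any plugin):

        # Sum each 2x2 Bayer cell (R + G + G + B) into one "super pixel" -
        # one photometric value over the whole spectrum, at half the resolution.
        import numpy as np

        def bin2x2(raw):
            h, w = raw.shape
            h, w = h - h % 2, w - w % 2          # crop to even dimensions
            r = raw[:h, :w].astype(np.float64)   # work in float to avoid overflow
            return r[0::2, 0::2] + r[0::2, 1::2] + r[1::2, 0::2] + r[1::2, 1::2]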
  2. Just had this come in after a two month wait for stock to refill: It accepts eyepieces - so I'm counting it as a mini scope. I'm seriously surprised that a 32mm Plossl works well with this 32mm F/4 scope, and that the scope is quite sharp and does not display a rainbow at any mention of light falling on its objective. Yes, that is x4 magnification at 32mm with an 8mm exit pupil - a wide field "telescope". Here it is in imaging role: an image of the international sharpness standard next to the international color calibration standard. This is with the ASI178mcc - pixel size 4.8µm (super pixel debayer - for those who care about such things). Field curvature is rather excessive and there is some lateral chromatic aberration as well: Center / edge
  3. There is one possibility that you can test easily. I think it has to do with the filter sticking out of the focuser tube and blocking the light. Light coming in at an angle will be affected, while light coming straight down or from the opposite angle will not. This means that only stars to one side will be affected - like in your image. Here is a diagram: Not the best diagram, but you get the idea - if the focuser is protruding into the light path, it will create an optical artifact. This is a known thing with the 130PDS for example, have a look here: https://stargazerslounge.com/topic/210593-imaging-with-the-130pds/page/75/ People cut off the end of their focuser tube. In any case - this is something you can easily test - just put a piece of cardboard on the focuser tube to extend it a bit, instead of the filter - it does not need to be much thicker than the filter cell. If the effect is still there - it is not due to the filter but rather due to light blockage.
  4. Hi and welcome to SGL. I'm afraid that £100 will really get you only a very basic telescope. Do you know which star it is that you want to observe? I ask because the size of the telescope and the brightness of the star will determine whether you'll be able to see it at all with that telescope (this does not mean that you won't be able to see other stars and other interesting objects). In that price range, maybe this telescope would be best suited: https://www.firstlightoptics.com/beginner-telescopes/sky-watcher-mercury-707-az-telescope.html Unfortunately stocks are low at the moment due to very high demand, so you'll probably have to wait some time before it can be delivered.
  5. I would say that is quite sensible. Rest is down to probability.
  6. Let's first note that the universe has a center - and it's pretty much every point in the universe (for this reason one might argue that there is no center - as there can be only one center?). The universe also has an edge - and this edge is different for every center. There are actually many horizons - which can be thought of as edges of the universe. Back to the original notion of searching for life - that is something that we both can do, and that is sensible to do, only in our immediate vicinity. This means star systems that are in our very neighborhood in our own galaxy. The radius of the observable universe is 13.8 Bly in light travel distance (not sure if it helps here to think in comoving coordinates, as causality does not travel faster than light), while our galaxy is about 100,000 Ly in diameter. That is 5 orders of magnitude difference in size. In our own galaxy there are about a quarter of a trillion stars. So yes, we search in our immediate vicinity, simply because:
     - our sensors can only work at these distances
     - any signal sent by an alien race falls off in intensity with the square of distance
     - if for example we wanted to exchange a few ideas with them - it would take about 10000 years to do so for nearby planets (500-1000 ly distance and then 5-10 round trips of questions and answers).
     Btw - your diagram above is wrong in more aspects - as I already mentioned, there is no preferred center of the universe and it's best if you start with earth being the center, as consequently the "edge" will be well defined with respect to us. You are mixing the comoving distance to the edge of the observable universe with the "time of flight" distance to that edge - one being ~46 Bly radius while the other is ~13.8 Bly. Time flows at each point in the universe, so every point in the universe is "now" 13.8 By old (note that the word "now" is in quotes as there is no simultaneity in relativity) - the goldilocks zone, according to your assumption, is now everywhere in the universe.
  7. According to the rest of the page, the notation used is the one that I assumed: function to the -1 means inverse function. You can see that from the other calculations that follow, namely the expression for the time of sunrise, which is derived by finding the HRA for which elevation above the horizon is 0. If we take the original formula and do that:
     sin(0) = sin(delta) * sin(phi) + cos(delta) * cos(phi) * cos(HRA)
     0 = sin(delta) * sin(phi) + cos(delta) * cos(phi) * cos(HRA)   // since sin(0) is of course 0
     -sin(delta) * sin(phi) = cos(delta) * cos(phi) * cos(HRA)
     cos(HRA) = -sin(delta) * sin(phi) / ( cos(delta) * cos(phi) )
     HRA = arccos( -sin(delta) * sin(phi) / ( cos(delta) * cos(phi) ) )
     Now look at the Sunrise formula and you'll see that it actually reads:
     Sunrise = 12 - 1/15° * HRA - TC/60
     where HRA is the arc cosine of the expression -sin(delta) * sin(phi) / ( cos(delta) * cos(phi) ). (A small sketch of this calculation follows below.)
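     A minimal Python sketch of that sunrise formula - TC, the time correction in minutes, is taken as an input here rather than derived:

        import math

        def sunrise_hours(delta_deg, phi_deg, tc_minutes=0.0):
            # Local solar time of sunrise (hours) from declination delta and latitude phi.
            d, p = math.radians(delta_deg), math.radians(phi_deg)
            # elevation = 0 condition: cos(HRA) = -sin(d)sin(p)/(cos(d)cos(p)) = -tan(d)tan(p)
            hra_deg = math.degrees(math.acos(-math.tan(d) * math.tan(p)))
            return 12.0 - hra_deg / 15.0 - tc_minutes / 60.0   # 15° of HRA per hour

        print(sunrise_hours(0, 45))   # equinox at 45N -> 6.0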
  8. My bad, I interpreted sin to the power of -1 as the "inverse" of the sin function, not as the reciprocal value (as it would be if you actually raised it to the power of -1). Often in the literature the inverse function is denoted as fn to the power of -1. For this reason I interpreted the original formula to read sin to the -1, or arcsin. Are you sure that they meant 1/sin instead? It does stand to reason that sin of alpha is equal to a combination of sine and cosine functions on the other side, and sin to the -1 would indeed be arcsin in that case.
  9. I think that you are overcomplicating things here (maybe not). If you want HRA from the above formula, that is straightforward:
     sin(alpha) = sin(delta) * sin(phi) + cos(delta) * cos(phi) * cos(HRA)
     sin(alpha) - sin(delta) * sin(phi) = cos(delta) * cos(phi) * cos(HRA)
     cos(HRA) = ( sin(alpha) - sin(delta) * sin(phi) ) / ( cos(delta) * cos(phi) )
     HRA = arccos( ( sin(alpha) - sin(delta) * sin(phi) ) / ( cos(delta) * cos(phi) ) )
     But you only need to look up RA/DEC conversion to Alt/Az and use the altitude component. (See the sketch below.)
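     A direct transcription of that rearrangement as a Python sketch (all angles in degrees; function name is mine):

        import math

        def hour_angle_deg(alpha_deg, delta_deg, phi_deg):
            # Hour angle at which a target of declination delta reaches
            # altitude alpha, seen from latitude phi.
            a, d, p = (math.radians(x) for x in (alpha_deg, delta_deg, phi_deg))
            cos_hra = (math.sin(a) - math.sin(d) * math.sin(p)) / (math.cos(d) * math.cos(p))
            return math.degrees(math.acos(cos_hra))  # ValueError if that altitude is never reached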
  10. That is from the Mini II Dob page - I would say the Mini II is a successor to the other one and has better rubber feet and probably better azimuth motion.
  11. Reminded me of Ronja http://ronja.twibright.com/tetrapolis/spec.php It uses a lens for the transmission part. I guess a small mak would have much better performance than these Chinese magnifying glasses. Still an interesting project for anyone interested.
  12. Here is a good resource on DSLR model statistics: https://www.photonstophotos.net/ It says that the 450D has: 14-bit ADC, 30555 e- FWC, ISO 212.4 as unity gain (at ISO 212.4 the gain is 1 e/ADU, while at ISO 100 it will be 30555 / 16383 = ~1.865 e/ADU). Btw, you can measure these values yourself with a test protocol (a sketch of the standard method is below). Let me find that for you. For example this: http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/index.html
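     The usual protocol is a photon transfer measurement. A hedged sketch of the core calculation, assuming two flat frames taken at the same exposure and light level:

        import numpy as np

        def gain_e_per_adu(flat1, flat2, bias_level=0.0):
            # Estimate gain from two matched flat frames via mean / variance.
            f1 = flat1.astype(np.float64) - bias_level
            f2 = flat2.astype(np.float64) - bias_level
            mean_adu = (f1.mean() + f2.mean()) / 2.0
            # Differencing the two flats cancels fixed pattern noise; the
            # difference carries twice the per-frame shot noise variance.
            var_adu = np.var(f1 - f2) / 2.0
            # Shot noise variance in electrons equals the signal, so mean/var is e-/ADU.
            return mean_adu / var_adu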
  13. I'm considering one of these instead of binoculars as well. I only use one eye when observing (the other has severe astigmatism that can't be corrected with glasses), so getting binoculars is kind of a 50% waste. These look like a decent monocular replacement - with added zoom ability. Our local retailer added the Levenhuk Blaze Base lineup, which is very affordable - the 60mm model is only 70 euros. I do worry about the quality of this item. Then there is this: https://www.teleskop-express.de/shop/product_info.php/info/p7320_TS-Optics-Zoom-spotting-scope-15-45x60--45----Top-price.html which would set me back 89 euros total after VAT and import duty. Somehow I feel it would be a better quality item than the Levenhuk?
  14. TS has a couple of models that accept 1.25" eyepieces - so that is a nice bonus. For example this 65mm model: https://www.teleskop-express.de/shop/product_info.php/info/p8489_TS-Optics-Optics-Spotting-Scope-BW65Z--16-48x65-mm---1-25--interchangeable-eyepieces.html The 80mm is quite a bit more expensive though.
  15. I know, I know - these usually have a 45° diagonal that is suited for daytime use and awkward at night, and will probably cause spikes on stars or whatever, but I wonder - could these be good value for the money for astronomical use? There are a lot of people on a tight budget, I mean - like really tight - £100-£150, and they often end up getting beginner scopes in the 70mm class on wobbly tripods. How about this instead: https://www.firstlightoptics.com/acuter-spotting-scopes/acuter-natureclose-st65a-16-48x65-waterproof-angled-spotting-scope.html Or maybe this: https://www.firstlightoptics.com/all-spotting-scopes/acuter-natureclose-st80a-20-60x80-waterproof-angled-spotting-scope.html For £155 you essentially get an erecting 45° prism, a zoom eyepiece and an F/6 80mm achromat in a self-contained lightweight package?
  16. I think that there is extensive geometric distortion as well due to relativistic effects - namely length contraction - which happens only in the direction of travel. https://math.ucr.edu/home/baez/physics/Relativity/SR/Spaceship/spaceship.html
  17. I think your collimation is fine - it is tilt that you are seeing in your image.
  18. Given that this is one of those topics to which everyone has something to add or say - thanks for keeping it real. A very nice selection of beginner imaging scopes.
  19. Indeed, however, engineers designing the chip usually match the ADC bit count and full well capacity together with read noise. The minimum gain that you can set is usually derived as FWC / max_ADU_value. In a situation where you have 20000 e- FWC and a 12-bit ADC (with a max value of 4095), the resulting min gain would be around 20000 / 4095 = ~4.88. In fact that is the way you measure FWC, and for all intents and purposes that is the maximum FWC that you can expect. The actual FWC could be, say, 22000 - but you'll always see it as 20000 since that is the saturation value that you can get. Read noise is matched with this to mask quantization effects, since ADC numbers are whole numbers, and when you divide by a gain larger than 1 you get round-off error. Say you gathered 12 e- and your gain value is 7 e/ADU. The number that you end up getting is 12/7 = 1 ADU (you can't have decimal numbers here). When you try to get the number of electrons back, you need to multiply by the gain value, so it would be 1 ADU * 7 e/ADU = 7 e-. There is 5 e- of error due to rounding in this example. In order to minimize this, a masking technique is often used - random noise is added (this is standard practice in signal processing). Sensors have read noise, so you don't have to add it back in - you just need to match it against FWC and ADC (gain setting) so it masks quantization properly. A large value of minimum gain requires larger read noise. Here are some examples (a small simulation of the masking effect is below):
     - ASI174 - FWC 32K, min gain ~8 e/ADU, read noise ~5.8 e-
     - ASI1600 - FWC 20K, min gain ~5 e/ADU, read noise ~3.6 e-
     Both cameras have a 12-bit ADC (max 4095).
     - ASI071 - FWC 48.6K, min gain ~2.97 e/ADU, read noise ~3.25 e-
     This one has a 14-bit ADC, so the max value is 16383 (* 2.97 = 48657.51). Read noise here is larger than it needs to be, but I guess they hit the limit and could not make it smaller at this gain setting.
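     A toy simulation of that masking (dithering) effect - round numbers of my own, not any specific camera:

        import numpy as np

        rng = np.random.default_rng(0)
        true_e, gain = 12.0, 7.0   # electrons in the well, e-/ADU

        # Without noise, every readout quantizes to the same wrong value:
        bare_e = np.round(true_e / gain) * gain            # -> 14 e- (2 e- error every time)

        # With ~3 e- of read noise the round-off decorrelates and averages out:
        samples = rng.normal(true_e, 3.0, 100_000)
        dithered_e = np.mean(np.round(samples / gain) * gain)
        print(bare_e, dithered_e)                          # 14.0 vs ~12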
  20. You can cause a pixel to saturate if you use high enough gain, but that saturation has nothing to do with the electronics. It has to do with the fact that the ADC has a fixed output range. Say you have a 14-bit ADC in your camera (most DSLR cameras have either 12-bit or 14-bit). This means that you can have 0-16383 as the result of ADC conversion. The resulting number is referred to as the ADU number (short for analog-to-digital unit, I think). The value is obtained by measuring the number of electrons in the pixel's potential well and dividing by the e/ADU gain factor. Say you have 0.1 e/ADU gain selected (which is high gain) and you have 6000 e- in your pixel well. The resulting ADU will be 6000 / 0.1 = 60000, but you can't write that down in the 0-16383 range, so it gets clipped to 16383 ADU. You have a saturated pixel. The same scenario of 6000 e- with a gain setting of 1 e/ADU will result in 6000 ADU - without clipping, since it falls into the 0-16383 range (see the sketch below). Gain is just a multiplier - it does not contribute to the sensitivity of the camera as many people believe (although, because of the construction of CMOS sensors, it does have an impact on the read noise of the camera - higher gain means slightly lower read noise).
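     A minimal sketch of that conversion, assuming a 14-bit ADC (function name is mine):

        def to_adu(electrons, gain_e_per_adu, bits=14):
            # Convert well charge to an ADU count, clipping at the ADC maximum.
            max_adu = 2**bits - 1                  # 16383 for 14-bit
            adu = int(electrons / gain_e_per_adu)
            return min(adu, max_adu)

        print(to_adu(6000, 0.1))   # 60000 -> clipped to 16383 (saturated)
        print(to_adu(6000, 1.0))   # 6000, no clipping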
  21. It looks like that is indeed the case - best course of action would be to return the scope to get it collimated.
  22. I don't think this is chromatic aberration in the image. I think it is some sort of dispersion - either atmospheric dispersion or, more likely, something to do with the FF/FR and tilt. From what I'm seeing, blue is shifted downwards and red is shifted upwards in these star images (blue is not a halo around the whole star). Also, note the bottom corner stars: the other corners are fine, except for the opposite corner, which is the worst offender in this dispersion effect (at least it seems so to my eye). I don't think this is due to the scope, but it could be due to collimation of the scope. Maybe the best thing to do would be to take a series of test images:
     - a star cluster high in the sky, so you avoid any atmospheric dispersion
     - with the FF/FR removed
     - and examine the center of the field - to see what the stars look like
     - aiming near zenith will also minimize seeing
     Atmospheric seeing impacts blue wavelengths more than red (as they are shorter), and often blue is slightly more blurred than the red and green channels because of this. If you still have this RGB separation without the FF/FR - then it might be an issue with collimation or something. If you don't have it without the FF/FR - then look into the FF/FR connection. Do you have a threaded connection or do you use 2" clamping? I had an issue with the 2" clamping mechanism and this scope FF/FR combination. It would shift - and one day I would have good images and the next day astigmatic stars in the corners.
  23. We started here with a series of measurements, and the question was: should we stack the images representing the measurements (which is the same as averaging / summing them) and then read off the value, or should we read off an individual value from each measurement and then average those? From the perspective of the actual resulting value there is no difference - both methods will give the same result. Reading off individual values and then calculating the average, but also running other statistical tests, will tell us something about that data set - even if we don't know the exact model (we can perform different tests to see if the data fits one distribution better than others). Having a single number will tell us no such thing.
  24. The mean of 10 x 1s has the same SNR as a single 10s exposure (with the exception of the read noise contribution). You can figure out how big your measurement error is from 10 x 1s, but you can't do that with a single 10s measurement.
  25. The average value is the same, but the statistics of it are not. Let's say you have one readout based on a stack. A stack is, by the way, just an average value (or at least it should be, if one wants a controlled environment - no alignment / no pixel correlation and such - shift and add at best). This value is some number, 120 for example. How good is your measurement? Can you tell from that single number? What is the probability that the true value is the measured value +/- 5, for example - or in the range 115-125 - can you tell? Can you trace the source of a systematic error if you just stack and have a single value? In this particular case, each measurement introduces some noise into the result, and there is a difference between 20000 x 1ms, 20 x 1s and 1 x 20s in achieved SNR, but depending on the level of this noise, you might opt to actually do 200 x 0.1s measurements and then decide how much you want to stack and how much you want to save for statistics. You can either have 20 x 1s samples or 40 x 0.5s samples. (A small illustration is below.)
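     An illustration of the point, with made-up numbers: repeated sub-measurements let you estimate the uncertainty of the mean, while a single pre-stacked value does not:

        import numpy as np

        rng = np.random.default_rng(1)
        true_value, noise = 120.0, 5.0

        subs = rng.normal(true_value, noise, 20)     # 20 x 1s readings
        stacked = subs.mean()                        # same value a stack would give

        # Only the individual readings support an error estimate:
        sem = subs.std(ddof=1) / np.sqrt(len(subs))  # standard error of the mean
        print(f"{stacked:.1f} +/- {sem:.1f}")        # e.g. 120.3 +/- 1.1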