Everything posted by vlaiv

  1. Indeed, for the RASA 8 they are of the proper size (and change with wavelength), but for the SharpStar they are way too large to be Airy disks. Sometimes people put circles in these spot diagrams as a marketing trick - see that circle? See the complete pattern inside that circle? Must be a good (diffraction limited) scope. For that reason it is best to check with a bit of math whether the circle drawn actually corresponds to the Airy disk. You can use this formula for that: http://www.wilmslowastro.com/software/formulae.htm#Airy

     Also be careful of the scale of the squares used - for example, the SharpStar F/2.8 may look like it has tighter spot diagrams than the SharpStar F/3.2, but check the actual numbers: the diagram for the F/2.8 is 100µm across while that for the F/3.2 is 40µm. The good thing is that there are actual numbers for each spot diagram (a very useful feature).

     This is for the SharpStar F/3.2. Note that the Geo radius - which just means the furthest extent - seems rather big, but for actual sharpness it is the RMS radius that counts more. Although we would say that 9.852µm is the radius we use to determine the corresponding equivalent diffraction limited aperture, actual performance will be closer to what the RMS radius gives.

     I just realized that in my analysis above I used 12µm in the wrong context - it is a radius, not a diameter. The SharpStar F/2.8 therefore has lower resolution than the RASA 8", not a comparable one.

     Let's try to figure out the equivalent aperture of the SharpStar F/3.2 using the RMS radius instead, paying a bit more attention to what it all means. The RMS radius is equivalent to the sigma of a Gaussian distribution, and we can use the Gaussian approximation of the Airy disk to compare it with. First let's convert it to arc seconds, as that relates to actual resolution rather than to the pixel size used. At 640mm, 4.38µm is equivalent to 1.41". That is the sigma of the Gaussian distribution. According to Wiki, sigma relates to the Airy disk like this: From that we have that the Airy disk radius is 4.23", which is equivalent to 30.3mm of aperture. In the outer field the SharpStar 200 F/3.2 is as sharp as 30.3mm of aperture. A quick sketch of this calculation follows below.
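     A minimal sketch of that calculation, assuming the Gaussian approximation of the Airy disk (sigma ≈ 0.42·λ·N, so Airy radius ≈ (1.22/0.42)·sigma) and λ ≈ 500nm - these constants are my assumption, chosen because they roughly reproduce the numbers above:

```python
import math

ARCSEC_PER_RAD = 206265.0

def um_to_arcsec(size_um, focal_length_mm):
    """Convert a linear size at the focal plane to an angle on the sky."""
    return size_um * 1e-6 / (focal_length_mm * 1e-3) * ARCSEC_PER_RAD

def equivalent_aperture_mm(airy_radius_arcsec, wavelength_nm=500):
    """Aperture whose Airy disk radius (1.22*lambda/D) equals the given angle."""
    radius_rad = airy_radius_arcsec / ARCSEC_PER_RAD
    return 1.22 * wavelength_nm * 1e-9 / radius_rad * 1e3   # result in mm

# SharpStar F/3.2 example from the post: RMS radius 4.38um at 640mm focal length
sigma_arcsec = um_to_arcsec(4.38, 640)        # ~1.41"
airy_radius = sigma_arcsec * 1.22 / 0.42      # Gaussian approximation of the Airy disk
print(sigma_arcsec, airy_radius, equivalent_aperture_mm(airy_radius))
# ~1.41", ~4.1", ~30mm equivalent aperture
```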
  2. This is the "intermediate" stage of the debayer process, and the pattern you are seeing is due to the nearest neighbor or bilinear interpolation used to display the scaled down version of the image. It is used specifically for the bayer drizzle algorithm - when stacking, only pixels that are actually present (not black) will end up in the stack. The algorithm expects that different subs will be shifted with respect to this and that every output pixel will have at least one "filled" pixel in the stack in its place (or more, to be averaged). What you really want is to get color images without these black missing pixels. A toy sketch of that stacking idea is below.
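     A toy numpy sketch of that idea (my illustration, not any particular stacker's code): each aligned sub contributes only its real pixels, and every output pixel is the average of however many real pixels landed there.

```python
import numpy as np

def bayer_drizzle_stack(aligned_subs, masks):
    """aligned_subs: list of 2D arrays (one color channel, holes left at 0).
    masks: matching list of boolean arrays, True where a real pixel exists."""
    total = np.zeros_like(aligned_subs[0], dtype=np.float64)
    count = np.zeros_like(aligned_subs[0], dtype=np.float64)
    for sub, mask in zip(aligned_subs, masks):
        total[mask] += sub[mask]   # only pixels actually captured contribute
        count[mask] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        stacked = total / count    # average of the "filled" pixels at each location
    return stacked, count          # count == 0 marks output pixels that were never filled
```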
  3. I was just thinking about this and I'm not sure that spot diagrams matter that much for imaging, as long as you are aware of what they mean and what you can expect.

     First a bit of background - what are spot diagrams? They are produced by geometrical ray tracing through the optical system - as designed (not as manufactured). An ideal telescope will not have a spot diagram - it will have a single spot, and that spot will sit in the center of a circle representing the Airy disk. Here is an example for a parabolic newtonian, which is a perfect design on the optical axis (and as soon as you start going away from it - there is coma): Look at the bottom row - this simply means that any ray you cast will end up in a single point. That is perfect optical performance, but it does not mean that such a telescope will have infinite resolution. There are still laws of physics and interference, and ultimately all those rays produce an Airy pattern - that is denoted by the little circle (when one is drawn). The often quoted term "diffraction limited optics" simply means that all rays cast will end up inside the circle that marks the Airy disk. This does not mean that such a scope has the same performance as one with perfect optics - again, the laws of physics play a part, and once all the interference happens there is additional blurring over the perfect design.

     What we can do when examining spot diagrams is a few things:
     - understand the spread of spots in relation to the Airy disk diameter. If there are spots outside this diameter we call the optics not diffraction limited; if all are inside, we say the optics is diffraction limited. Diffraction limited just means the telescope will give acceptable high power views of the planets (but by no means the best possible - and this does not even touch on diffraction effects, like issues with central obstruction).
     - see if the diagram is symmetric or not. Asymmetric diagrams cause more issues than symmetric ones - symmetric diagrams (especially with rotational symmetry) tend to produce round stars, or more pleasing stars once seeing is added (think of coma as totally asymmetric, astigmatism as more symmetric, and plain star bloat as completely symmetric).
     - see how star shapes change over the field of view / photographic field - important if we want to use a sensor of a certain size.
     - see how star shapes change over a range of wavelengths - important for narrowband imaging and any chromatic effects.
     - assess the equivalent aperture. Even if the telescope is not diffraction limited as such, there is an equivalent smaller aperture for which such a spot diagram would be deemed diffraction limited.

     Let's examine the spot diagram of the RASA 8" posted above. It is common to draw a little circle to denote the diameter of the Airy disk of the telescope. First let's see if that is the case here. The big box is 18µm, and each small square is 1.8µm on its side. The circle at 700nm looks to be exactly two squares in diameter. At this wavelength an 8" scope will have an Airy disk size of 1.76", or 3.4µm at this focal length. That is close enough to 2 x 1.8µm = 3.6µm. This telescope is clearly not diffraction limited - even on the optical axis - except at 500nm, where it is on the edge of being diffraction limited. It does however have a rather symmetric spot diagram across the board - which is good.

     If we take the largest spot diagram of interest, at 700nm, and "measure" its spread, we get about 6 squares for the diameter, or 6 x 1.8µm = 10.8µm. That is equivalent to a 50mm F/8 telescope - or rather, a 50mm F/8 telescope that is diffraction limited will produce the same sharpness as the RASA 8". Is the 150 F/2.8 SharpStar better or worse than the RASA 8"? According to its spot diagram, in the worst case the geometrical radius is largest in field 3 (that is 14mm away from the optical center) and it equals 12µm - only a tiny fraction larger than the 10.8µm of the RASA 8". I would say that these two scopes perform similarly with respect to sharpness. Btw, what level of sharpness is that? Keep your sampling rate above 2"/px with these scopes, as it is very unlikely that you'll get FWHM below 3.5-3.6" with them. A quick sketch of these Airy disk numbers follows below.
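     A minimal sketch of those two checks (Airy disk size, and the "equivalent diffraction limited aperture" reading of a spot diameter). The 200mm aperture and 400mm focal length used for the RASA 8 are my assumption for the example:

```python
ARCSEC_PER_RAD = 206265.0

def airy_disk_diameter(aperture_mm, focal_length_mm, wavelength_nm):
    """Angular (arcsec) and linear (um) diameter of the Airy disk, 2.44*lambda/D."""
    theta = 2.44 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)     # radians
    return theta * ARCSEC_PER_RAD, theta * focal_length_mm * 1e3   # arcsec, um

def equivalent_f_ratio(spot_diameter_um, wavelength_nm=550):
    """F-ratio whose Airy disk diameter (2.44*lambda*N) equals the measured spot spread."""
    return spot_diameter_um / (2.44 * wavelength_nm * 1e-3)

print(airy_disk_diameter(200, 400, 700))   # ~(1.76", 3.4um) for the RASA 8 at 700nm
print(equivalent_f_ratio(10.8))            # ~F/8 -> 50mm aperture at 400mm focal length
```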
  4. This is brilliant - it never occurred to me that one could run a marathon along the equator of a neutron star.
  5. This one is quite interesting: https://joshworth.com/dev/pixelspace/pixelspace_solarsystem.html This is also nice: https://www.youtube.com/watch?v=GoW8Tf7hTGA It turns out that YouTube is pumping them out a dime a dozen: https://www.youtube.com/watch?v=02Kgf9dCgME Then there is this image (warning - large file, 46MB): https://upload.wikimedia.org/wikipedia/commons/8/83/Location_of_Earth_(9x1-English_Annot).jpg
  6. Barlows tend to push the focal plane further out, so you'll just need some extension on the camera side. Alternatively, if you leave the imaging camera as is, you might be unable to reach focus with the guide camera - it will need additional "inward" travel.
  7. Indeed - we are talking about a special sort of book describing a special sort of event. Nothing wrong with accepting that it was a special sort of star being described.
  8. Fast achromats usually suffer from spherical aberration in the interesting parts of the spectrum and hence are not best suited for narrowband imaging, as the results they produce are not quite as sharp. It also depends on the actual sample, as there is a lot of sample to sample variation. Sometimes you can optimize the telescope for the Ha wavelength (or rather a skilled optician can do that by varying the distance between lens elements), but that would throw off correction in OIII (the other part of the spectrum). Have a look here for example: http://interferometrie.blogspot.com/2017/06/3-short-achromats-bresser-ar102xs.html Interesting excerpts from that article: However, you can use a longer focal length achromat with a good reputation (TAL100R for example) if you are willing to give away some of the FOV and adjust your sampling rate to suit your needs (a "slow" scope is not necessarily slower than a fast scope if you match aperture and sampling rate - the only thing you lose is FOV).
  9. I've used the hub on my ASI1600 a couple of times (for the guide camera on an OAG), and while I was only using one device connected to it at the time, it worked without issues provided that you supply enough power to the camera (I use 3A at 12V) and that your USB cables are good quality ones. The USB hub won't work if you don't provide external power to the camera.
  10. Of course you can, and you should. Unfortunately these are not very reliable, as even a small temperature difference will cause a significant signal change. Sensors have a dark current doubling temperature of about 6 degrees. This means that your dark current doubles for every 6°C increase in temperature. It is a power law, so we can see how much it rises for, say, a 2°C change - that is not a major change and will happen between the start and end of an imaging session (for those that shoot darks at the end to try to match temperature), and it will certainly happen between two days. The change is actually 26% in dark current. You'll be leaving about 1/4 of the dark current in if you shoot your darks at a 2°C lower ambient temperature. Why is this important? Because it creates problems with flat calibration later on.

      I recommend a few things to try to fix this: Shoot half of your darks before the imaging run and half of them after. If the temperature change is fairly linear (and it usually is, unless a cold front swoops in and the temperature suddenly drops) you have a fair chance that things will average out. If your sensor has stable and usable bias (and most DSLRs have it, as far as I understand), use dark optimization. It is an algorithm designed to scale your master dark to compensate for any temperature changes. A quick sanity check of that 26% figure is below.
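      A quick check of that number (my sketch; the 6°C doubling temperature is the figure quoted above):

```python
def dark_current_ratio(delta_t_c, doubling_temp_c=6.0):
    """Factor by which dark current changes for a temperature change of delta_t_c."""
    return 2.0 ** (delta_t_c / doubling_temp_c)

print(dark_current_ratio(2.0))           # ~1.26 -> about 26% more dark current for +2C
print(1 - 1 / dark_current_ratio(2.0))   # ~0.21 -> roughly the quarter quoted above left uncorrected
```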
  11. In most cases the target is far fainter than the background sky. For example, the outer parts of brighter galaxies are around 26-27 mag per arc second squared. If you are lucky, your sky is mag 21.8 to mag 22, and in most cases these days it is below mag 21. Yes, that is around 5 magnitudes of difference in some cases - or target brightness being only 1% of sky brightness (5 mag = x100)! I image from mag 18.5 skies - very high light pollution, on the border of the red and white zones. I plan to move to mag 20.8 skies and that will be a major improvement for my imaging.

      Even if target brightness is that low, we can still pull enough SNR with long exposure, simply because shot noise grows as the square root of the signal. Both target and LP signal grow linearly with time - it's just that the noise grows as the square root of that, so image for long enough and the target signal will overpower the LP noise even in the worst conditions. There was, at one time, a belief held by many that you can't record faint signal that is below the noise level - or that you can't image in heavy LP. You can do both, provided that you spend enough time on the target.

      If you want to roughly assess the magnitude of the target, use Stellarium. It gives average brightness per arc minute squared as surface brightness. That is only an average, but a good indication. Oh, look at that - it has been changed - it now shows per arc second squared (it used to be per arc minute squared, but the conversion is straightforward: one just subtracts a constant - 8.89 or something like that - it can be calculated - yep, it is that one, and the value is obtained as 2.5 * log(3600), the ratio of an arc second squared to an arc minute squared). That is, like I said, only an average value - 21.45 - as the core is much brighter than the faint outer arms. I once did a rough magnitude map of M51 - let me see if I can find that. Well, you get the sense - cores are at mag 17 and the fainter parts around mag 25-26. The magnitude arithmetic used here is sketched below.
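      The two bits of magnitude arithmetic used above, as a minimal sketch:

```python
import math

def flux_ratio(delta_mag):
    """Brightness ratio corresponding to a magnitude difference."""
    return 10 ** (0.4 * delta_mag)

def per_arcmin2_to_per_arcsec2(mag_per_arcmin2):
    """Surface brightness conversion: 1 arcmin^2 = 3600 arcsec^2."""
    return mag_per_arcmin2 + 2.5 * math.log10(3600)

print(flux_ratio(5))            # 100 -> 5 mag fainter means 1% of the sky signal
print(2.5 * math.log10(3600))   # ~8.89 -> the constant mentioned above
```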
  12. You are asking about a slightly different process - once you complete debayering and you have color information, how to make that color information look right to the human eye - color balancing.

      The reason astronomical cameras don't have this feature integrated is simple - they are not consumer grade products. Most astronomical cameras are either scientific sensors (where you don't want to mess with the data) or for industrial / surveillance applications, where the application will dictate the color balance. On the other hand, DSLR cameras produce images that people look at - they have that and a bunch of other related features (vivid color, white balance, different mood settings and so on).

      The answer to your question is unfortunately not straightforward - or rather, you can do either: adjust each time, or find the best settings and leave it at that, depending on what effect you want to achieve. On one hand, you should find the best settings and leave them as they are. This is because color balance is used to offset different lighting conditions (sunshine, artificial lights, cloudy day, etc...). In outer space, for the planets of our solar system, there are no different lighting conditions - they are always lit up by exactly the same Sun in the same way. For this reason, you can find the best color balance settings and leave them there. On the other hand, we are not viewing the planets from outer space but from under our atmosphere. Our atmosphere distorts color, as can easily be seen in these comparison images: sun color when high in the sky; sun color at sunset. The Sun is actually white-yellowish, but when that light passes through our atmosphere the blue wavelengths get scattered away (thus making the sky blue) and the Sun turns more yellow. The more atmosphere the light travels through, the less blue it has and the stronger the yellow/red cast. The same thing happens with planets - if you want the same color of the planet high in the sky and lower down towards the horizon, then you need to slightly adjust your color balance between the two. Hope this helps.
  13. Yes, to expand on the answer already given - the image prior to debayering is like a monochromatic image. It is in fact a monochromatic image with different pixels having different intensity because they recorded different wavelengths of light. Enlarged, it looks like this: Debayering is the process of turning this information into three separate color channels - three monochromatic images that will not have the checkerboard pattern and will contain correct color. There are three principal ways of doing this:
      - interpolation
      - separation
      - superpixel mode

      In interpolation mode, software "makes up" the missing pixels by interpolating surrounding pixels of the same color. The simplest way of interpolating is linear interpolation - in 1D that would mean: take the pixel to the left of the missing one and the pixel to the right of it, and average their values to replace the missing value. This mode makes up values and for this reason is not my preferred way of doing things for astronomical applications - but it is widespread in regular use, probably because vendors are selling sensors that have 24MP or 6000x4000 and people expect to get 6000px x 4000px images from such a sensor. Also, critical sharpness/accuracy is not needed in daytime photography.

      Separation mode works by simply extracting 4 sub-fields from the image, each containing only the pixels at the same position in the bayer matrix. Only red pixels are extracted to one image, only blue to another, only "upper left" green (sometimes denoted as GR) to one green image and "lower right" green (denoted as GB) to another green image - so you end up with 1 red, 2 green and 1 blue mono image. Each of these images will have half the resolution of the original bayer matrix image (simply because that is the count of corresponding pixels). Maybe best explained by an image / diagram: Here it explains things with interpolation - by making up the blanks - but instead of that, imagine you "squish" the R and B samples - simply delete the missing spaces. This can be done because samples are not "squares" but just points, and the sampling rate is the distance between points - in that sense we take just the points that we have and say we have a longer distance between them: a lower sampling rate / lower resolution.

      In the end, superpixel mode is a sort of combination of these two - it takes one group of 2x2 pixels and makes a single pixel out of them: it takes R from R, B from B and makes G out of (G1+G2)/2, the average of the two G pixels. I personally prefer and advocate the separation method for astronomy. A minimal sketch of the separation and superpixel approaches is shown below.
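      A minimal numpy sketch of separation and superpixel modes, assuming an RGGB bayer layout and even image dimensions (other layouts just change the slice offsets):

```python
import numpy as np

def separate_rggb(bayer):
    """Split a raw RGGB bayer frame into R, G1, G2, B mono images at half resolution."""
    r  = bayer[0::2, 0::2]   # red photosites
    g1 = bayer[0::2, 1::2]   # green photosites on the red rows
    g2 = bayer[1::2, 0::2]   # green photosites on the blue rows
    b  = bayer[1::2, 1::2]   # blue photosites
    return r, g1, g2, b

def superpixel_rggb(bayer):
    """Combine each 2x2 cell into one RGB pixel: R, (G1+G2)/2, B."""
    r, g1, g2, b = separate_rggb(bayer)
    g = (g1.astype(np.float64) + g2.astype(np.float64)) / 2.0
    return np.dstack([r, g, b])   # half-resolution color image
```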
  14. There are different types of noise that add up to form the "background" noise. There is shot noise, which is associated with the target itself and depends on the strength of the signal (square root of it); then there is LP noise, which is just the same thing but related to sky glow - a signal that we don't want and remove in processing; then there is dark noise, which is the same thing associated with the build-up of dark current signal (which we remove in calibration); and finally there is read noise. Out of these four main types of noise, only read noise is "per exposure" - all the others depend on time: more time, more signal gathered, and the higher each associated noise component (as all three are equal to the square root of their respective signals).

      If read noise were 0, there would be absolutely no difference between 1x600s and 600x1s (or any other combination of exposure lengths that add up to the same total time). However, since read noise is not zero, there is a difference, and the difference depends on how small the read noise is in comparison to the other noise values. A common rule of thumb is to have the read noise about 5 times smaller than the highest of the other components - which is usually LP noise, since we shoot with cooled cameras and image very faint targets. This ensures that the SNR difference between the stack and one single long exposure is a few percent.

      Noise adds like linearly independent vectors (square root of the sum of squares) while signal adds like regular addition. Imagine you have R being read noise and LP being light pollution noise and 5R = LP, i.e. read noise is 5 times smaller than light pollution noise. Let's add them up. We will have sqrt( (5R)^2 + R^2 ) = sqrt(25R^2 + R^2) = sqrt(26R^2) = R*sqrt(26) = ~5.099 * R. So the resulting noise is 5.1R, or 1.02LP - about 2% higher than LP alone. Now let's try the same thing if R = LP, i.e. when read noise is equal to light pollution noise: sqrt(R^2 + R^2) = sqrt(2R^2) = R*sqrt(2) = R*1.4142... Now we get a result as if the LP noise were increased by 41% - a much higher increase. This shows that if you select an exposure such that the read noise is 5 times less than the light pollution noise, it is like imaging a single exposure in 2% more noisy light pollution (or light pollution only 4% stronger). Imaging when you have the same level of read noise and LP noise is like imaging a single exposure in twice as strong light pollution! The quadrature sum is sketched in code below.

      In order to compare the actual signal, you need to convert ADU to electrons. CCDs usually have a fixed e/ADU conversion factor and the ATIK 383L has it at 0.41e/ADU (screen shot from the ATIK website). This means that you need to multiply the pixel values from the ATIK image by 0.41e/ADU to get the number of captured electrons. With a DSLR it is not so easy, as they have changing gain - each ISO setting has a different e/ADU value and you need to figure out the e/ADU value for the gain setting you used in order to get the actual electron count so you can compare the two. Only then will you be able to tell which one recorded the stronger signal.
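      The quadrature addition from above, as a tiny sketch:

```python
import math

def total_noise(read_noise, lp_noise):
    """Independent noise sources add in quadrature."""
    return math.hypot(read_noise, lp_noise)

R = 1.0
print(total_noise(R, 5 * R) / (5 * R))   # ~1.02 -> only ~2% worse than LP noise alone
print(total_noise(R, R) / R)             # ~1.41 -> 41% worse when read noise equals LP noise
```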
  15. I don't know how DSS works - but for calibration you don't need to do color separation yet, and indeed it is better not to do it until you finish calibration.
  16. Suppose we are imaging with a very fast system - an 8" F/4 newtonian - in rather poor light pollution (mag 20 skies). The ATIK 383L will have something like 8e of read noise. In the above conditions, for luminance (and remember that these are bayer filters, so each will receive less signal by about 1/3), in a 5 minute exposure the sky background signal will be about 571e and the associated noise will be around 23.9e - only x3 more than the read noise. In order to get to x5 the read noise you need to push the sky background to around 1600e, or x3 more - around 15 minutes. This is in Bortle 5 skies / mag 20 with an F/4 system and a mono camera. In darker skies and with slower systems, LP will simply be less, read noise will dominate things more, and longer exposures will be beneficial (one of the reasons people use longer exposures in narrowband, as LP is dramatically cut down). The arithmetic is sketched below.
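      A minimal sketch of that exposure-length reasoning (the 571e of sky signal in 5 minutes and the 8e read noise are the figures quoted above):

```python
import math

def minutes_for_sky_limited(read_noise_e, sky_e_per_min, ratio=5.0):
    """Exposure length where sky noise = ratio * read noise, i.e. sky signal = (ratio*RN)^2."""
    target_sky_e = (ratio * read_noise_e) ** 2
    return target_sky_e / sky_e_per_min

sky_rate = 571.0 / 5.0                         # e/min of sky background from the example
print(math.sqrt(571.0))                        # ~23.9e of sky noise in a 5 minute sub
print(minutes_for_sky_limited(8.0, sky_rate))  # ~14 min to reach 5x the 8e read noise (roughly the 15 min quoted)
```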
  17. Is that so? I was under the impression that they use actual stars - like mag 10 or something that is within reach of a smallish aperture. You know - your star, also known as TYC 2861-1890-1 or 2MASS J03264757+3817176 or Gaia DR2 234879052611521920. A very nice mag 11.8 star, you should be proud.
  18. If planetary is your main objective, I would suggest that you calculate how much barlowing you need in the first place, and then decide based on that. Your camera has 2.9µm pixels, and the optimum focal length for a 130mm aperture and a mono camera is around 1479mm. Since your camera is a color one, double that to about 2957mm. If your focal length is 920mm then you really want a x3.2 barlow. Go with a x3 barlow / telecentric lens. After processing, just reduce the image back x2 in size to return to the optimum sampling rate if you want your images to be sharp. An alternative is to use a x1.5 barlow and then bayer drizzle - but I'm not sure if that is still available in AS!3 (the option to use it was gone last time I checked and I don't know if it is now the default mode for debayering or something). The rule of thumb behind those numbers is sketched below.
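      A sketch of the rule of thumb behind those numbers, assuming critical sampling at F ≈ 2 × pixel size / wavelength with λ ≈ 0.51µm (the wavelength is my assumption; it reproduces the 1479mm figure):

```python
def optimum_focal_length_mm(aperture_mm, pixel_um, wavelength_um=0.51, color=False):
    """Critical sampling: F-ratio ~ 2 * pixel / wavelength; double it for a bayer sensor."""
    f_ratio = 2.0 * pixel_um / wavelength_um
    focal = aperture_mm * f_ratio
    return focal * 2.0 if color else focal

mono  = optimum_focal_length_mm(130, 2.9)              # ~1478mm for a mono camera
color = optimum_focal_length_mm(130, 2.9, color=True)  # ~2957mm for a color camera
print(mono, color, color / 920)                        # barlow factor ~x3.2 from 920mm
```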
  19. Up until recently I was convinced that calibration in DSS was done in 16 bit mode. Now the software is open source and can be checked, but if you look at some tutorials, they often recommend using median as the method for stacking darks and flats. Why? Because the rationale is that the median is the same as the mean for a Gaussian-type distribution - except you don't need to do any rounding.
  20. The Powerseeker 70mm has 700mm of focal length. You calculate the magnification an eyepiece gives you with your telescope by dividing the focal length of the telescope by the focal length of the eyepiece. A barlow, when inserted before the eyepiece, will multiply the focal length of the telescope (x2 for a x2 barlow), or in other words will double the magnification you calculate from the focal lengths.

      The GSO Super View on its own will give you 700 / 20 = x35 power. The Celestron zoom eyepiece will give you between 700 / 24 and 700 / 8, i.e. x29.2 to x87.5 power. Your telescope supports up to about x140 power (more realistically around x100). That is enough power to see details on the planets - Jupiter and Saturn - but Mars is too small a target to see anything but a tiny red dot.

      Ideally, you want to put in the barlow and then your zoom eyepiece (like in this image): just use your 8-24mm zoom instead of the Omni 9mm plossl shown in the image above. Turn your zoom eyepiece to about the 16mm mark (it should be marked and even have click stops, and I believe one mark will be 16mm). If the eyepiece is not marked for focal lengths, then turning it to approx 16mm is easy - it is about half way between max left and max right. In any case, a zoom eyepiece is handy for precisely that reason - it zooms, so you can put it at lowest power to get the planet into the field of view, then zoom in a bit to see if you can spot additional details, and continue zooming in (adjusting focus if needed) until you get the largest image that is not too blurry - dial in the magnification for the best view. The simple arithmetic is sketched below.
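      The magnification arithmetic, as a tiny sketch:

```python
def magnification(scope_fl_mm, eyepiece_fl_mm, barlow=1.0):
    """Magnification = (telescope focal length * barlow factor) / eyepiece focal length."""
    return scope_fl_mm * barlow / eyepiece_fl_mm

print(magnification(700, 20))                         # x35  - GSO Super View 20mm
print(magnification(700, 24), magnification(700, 8))  # x29.2 to x87.5 - 8-24mm zoom
print(magnification(700, 16, barlow=2.0))             # x87.5 - zoom at ~16mm with a x2 barlow
```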
  21. Not necessarily. I use an ASI1600 and it has peak QE around 58%, so not a lot higher than that, but it produces excellent images. Compare that to the 40% peak QE of the KAF8300 and that is an increase of 45%, so yes, on peak QE alone about 6 hours with the KAF8300 will be needed to match 4h with the ASI1600 - but that is not as drastic a difference as one would expect.

      The problem with a sensor like the KAF8300 is the way it is handled/used. Today's trend is to go for short / medium exposures and lots of them. That is not the style of imaging for sensors like the KAF8300. How long are your exposures? Anything less than 10 minutes and results will be poor. Ideally aim for 15-20 minutes per exposure. One of the things that distinguishes CMOS sensors from CCD sensors is the level of read noise. My ASI1600 has 1.7e of read noise. The KAF8300, depending on the camera vendor, will have about 6-8 times more than that. The only difference between 60x1 minute and 3x20 minutes of exposure is in read noise - the higher the read noise of the camera, the longer you have to expose to get the same results. I would easily expose x10-20 longer with a CCD like the KAF8300 than with a low read noise sensor like the ASI1600. If I opt for 1-2 minute exposures with the ASI1600 - go figure how long your exposures need to be with a CCD like the KAF8300 to overcome read noise.

      Another issue with the KAF8300 might be in processing. These cameras produce 16 bit data, and as soon as you start calibrating that you want to be in 32 bit floating point mode, but not many people use a 32 bit workflow from the start. The QE comparison is sketched below.
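      The QE comparison, as a small sketch (the peak QE values are the ones quoted above):

```python
def matching_exposure_hours(reference_hours, qe_reference, qe_other):
    """Hours needed with the other sensor to collect the same signal, by peak QE alone."""
    return reference_hours * qe_reference / qe_other

print(0.58 / 0.40 - 1)                          # ~0.45 -> 45% QE advantage of the ASI1600
print(matching_exposure_hours(4, 0.58, 0.40))   # ~5.8h with the KAF8300 to match 4h
```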
  22. What will it be for? Barlow lenses tend to push the focus position further out - something easily handled with an extension tube. If it is for imaging, the question is what sort of imaging, or rather what sensor you will be using (size) and for what purpose. With planetary imaging, any barlow will really do. If you want to cover the whole sensor, then you need to think about sensor size - not many barlows will cover an APS-C sized sensor, at least not those in 1.25" format. For regular barlow / planetary use, I guess this will be a sensible budget option: https://www.firstlightoptics.com/barlows/bst-starguider-2x-short-barlow-lens.html - do bear in mind that the sensor/barlow distance determines the magnification factor. Telecentric lenses don't significantly change the focal point, nor do they significantly change magnification with sensor distance. I've heard that these are very good value for the money: https://www.firstlightoptics.com/barlows/explore-scientific-2x-3x-5x-barlow-focal-extender-125.html and they also come in a 2" version, which would be good for a larger sensor: https://www.firstlightoptics.com/barlows/explore-scientific-2x-barlow-focal-extender-2.html
  23. Sensors have QE smaller than 100% - roughly around 50-80% depending on how good they are (some even as low as 25-30%) - so you are already losing as much as half of the photons falling on any one particular pixel. This works for both absolute and relative photometry - you just correct for it (in relative photometry there is nothing to be done, as it affects both the target and the comparison star). It is not much different from using a telescope of a different diameter - just more or fewer photons gathered over a period of time. Want better SNR - change the period of time over which you collect.
  24. Yes you can - you don't need to debayer the image using any sort of interpolation - you can simply extract the bayer matrix per channel as separate images and use those as monochromatic images. Another thing that you can do is to create a color conversion model. Trichromatic sensors with a bayer matrix have a certain QE response curve in each color. A V filter also has a certain response curve. You can derive a transform that will best model the V response curve based on the R, G and B curves. To put it simply: you scale blue and red and subtract/add from/to green so that the marked out areas sort of cancel out or add up and you get close to this curve: A minimal sketch of fitting such a transform is below.
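      A minimal sketch of deriving such a transform by least squares, assuming you have the sensor's R, G, B QE curves and the V filter response sampled on a common wavelength grid (the arrays below are placeholders, not real curves):

```python
import numpy as np

# Placeholder response curves sampled on a common wavelength grid (not real data)
wavelengths = np.linspace(400, 700, 61)           # nm
R, G, B = (np.random.rand(61) for _ in range(3))  # sensor QE curves would go here
V = np.random.rand(61)                            # Johnson V filter response would go here

# Find coefficients a, b, c so that a*R + b*G + c*B best matches V (least squares fit)
A = np.column_stack([R, G, B])
coeffs, *_ = np.linalg.lstsq(A, V, rcond=None)
print(coeffs)   # weights to apply to the R, G and B channels

# Apply the same weights to calibrated channel images to get a synthetic "V" image:
# v_image = coeffs[0] * r_image + coeffs[1] * g_image + coeffs[2] * b_image
```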