Everything posted by vlaiv

  1. For some reason it says 404 not found for me
  2. I'd say very limited use - maybe Venus UV imaging to capture cloud formations, but that is about all I can think of. The atmosphere is not very transparent in the UV part of the spectrum, so the already faint UV signal is not very interesting.
  3. Simply put - take the response curve graphs of both filters and multiply them together; the product is the response curve of the combination. This means that some combinations of filters simply won't work. Take for example a blue filter stacked with an Ha filter - you'll get a complete block over the whole spectrum, as blue passes only 400-500nm while Ha passes around 656nm. Some combinations will work like a single filter - take a UHC type filter that passes both OIII and Ha and stack it with an Ha filter, for example - the result will be the same as using the Ha filter alone. Sometimes a combination of filters makes sense - you can take, for example, an Astronomik L3 filter and combine it with R, G and B filters to remove unfocused parts of the spectrum if you have residual CA (say a fast FPL51 doublet scope or something like that) - or skip B and collect only LRG. As already pointed out above - there will be an increased possibility of reflection artifacts with stacked interference filters.
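A quick numeric sketch of the multiply-the-curves rule. The curves below are idealized top-hat passbands I made up for illustration (not real filter data), but they show why blue + Ha gives a total block:

```python
import numpy as np

# Hypothetical, idealized transmission curves sampled at 1nm steps
wavelengths = np.arange(380, 720)                                   # nm

blue = ((wavelengths >= 400) & (wavelengths <= 500)).astype(float)  # "blue": 400-500nm
ha = (np.abs(wavelengths - 656.3) <= 3.5).astype(float)             # "Ha": ~7nm around 656.3nm

# Stacked filters: response curves multiply point by point
combined = blue * ha

# The passbands don't overlap, so the product is zero everywhere - total block
print(combined.max())
```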
  4. It will reduce a bit of the blue and red bloat on stars. Being a doublet scope, it has some residual CA. For visual that is probably not an issue, since eyes are not that sensitive in the far parts of the spectrum (around 400nm and around 700nm on the other side of the visible range). Astronomik released 3 different luminance (UV/IR cut) filters - L1, L2 and L3 - each one a bit more restrictive in the far parts of the spectrum. With FPL51 ED doublets as fast as F/7, you want to filter out those parts of the spectrum to reduce that residual CA. Even some faster FPL53 doublets show slight blue fringing on some stars (that is why I asked about the model).
  5. Is that taken with the 102ED F/7? Which model is that scope (FPL-51 or FPL-53)? In either case, have you considered adding an Astronomik L3 filter to it?
  6. It does matter. Using a higher F/ratio means that you either need to use longer exposures - which is bad because you want to freeze the seeing - or read noise becomes too significant and you end up with a noisy recording (especially in poor seeing, when you can only stack a handful of subs). Using F/24 instead of F/15 spreads the light over a (24/15)^2 = x2.56 larger area - hence signal per pixel drops by a factor of x2.56. In order to reach the same signal level per exposure you need to increase exposure length x2.56. Instead of using say 5ms exposures, you'll end up using 12ms+ exposures, and that makes all the difference in freezing the seeing. You are trying to compare two very different setups for results, and if you don't control the other variables you might conclude that the above result is down to seeing or cooling or whatever - but in reality it might be down to improperly chosen exposure length or something else entirely.
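The exposure arithmetic above can be checked in a couple of lines (the F-numbers and the 5ms figure are from the post):

```python
# Light per pixel scales as 1/F^2, so F/15 -> F/24 spreads the light
# over (24/15)^2 = 2.56x larger area
f_old, f_new = 15.0, 24.0
factor = (f_new / f_old) ** 2
print(round(factor, 2))               # 2.56

# To keep the same signal per exposure, exposure length must grow by that factor
exposure_ms = 5.0
print(round(exposure_ms * factor, 1)) # 12.8ms - already too long to freeze the seeing
```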
  7. The best F/ratio for the ASI224 is around F/15; with both of your setups you pushed the F/ratio quite a bit beyond that (F/24, and I'm not even sure I can calculate the effective F/ratio for the FC100DZ). What capture parameters did you use (exposure length, duration of recording, number of stacked subs and so on)?
  8. 150PDS + coma corrector on an HEQ5, and add an AZ mount for visual. Visual is a pain on an EQ mount with a newtonian - AZ is the better option. The above will actually overshoot your budget by a small margin, so maybe a second hand HEQ5, or alternatively an EQ5 (that depends on whether you value imaging more or less than visual). There are a few more combinations, like getting an AZ-EQ5 as a single mount and using it in EQ mode for AP and AZ mode for visual (if you want goto for visual).
  9. You can do either - make one yourself or, if you don't want to mess with things, purchase one. An artificial star is anything that will act as a point source to the telescope. If you place it far enough away, it will act like a true star. It is used for different things - scope collimation, checking the optics of the telescope and, in general, all the things that you would do with a regular star - but it can be used when it's cloudy and does not suffer from seeing. There are several ways to make one: 1. A small ball bearing will act as an artificial star if placed far enough away and illuminated by a bright source (the shiny little ball will have only one point that reflects light in the direction of the telescope and will look like a single dot). 2. A strand of optical fiber with an LED on one end will act as an artificial star. Or you can purchase one: https://www.firstlightoptics.com/other-collimation-tools/hubble-optics-5-star-artificial-star.html - the above is quite a "crude" one at 50µm, but there are models with a smaller hole, like this 22µm one: https://www.teleskop-express.de/shop/product_info.php/info/p10781_TS-Optics-artificial-Star-for-Telescope-Tests-and-Collimation.html or an even smaller 9µm hole: https://www.teleskop-express.de/shop/product_info.php/info/p7258_Pierro-Astro-PocketStar---Artifical-Star---White-Light---only-9-micron-diameter.html I have the TS one with the 22µm hole. Here is an example of a test I did on a Samyang 85mm F/1.4 lens, from left to right: F/1.4, F/2, F/2.8 and F/4. The artificial star was at 5m distance and the test was performed in the basement. Here is another example, comparing lens edge vs center performance at different F/stop settings (done with a mono camera and no filter, image enlarged to 400%).
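As a rough sanity check that a pinhole at close range really behaves like a point source, you can compare its angular size against the Airy disk of the optics. The 22µm hole, 5m distance and 85mm F/1.4 lens are from the post; the 550nm wavelength is my assumption:

```python
hole_um, distance_m = 22.0, 5.0      # TS artificial star at 5m
focal_mm, f_number = 85.0, 1.4       # Samyang 85mm F/1.4 wide open
wavelength_nm = 550.0                # assumed green light

RAD_TO_ARCSEC = 206265.0

# Angular size of the pinhole as seen from the lens
hole_arcsec = (hole_um * 1e-6 / distance_m) * RAD_TO_ARCSEC

# Airy disk angular radius: 1.22 * lambda / D, with D = focal length / F-number
aperture_m = (focal_mm / f_number) * 1e-3
airy_arcsec = 1.22 * wavelength_nm * 1e-9 / aperture_m * RAD_TO_ARCSEC

print(round(hole_arcsec, 2), round(airy_arcsec, 2))   # ~0.91 vs ~2.28 arcsec
# Pinhole is comfortably smaller than the Airy disk -> acts as a point source
print(hole_arcsec < airy_arcsec)
```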
  10. While the ASI071 is not the best choice for planetary, it can actually work quite ok if you set things up properly. The main issue with that camera is that it has relatively high read noise (compared to proper planetary cameras). First things first, the ASI071 has a large pixel size of 4.78µm. This means that you need to barlow your scope to a certain F/ratio to get the image your scope is capable of. In this case you need about F/18.7 - round that up or down to the nearest whole number, so F/19 or F/18. What scope are you using? From your signature I see that you have a 130PDS and a 200P - both F/5 scopes? You'll need at least a x3 or even a x4 barlow to get to the wanted F/ratio. Next is reducing read noise as much as possible. For this you need to use a gain value of 200+, which will give you around 2.2e - 2.3e of read noise. Not ideal, but far from useless. The following step is to ensure you can capture enough frames. Use an ROI of say 640x480 - that will give you something like 70fps. Again, not ideal, but not bad either. It seems that this camera does not have an 8bit mode - which is a shame, but if it does, use it, as it will give you another small boost in FPS. Set exposure length at say 5ms (don't look at the histogram; the image might be dark, but that is ok - it will be fine after stacking). The above should be enough to get you started in the right direction with this camera. Here is another tip - if you are interested in planetary and you already image, why not consider adding guiding to your setup? Most planetary cameras double as excellent guide cameras.
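The F/18.7 figure is consistent with the common critical-sampling rule of thumb F ≈ 2 × pixel size / wavelength (my assumption about which formula is behind it), using ~510nm green light:

```python
# Assumed rule of thumb: for Nyquist sampling of the diffraction-limited
# image, target F-ratio ≈ 2 * pixel_size / wavelength
pixel_um = 4.78          # ASI071 pixel size from the post
wavelength_um = 0.51     # ~green light

f_ratio = 2 * pixel_um / wavelength_um
print(round(f_ratio, 1))          # 18.7 - matches the figure above

# Barlow needed on an F/5 scope (130PDS / 200P)
print(round(f_ratio / 5.0, 1))    # ~3.7x, hence a x3 or x4 barlow
```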
  11. I'm not particularly versed in interpreting CCD Inspector results, but I did look at the first two images and I can't really say that there is something wrong with them in terms of tilt or curvature or anything. CCD Inspector results do depend on the actual stars being imaged. You should really pick a rather uniform star field, without much in terms of other targets, to assess your optics. Another option would be to use an artificial star. You can assess things with an artificial star without CCD Inspector. Just place the artificial star in the center of the FOV and take one image. Then slew the scope so that the star is in each corner and take another image (same exposure). Measure the FWHM of all images and compare. Ideally you want the numbers to be close and the corners to be the same (maybe the center will not be the same as the corners - but as long as the corners are roughly the same, it is ok). Flatteners and focal reducers will not have the same spot diagram in the center and in the corners. This can create a "sense" of field curvature where there is none (CCD Inspector will see higher FWHM at the edges than in the center). For example, here is the Riccardi FF/FR: RMS spot radius is 3 microns on the optical axis but 6 microns at 11mm away from center. This does not mean that the field is curved and that there is defocus at 11mm - it simply means that the design of the optical element is not perfect (and it never is) and that there are some aberrations that make stars a bit larger at the edge of the field (but much less so than if actual field curvature were present). As long as your stars are round over the whole field and the differences in FWHM are very small, you have a good field.
  12. +1 for IR/light leak. IR can penetrate plastics, and if you replaced the IR cut filter with one that passes more light, you might have an IR leak. The alternative is that you forgot to cover the finder window.
  13. Well, we can easily test that - we simply take a set of bias subs at one temperature and another set at a different temperature and compare the two. It is better to verify, to be on the safe side.
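A toy simulation of that test (synthetic numbers - the 500 ADU offset and read noise values are made up, just to illustrate the procedure of differencing two master biases):

```python
import numpy as np

rng = np.random.default_rng(0)
offset, shape = 500.0, (100, 100)     # made-up bias offset and sensor size

def master_bias(read_noise, n_subs=50):
    """Average a stack of synthetic bias subs into a master bias."""
    subs = offset + rng.normal(0.0, read_noise, (n_subs,) + shape)
    return subs.mean(axis=0)

cold = master_bias(1.5)   # e.g. cooled sensor
warm = master_bias(1.7)   # same offset, slightly noisier at higher temperature

# If the bias *signal* does not depend on temperature, the difference of the
# two masters is pure noise with a mean of ~0 ADU
diff = warm - cold
print(round(float(diff.mean()), 1))   # 0.0
```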
  14. I'd say that most software utilizes other down sampling methods, as they implement general resampling. Binning is very constrained in how it works - it is applicable only to down sampling and only by integer factors: you can bin 2x2, 3x3 and so on, which reduces image size to one half, one third, one quarter and so on. Not really handy for general resampling operations - it can't be used for image alignment (translation / rotation), for example, or for reducing an image to 75% of its original size. As far as hard evidence goes - I just gave you an example above that you can easily replicate, and I also gave you the rationale for why it works. I can put it in a bit more mathematical terms. Say you want to "average" two noise values with arbitrary weights. Let the weight of the first be w and of the second 1-w (we want the average of the signal to remain constant, and we assume both samples carry the same signal component). We are thus adding two noise values a and b with weights w and 1-w. The resulting noise value is the square root of the sum of squares of the weighted components (noise adds like linearly independent vectors): result = sqrt(a^2 * w^2 + b^2 * (1 - 2*w + w^2)). As with stacking, we assume that a and b are equal in magnitude, or nearly so - a valid assumption if the signal over the two samples is the same (noise magnitude being the square root of signal plus read noise). With a = b = 1: result = (2*w^2 - 2*w + 1)^(1/2). We now take the derivative of that expression and equate it with zero (to find the minimum): 0 = (4*w - 2) / (2 * (2*w^2 - 2*w + 1)^(1/2)). The denominator can't be zero, so the numerator must be: 4*w - 2 = 0 => w = 1/2. The resulting noise is lowest when we take both samples with the same weight. This logic extends to multiple samples. Only binning uses samples with equal weights (and linear interpolation, which coincides with it in this special case).
Take cubic interpolation - the coefficients are given by the expression here: https://www.paulinternet.nl/?page=bicubic They are not all equal weights, so the resulting noise reduction will not be the best possible (unlike the equal 1/4, 1/4, 1/4, 1/4 case). In the end, I'd like to address the issue of binning astronomical images. I recommend binning for over sampled images. Such images won't have higher frequencies to cause aliasing issues to begin with, but regardless of that, I pointed out that there is something called pixel blur which effectively acts as a low pass filter for binning. Here is an example of the sort of low pass filter I'm talking about: I created a Gaussian noise image and then averaged each group of 4x4 pixels (right image). This is what happens when we bin - we average a certain group of pixels - except here I kept the original sampling rate so we can compare the frequency representations of both. Here are the respective FFTs - you can clearly see that the bin x4 version has had a low pass filter acting on it.
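The w = 1/2 result above is easy to confirm numerically - evaluate sqrt(2w² - 2w + 1) on a grid and find its minimum:

```python
import numpy as np

# Noise of a weighted average of two equal-magnitude, independent noise
# samples with weights w and 1-w (from the derivation above)
w = np.linspace(0.0, 1.0, 1001)
noise = np.sqrt(2 * w**2 - 2 * w + 1)

print(w[np.argmin(noise)])           # 0.5 - equal weights minimize the noise
print(round(float(noise.min()), 4))  # 0.7071 = 1/sqrt(2), same as stacking 2 subs
```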
  15. The sky is very far away and the tablet is very close - that is the difference. In an ideal world, only rays of light a few degrees off the optical axis should reach the sensor (depending on focal length and the size of your FOV) - but a flat panel close to the opening of the telescope emits light in all directions and at all angles. In fact, now that I think about it - maybe the issue with a screen as a flat panel is not that it is close, but the viewing angles. Take your tablet and see if illumination and color change with angle. That might be causing the issue. In any case, you can get a flat panel for a very small amount of money nowadays. Don't purchase ready made flat panels for astronomy - those are ridiculously expensive. You can purchase a simple LED panel light. I recently installed lighting in my new house and had the opportunity to see what cheap LED lighting looks like - and it struck me: ideal for telescope flat panels. They come with an AC/DC transformer that you can easily remove so you can hook them up to 12V.
  16. Are you sure about bias being temperature dependent? Bias is just the read out signal and nothing more - there is no time for any sort of dark current to form. Bias subs can be noisier at higher temperatures, but their signal should in no way depend on temperature.
  17. Depends on the camera that you are using. Bias should in principle be good from a single session. You can verify this by using bias from one session, taking another set of bias subs days or weeks later, and comparing the masters (subtract the two master bias files and you should get pure noise with an average value of 0 ADU). Flats and flat darks should be taken each session, unless you keep everything assembled between sessions (like in an observatory, or you take the whole lot off the mount and store it as is). Even then, a dust particle can move, and then you need to take another set of flats. Darks depend on whether you have a set point cooled camera (and since you did not mention one, I'm guessing you use a DSLR?) - if you have set point cooling, then you can shoot one batch and use it for a long time (until something changes - any of your settings, or the temperature you can achieve in summer vs winter, or you decide to refresh your dark library). In general, if you change parameters (gain, offset, ISO, exposure length and so on), you need to redo your calibration files.
  18. Not sure what sort of mathematical proof you are looking for. Given the usual set of down sampling methods available in software, we can both test and derive the SNR improvement in the down sample x2 case (we simply need to play around with addition of noise, which adds in quadrature). Sure, it is not conclusive proof that binning is the best in terms of SNR in general - but then again, no such claim is made. By the way, there is something called pixel blur that effectively deals with the aliasing effects of down sampling by binning. Sensor pixels do surface sampling rather than point sampling. It can be shown that this causes the point sampled signal to be convolved with the pixel surface function - or filtered by its Fourier transform in the frequency domain. Binning is effectively the same as "joining" surfaces, so it again does filtering in the frequency domain. The above statement can be proven, and I did that already in one discussion here:
  19. What scope are you using? One thing to try would be to add a long, dew shield kind of shroud made out of very dark material (flocked on the inside) and take your flats like that. Flat panels work well with baffled and flocked scopes. Some scopes can't cope with that much light hitting the inside of the scope at different angles. Newtonians are prime candidates for this sort of behavior.
  20. I'd say CEM120. The CEM70 is a replacement for the CEM60 (my personal thoughts on that are that iOptron initially priced the CEM60 too low and would not risk a price increase, so they withdrew that model and instead released the CEM70 at a much higher cost), and both are EQ6 class mounts. The CEM120 is equivalent to the EQ8, and I'd rather take the CEM120 than the EQ8. I'd also choose the version without encoders, as I regularly guide.
  21. I don't really have any particular source, but it is a fact that can easily be verified. Take ImageJ and conduct a couple of tests. Here, I'll do a couple of examples for you, and then you can experiment further to see what sort of results you get. Just generate an image with pure Gaussian (or Poisson, or a mix of the two) noise, reduce its size by different methods and measure the standard deviation. Results: as you can see, only bin x2 and linear interpolation (which in this particular case is the same mathematical operation as binning - an average of 4 adjacent pixels with the same weights) have a standard deviation that is half the original value. This is the same thing as stacking - SNR improvement is equal to the square root of the number of stacked subs, in this case the square root of the number of binned pixels (2x2 = 4 adjacent pixels, sqrt(4) = 2, so noise is reduced by a factor of 2). If you want an intuitive explanation, here is one: out of all rectangles, the square has the shortest diagonal with respect to its shorter side. Only binning uses two samples (talking in the 1D case now, for ease of understanding) with equal weights. All other interpolation algorithms use two or more pixels with weights that are no longer equal. The benefit of that approach is that the resulting low pass filter more and more resembles the ideal box filter in the frequency domain. In fact, you can analyze the shape of the filter implied by an interpolation method in the following way. Create an image with a random section in it, then translate it by (0.5, 0.5) with the selected interpolation algorithm (by the way, this method can be used to see how different translation offsets impact the low pass filter as well) and take the FFT of both images - left is the FFT of the original image, right is the FFT of the translated image.
Divide the resulting FFTs and you'll get the filter response. This works due to a property of the Fourier transform - shifting a function in the spatial domain does not alter its magnitude spectrum. In the end, you can plot profiles (an X axis cross section, for example) of different interpolation algorithms to compare the filters. Here is a comparison of cubic convolution and cubic b-spline interpolation - it is obvious that cubic convolution blurs the image more (for example when aligning frames for stacking). I've done this sort of comparison before and posted the results here on SGL. Let me see if I can find that thread. Here it is (one of them):
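The ImageJ test is easy to reproduce in a few lines of numpy - generate unit Gaussian noise and bin it 2x2 (here via a reshape trick, not any particular software's bin command):

```python
import numpy as np

rng = np.random.default_rng(42)

# Pure Gaussian noise image, sigma = 1
img = rng.normal(0.0, 1.0, (512, 512))

# 2x2 binning: average each block of 4 adjacent pixels with equal weights
binned = img.reshape(256, 2, 256, 2).mean(axis=(1, 3))

# Like stacking 4 subs: noise drops by sqrt(4) = 2
print(round(float(img.std()), 1))      # ~1.0
print(round(float(binned.std()), 1))   # ~0.5
```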
  22. Forgot to say - in stacking we can have the best of both worlds. Since we need to align images anyway, we should use an interpolation method that introduces the least low pass filtering. This preserves sharpness and the noise distribution (good for stacking). Binning introduces pixel blur, but if we do it in a special way we won't introduce pixel blur - we can do a "split bin". We take each sub and create 4 (or 9, 16, etc ...) images out of it by taking every odd / even pixel into its own sub image. This creates more subs to stack, but we did absolutely nothing to the individual pixels. Stacking more subs means SNR improvement (the same as we get from binning). We just need to make sure we use a good resampling method for the alignment procedure - namely Lanczos, if we have optimally sampled data after the "binning".
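A minimal sketch of the "split bin" idea (my own toy implementation, not taken from any stacking software):

```python
import numpy as np

def split_bin2(sub):
    """Split a sub into 4 sub-images of every other pixel - no pixels are
    combined, so no pixel blur is introduced."""
    return [sub[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]

sub = np.arange(16, dtype=float).reshape(4, 4)
parts = split_bin2(sub)
print(len(parts), parts[0].shape)     # 4 parts, each half-size: (2, 2)

# Averaging the 4 parts (i.e. stacking them with no shift) reproduces
# ordinary 2x2 binning exactly - the same SNR gain, obtained at stacking time
regular_bin = sub.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(np.allclose(np.mean(parts, axis=0), regular_bin))   # True
```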
  23. There is a big difference between binning and other forms of down sampling. Binning does not introduce pixel to pixel correlation (which happens with a low pass filter) and has a precisely defined SNR improvement. That SNR improvement is higher than most other down sampling methods - except those that smooth the data too much and hence lower the resolution of the image (say bicubic interpolation). A good way to compare different down sampling techniques would be threefold: 1. Generate random noise of known magnitude, then down sample by a certain factor (usually x2) and measure the resulting noise magnitude - this shows how much SNR improvement there is (the assumption is that proper down sampling won't change the average signal value, so we can examine just the noise component). 2. Take a PSF of certain FWHM that is over sampled by a factor of x2 and measure the resulting FWHM after down sampling, to see how much blurring the down sampling introduced (binning actually introduces a bit of blurring, known as pixel blur - other methods introduce more or less blurring, depending on the method). 3. Take a number of random noise images, resample them down, stack them and measure the resulting noise levels. This measures the pixel to pixel correlation introduced by the down sampling. In the ideal case (binning is an example of this), stacking produces the expected result for random samples - noise decreases by the square root of the number of stacked images.
  24. Drizzle and bin are sort of opposite operations. Bin reduces sampling rate and regains SNR in the process. Drizzle is supposed to increase sampling rate but loses some SNR. The original drizzle algorithm required very precise dithers. I'm not sure how efficient it is in amateur setups, and nowadays pixels are small enough that you don't need to drizzle at all. Drizzle was developed for the Hubble space telescope, which had a massive aperture and long focal length, while the scientific CCD sensors used had huge pixels - which led to under sampling because of the lack of atmosphere. We don't have anything like that. We have an atmosphere that blurs our images, we have small pixels and small aperture scopes (and even large aperture scopes are limited by the atmosphere), and, in the end, our dithers are anything but precise enough (dithers for drizzle need to point the scope with an accuracy of a fraction of a pixel - something HST was capable of but most amateur setups are not). On the Edge HD 11 you'll need to bin your data, not drizzle it. While drizzle is an interesting concept to know about, it is really not feasible in most if not all amateur setups.
  25. Yes - just place the estimator box over the planet and hit that button; it will align the channels as part of the sharpening/processing workflow (I took your image and set all wavelets to 0 to avoid additional sharpening and only aligned the channels - but it is best to do it on the 16 bit "raw" image as part of processing).