Everything posted by vlaiv

  1. @Stu According to this thread over at CN: https://www.cloudynights.com/topic/544060-spherical-abberation-correction-and-diagonal-prism-size/ - it looks like spherical aberration itself is not the main issue - it is spherochromatism that changes with the use of a prism diagonal, because the prism is glass and affects different wavelengths differently.
  2. https://www.teleskop-express.de/shop/product_info.php/info/p3881_TS-Optics-PHOTOLINE-80mm-f-6-FPL53-Triplet-APO---2-5--RAP-Focuser.html + https://www.teleskop-express.de/shop/product_info.php/info/p11122_Riccardi-0-75x-APO-Reducer-and-Flattener-with-M63x1-Thread.html
  3. For a mirror that would have a point, but for a prism there is no point. Total internal reflection is responsible for - well, reflection in a prism. There is no reflective layer of any sort - just the interplay of angles and refractive indices. Light never even reaches the silvered side of the prism for it to have any effect. If there were any effect, we could take black paint (or cloth, so as not to be destructive) and cover the reflective side of the prism, and it would "lose" its power (at least to a degree) - but that does not happen. If you take a prism and put it on a black surface, it will still reflect perfectly fine - as long as the angles are right.
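     As a quick sanity check, here is a minimal sketch of the geometry (the BaK4 refractive index of ~1.569 at 587 nm is an assumed textbook value): it computes the critical angle and confirms that the 45 degree incidence at the hypotenuse face exceeds it, so the face reflects with no coating at all.

     import math

     n_bak4 = 1.569  # approximate refractive index of BaK4 at 587 nm (assumed)
     n_air = 1.0

     # TIR occurs for incidence angles beyond arcsin(n_air / n_glass)
     critical_angle = math.degrees(math.asin(n_air / n_bak4))
     print(f"Critical angle for BaK4/air: {critical_angle:.1f} deg")  # ~39.6 deg

     # In a 90-degree star diagonal prism, light meets the hypotenuse at 45 deg
     print(f"TIR at 45 deg: {45.0 > critical_angle}")  # True - no silvering needed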
  4. I think I remember reading something like that - but I have no idea how it might work. I will look to see if I can find anything on spherical aberration introduced by a prism. It needs to add either positive or negative spherical, which would then add up with, or cancel, any spherical aberration the scope has.
  5. I'm not buying that. It is certainly not a Sitall prism, as Baader themselves say BaK4 glass is used in their prism. The wiki article on prism star diagonals states: "Also a prism will never degrade over time as a mirror will since there is no reflective metal coating to degrade from oxidation." Indeed, reflection in a prism is achieved by the https://en.wikipedia.org/wiki/Total_internal_reflection mechanism - it works in water as well, not just glass, so I don't see how it can degrade unless the actual surface is scratched / destroyed.
  6. Is the Baader BBHS prism diagonal just a marketing trick? Prism diagonals don't really need silver-coated surfaces to do their thing, so why put an expensive BBHS coating on one (and it must be expensive, since a mirror treated that way costs more than others) if it does nothing?
  7. That should not depend on the camera used. Most cameras should be able to achieve proper color in the image if the data is processed properly.
  8. Yes, there is a simple formula, found here: https://en.wikipedia.org/wiki/Spatial_cutoff_frequency. The diffraction cutoff frequency is f_c = 1 / (lambda * F-ratio), and you need to sample at twice the cutoff frequency, so the pixel size must equal 1 / (2 * f_c) (pixel size is the sampling period, i.e. 1/frequency). Rearranging gives: F-ratio = 2 * pixel size / lambda. In the above case, F-ratio = 2 * 2.4µm / 0.51µm ≈ 9.412 = F/9.4 (0.51µm = 510nm). You can also use other wavelength values for light. ~500nm is good for broadband imaging (in theory 400nm is the shortest wavelength you'll capture, but short wavelengths are more prone to seeing effects and it is better to go with 500-510nm for the calculation), but if you shoot through an Ha filter, for example, you would put 0.656µm or 656nm as lambda / the wavelength of light.
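     A minimal sketch of that calculation (the pixel size and wavelengths are just the example values from above):

     def optimal_f_ratio(pixel_size_um: float, wavelength_um: float) -> float:
         """Critical-sampling F-ratio: sampling at twice the cutoff frequency
         f_c = 1 / (lambda * F) gives F = 2 * pixel_size / lambda."""
         return 2.0 * pixel_size_um / wavelength_um

     # ASI178 (2.4 um pixels) at 510 nm, 550 nm and Ha at 656 nm
     for wl_um in (0.510, 0.550, 0.656):
         print(f"lambda = {wl_um * 1000:.0f} nm -> F/{optimal_f_ratio(2.4, wl_um):.1f}")
     # F/9.4, F/8.7, F/7.3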
  9. Actually no - it is quite close to the optimum sampling rate for the ASI178. The ASI178MM has a 2.4µm pixel size, and at 510nm the optimum F-ratio for that pixel size is F/9.4, while at mid spectrum, 550nm, the optimum is F/8.7.
  10. I would look into iOptron offerings, but those in the HEQ5/EQ6 range tend to be more expensive. Skywatcher models are really affordable for what they offer, and if you base your budget on them it is unlikely you will find a better match at that price point. I would still consider going the iOptron way. I have an HEQ5 mount now, but I'm looking into getting a CEM120 (or possibly a Mesu 200) next. In my experience, SW mounts can be made to work well, but it requires a lot of fiddling - the mechanical finish of these mounts is very rudimentary. My sense (also from feedback on the internet) is that iOptron is much better in that regard. There are a couple of features of iOptron mounts that make them better in my opinion. I don't care much for iGuider / iPolar / EC and all that iFancy stuff, but I do like their magnetically floating worm, which is a rather good solution for backlash. They also have belt drive (an aftermarket addition on SW mounts - except for the EQ6-R and AZEQ6 models). Stepper motor resolution is also good (better than SW mounts).
  11. It is in the same ballpark as the ASI1600. The ASI1600 has some advantages but also some drawbacks. Advantages would be: shorter subs required (much lower read noise), much faster readout speed, and small pixels plus the ability to bin in software (which gives flexibility in terms of sampling rate used). Disadvantages of the ASI1600 are that it has software binning instead of hardware binning (though this is largely offset by the low read noise) and issues with microlens diffraction artifacts. I'd say that for a beginner the KAF8300 is a very good sensor - provided they know how to utilize it best (proper calibration, and long exposures to offset the high read noise and slow readout - which means good guiding / a good mount).
  12. Yes it would. It does not need to be of the same exposure, but the signal does need to be "compatible" - so scaled properly before the math is performed. Also, flats must be scaled to a mean of 1 in order not to change ADU values. PixInsight, for example, is a bit problematic as it tends to scale all values into the 0-1 range, both at import time (division by 65535) and after pixel math. The above math is best done in software that keeps ADU values as they are - like ImageJ or similar.
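     A minimal numpy sketch of that calibration (the master frame names and random data are hypothetical placeholders for your own stacked frames):

     import numpy as np

     # Hypothetical master frames, loaded as float arrays in raw ADU
     light = np.random.normal(1000, 10, (100, 100))
     dark = np.random.normal(100, 5, (100, 100))
     flat = np.random.normal(30000, 50, (100, 100))
     flat_dark = np.random.normal(100, 5, (100, 100))

     flat_corrected = flat - flat_dark
     # Scale the flat to a mean of 1 so the division leaves ADU values unchanged
     flat_unity = flat_corrected / flat_corrected.mean()

     calibrated = (light - dark) / flat_unity  # stays in the original ADU scale
     print(calibrated.mean())  # ~900 ADU, as expected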
  13. 1. Mesu 200 mount 2. Large-aperture RC type scope - like a 12" RC 3. Full frame CMOS camera. Although you have an excellent budget, I would actually exceed it with my "dream" combination - the same as above, except with a 16" RC and a mono full frame CMOS camera (ASI6200) + 2" filters, a filter wheel and an OAG.
  14. Much more information is carried by luminance than by color. We can do a simple simulation to demonstrate the effect: take a baseline image with very fine detail, split it into luminance and chrominance components, apply a gaussian blur to each part separately, and examine the results. [Images in the original post: the baseline image, the luminance component blurred by 3.0px gaussian blur, and the chrominance components blurred by 3.0px gaussian blur.] The results are self-explanatory - blurring luminance destroys the fine detail, while blurring chrominance is barely noticeable - and they demonstrate one very important aspect of mono + filters: you can dedicate much more time to luminance to get detail and shoot color for a much shorter time, as noise in the color data will be much less obvious (and of course, you can bin color data to improve SNR without loss of detail in the final image - provided that you respect a certain workflow and use luminance as actual luminance).
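     A minimal sketch of that simulation (assuming OpenCV is installed and "baseline.png" is any test image with fine detail; YCrCb is used here as a stand-in luminance/chrominance split):

     import cv2
     import numpy as np

     img = cv2.imread("baseline.png")  # hypothetical test image
     y, cr, cb = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb).astype(np.float32))

     sigma = 3.0
     # Blur only luminance: fine detail visibly disappears
     y_blur = cv2.GaussianBlur(y, (0, 0), sigma)
     lum_blurred = cv2.cvtColor(
         cv2.merge([y_blur, cr, cb]).astype(np.uint8), cv2.COLOR_YCrCb2BGR)

     # Blur only chrominance: the image looks nearly unchanged
     cr_blur = cv2.GaussianBlur(cr, (0, 0), sigma)
     cb_blur = cv2.GaussianBlur(cb, (0, 0), sigma)
     chroma_blurred = cv2.cvtColor(
         cv2.merge([y, cr_blur, cb_blur]).astype(np.uint8), cv2.COLOR_YCrCb2BGR)

     cv2.imwrite("lum_blurred.png", lum_blurred)
     cv2.imwrite("chroma_blurred.png", chroma_blurred)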
  15. That was my first thought, but it can easily be checked. Can you filter by type - like CCD / CMOS? Or maybe check the models being offered manually (I know it is a bit of work, but in the interest of science?).
  16. Same way as you do for any imaging - the only difference is the format. I do SER movies for everything planetary. Take about 100 to 200 flat frames in SER, with a flat panel, and do the same for flat darks. When you finish the imaging run, take darks by covering the scope and using the same settings you used for the recording. In the end, just load them into PIPP in their respective tabs - a simple drag & drop of the SER file works, I think.
  17. Take flats. I think it is good practice to do full calibration on video, as with any imaging: darks, flats and flat darks. PIPP will do the rest for you.
  18. Have a look at something like this: https://www.myastroscience.com/dslrcoolerbox I think that for a DSLR the best option is to put it in a cooled box rather than just adding a Peltier to the camera itself. This is because of the dewing / icing issues that can arise if you leave a cold surface exposed to moist air. Dedicated astro cameras have a sensor chamber that often has a heated window - or at least desiccant tablets or similar to remove moisture.
  19. There is a link to a similar product that I posted - by some accounts a better (more precise) one. It is used by placing it on the computer screen in a certain way and then running the software (a license comes with the hardware); it tells you how to adjust your screen and creates a color profile for accurate color reproduction. Have a look at this video: https://www.youtube.com/watch?v=xwI61SR-ua0
  20. https://www.bhphotovideo.com/c/product/1506566-REG/x_rite_eodisstu_i1display_studio.html
  21. Ok, here it goes. We need three components of color to describe it precisely enough, right - R, G and B. Now for a bit of math. We can see that as a vector, and any linear transform of that vector that has an inverse will preserve the information (it needs to be linear, as light adds linearly). We might decide to record color as (Q, W, E), with each of Q, W and E being:

      Q = R
      W = G
      E = R + G + B

      It can easily be shown that we can reconstruct R, G and B from Q, W and E using the following equations:

      R = Q
      G = W
      B = E - Q - W

      Now look at the following graph: [graph in the original post: L, R, G and B filter response curves]. You can see that lum is equal to R + G + B to a first approximation. So, to a first approximation, you can take L, R and G and form an RGB triplet by simply using R, G and L - R - G. We can get a better approximation if we actually color calibrate our camera + filters (as we should do anyway, even when using RGB filters). That means shooting known colors and deriving a transform matrix:

      (raw_R, raw_G, raw_B) * matrix = (r, g, b)

      In this case, we simply do the following:

      (raw_R, raw_G, raw_L) * matrix = (r, g, b)

      for an appropriate matrix (which will be different than in the raw_R, raw_G, raw_B case above, but is derived the same way). That will give more correct color information than the simple subtraction, but to a first approximation the above will work. You can try it out: take any of your old data sets and simply discard the blue channel, then use pixel math to derive the blue channel by subtracting G and R from luminance. Just make sure they are properly scaled if you used different exposure lengths.

      Why ditch B and not G or R? Several reasons: cameras are usually less sensitive at short wavelengths (though this depends on the camera model), and blue is scattered more by the atmosphere - atmospheric extinction is higher at shorter wavelengths, so the blue signal is weaker. Converted into photons: there are fewer "blue" photons than "red" photons for the same energy, since shorter wavelengths are more energetic and fewer photons are needed for the same total energy output. Fewer photons - weaker signal (in photons) - poorer SNR in the blue part of the spectrum for the same exposure time. Makes sense?
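      A minimal numpy sketch of the first-approximation version (the array names and random data are hypothetical placeholders for your stacked, aligned R, G and L masters):

      import numpy as np

      # Hypothetical stacked masters as float arrays in ADU, already aligned
      raw_R = np.random.normal(500, 10, (100, 100))
      raw_G = np.random.normal(600, 10, (100, 100))
      raw_L = np.random.normal(1700, 20, (100, 100))

      # If exposure lengths differed, scale to a common exposure first, e.g.:
      # raw_L *= r_exposure_s / l_exposure_s

      # First approximation: L ~ R + G + B, therefore B = L - R - G
      derived_B = raw_L - raw_R - raw_G

      rgb = np.dstack([raw_R, raw_G, derived_B])
      print(rgb.shape)  # (100, 100, 3)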
  22. Even with individual NB filters you can still use the tri-band filter as luminance and shoot all four, but with different time ratios - say, spend 3h with the tri-band and 20 minutes with Ha and OIII (you don't even need SII in that case). Ha and OIII are used only for the false-color composition while the tri-band serves as luminance.
  23. Interestingly enough, dual- and tri-band filters are better suited for use with mono cameras - yet not many people use them that way. They are in fact the equivalent of a lum filter for NB imaging. People also don't use mono + filters in the most effective way to really show the biggest difference between mono and OSC: they shoot LRGB instead of LRG. The latter requires somewhat special processing - but no different from the processing that should also be done with LRGB and OSC data alike.
  24. I think most of the discrepancy in "useful" magnification comes from differences in the visual acuity of the observer in question. https://en.wikipedia.org/wiki/Visual_acuity On that page you have a comparative scale. 20/20 vision - often quoted as perfect vision - corresponds to a minimum resolvable angle of 1 arc minute: if you have two white lines separated by a black line, with 20/20 vision you'll be able to tell there are two white lines if they are separated by 1 arc minute. You'll be able to resolve them. In any case, it is better to talk about minimum useful magnification rather than maximum useful magnification - how much you need to magnify the image to see all there is to be seen, rather than how much you can magnify in general. You can magnify as much as you like, and I'm not sure "useful" is something I would put next to max magnification. A worked example is sketched below.
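      As a rough illustration (my own sketch, not from the post above: it assumes the Rayleigh criterion of ~138/D arc seconds for a D mm aperture at ~550 nm, and 1 arc minute of eye resolution for 20/20 vision):

      def min_useful_magnification(aperture_mm: float,
                                   eye_resolution_arcsec: float = 60.0) -> float:
          """Magnification needed to enlarge the scope's Rayleigh-limit detail
          (~138/D arcsec) up to what the eye can resolve."""
          scope_resolution_arcsec = 138.0 / aperture_mm
          return eye_resolution_arcsec / scope_resolution_arcsec

      for d_mm in (100, 150, 200):
          print(f"{d_mm} mm aperture -> ~x{min_useful_magnification(d_mm):.0f}")
      # ~x43, ~x65, ~x87 for 20/20 vision

      An observer who resolves only 2 arc minutes would need roughly double these values, which is where much of the person-to-person discrepancy comes from.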
  25. Yes, from a higher orbital to the second. Going to the ground state is the Lyman series, but that is all in the UV part of the spectrum.
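      For illustration (my addition, using the standard Rydberg formula with R ≈ 1.097e7 m^-1), the Balmer lines (down to n=2) land in the visible while the Lyman lines (down to n=1) land in the UV:

      RYDBERG = 1.0973731568e7  # Rydberg constant in 1/m

      def hydrogen_line_nm(n_upper: int, n_lower: int) -> float:
          """Wavelength of a hydrogen transition via the Rydberg formula."""
          inv_lambda = RYDBERG * (1.0 / n_lower**2 - 1.0 / n_upper**2)
          return 1e9 / inv_lambda

      print(f"H-alpha (3->2): {hydrogen_line_nm(3, 2):.1f} nm")  # ~656 nm, red
      print(f"H-beta  (4->2): {hydrogen_line_nm(4, 2):.1f} nm")  # ~486 nm, blue-green
      print(f"Lyman-a (2->1): {hydrogen_line_nm(2, 1):.1f} nm")  # ~122 nm, UV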