Everything posted by vlaiv

  1. As you say yourself - RA performance depends on where the mount is pointing. RA error will be largest near the equator - at DEC values close to 0°. As you move towards DEC 90° - your RA error will stay the same in terms of arc seconds, but it will not be the same in terms of pixels on the sensor. That makes sense if you think about it - if you point the scope at Polaris - you won't really be tracking, as Polaris stays put - you'll just be rotating your FOV at a very slow pace - one full rotation in 24h. For this reason it is best to calibrate PHD2 near the equator - close to DEC 0° - regardless of where in the sky you'll be guiding.

     Second thing I'd like to point out: if you have 0.3" / 0.5", 0.4" / 0.4" and 0.5" / 0.3" RA and DEC guide values and you can still notice star elongation in your images - then your guide setup is not measuring things correctly. You are probably using a very coarse guide setup - like a very small guide scope with a short focal length. Are you sure you entered the guider parameters correctly in PHD2 (focal length and pixel size)?

     Say you have 0.3" vs 0.5" error, you use a 6" scope and have good seeing of 1.5" FWHM. That works out to about 12% elongation, or x1.12 (one axis is 1.12 times "longer" than the other). Here is what that looks like: Do your stars look elongated like that, or more? With a smaller aperture and worse seeing they will be even rounder, as guiding performance contributes less.
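The 12% figure falls out of adding the blur sources in quadrature. A minimal sketch of that arithmetic - the ~0.77" Airy FWHM for a 150 mm aperture at ~550 nm and the 2.355 RMS-to-FWHM conversion factor are my assumptions here, not values from the post:

```python
import math

# Rough blur budget per axis (everything expressed as FWHM in arcsec).
# Assumption: independent blur sources add in quadrature.
seeing = 1.5                    # seeing FWHM
airy = 0.77                     # ~1.02 * lambda / D for a 6" aperture (assumed)
guide_ra, guide_dec = 0.3, 0.5  # guide RMS per axis

# Guide RMS converts to FWHM with the Gaussian factor 2.355
fwhm_ra = math.sqrt(seeing**2 + airy**2 + (2.355 * guide_ra)**2)
fwhm_dec = math.sqrt(seeing**2 + airy**2 + (2.355 * guide_dec)**2)
elongation = fwhm_dec / fwhm_ra

print(f"{fwhm_ra:.2f}\" x {fwhm_dec:.2f}\" -> elongation x{elongation:.2f}")
```

With these numbers the long axis comes out about 1.12 times the short one, matching the figure quoted above.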
  2. Indeed, that is an excellent image! How about this scope then: https://www.firstlightoptics.com/william-optics/william-optics-zenithstar-61-ii-apo.html or the same thing in different branding (whichever you prefer; I think FLO had it listed for a while as well, but I can't seem to find it now): https://www.teleskop-express.de/shop/product_info.php/info/p10095_TS-Optics-PhotoLine-60-mm-f-6-FPL53-Apo---2--R-P-Focuser---RED-Line.html
  3. Great idea! Yes, we are probably just a tad off topic here - but yes, a 3D printer and some clever mechanism to slide the EP left and right would help with keeping the star image on axis for the telescope but off axis for the eyepiece. Think of an EP revolver, except moving left / right rather than in a circle. A bit like this filter slider:
  4. Yes, I would check the flattener distance, as the corners don't look very good.

     That really depends on the way you do the LRGB combine. Since you used the PI method for that - well, that is what it produces.

     Yes, indeed - it looks like there is something causing (very faint) diffraction. It can be either outside the telescope or inside. Outside - any power lines or anything similar in the direction of your target? Internal - it could be as simple as a very small hair on the sensor or flattener. The difference would be visible on other stars in the image - if they all have it - it is probably external to the telescope, or at least somewhere near the front lens. If only a few stars suffer from it - it is close to the sensor and in that general area.

     That is a light pollution gradient. Flats won't deal with it and you need to remove it. What is your luminance like in terms of signal in general? If you say that red and luminance have enough signal - then the RGB combine method is not very good, and the image should have more signal and be more red in the background nebulosity.

     Did you refocus on each filter? There are a lot of red halos around stars, and that is expected with an ED doublet if you keep a single focus position. The benefit of doing LRGB is that you can refocus on each filter and reduce this effect. L will still have it unless you invest in the L3 from Astronomik.

     You should combine the RGB data first and then perform the stretch. In fact - you should stretch L and then apply the linear RGB values to that stretched L if you want to maintain RGB ratios.
  5. Interesting - so very much like the approach I suggested above - except he is using telescope + EP + eyeball instead of camera lens + camera (in principle one can also use telescope + camera as well).
  6. Indeed - both scope and lens will contribute, but these can be minimized with careful selection: long focal length is key for the lens, and large aperture for the telescope. A long lens focal length means lens aberrations will be small compared to eyepiece aberrations, and a large telescope aperture will make the Airy pattern small compared to eyepiece aberrations.
  7. @globular Do you know how these values in the table are measured? I think I know a fairly simple method for measuring the same thing. It does involve quite a bit of gear - but it's not expensive and most astrophotographers should already have some or all of it: 1. An artificial star 2. A camera and a fast, sharp, relatively long FL lens. Many people have the Samyang 135 - but any other lens of similar FL that is sharp at say F/4-F/8 will do. I borrowed the idea from @Louis D who did a wonderful set of comparison photos through the eyepiece. The only difference is that I would advocate using a large lens instead of a phone camera, because a phone camera can act as an aperture stop and reduce aberrations created by the eyepiece. That way one can actually take a photo of a star in the center, mid field and at the edge for different EP / scope combinations and post those instead of numbers. That way we can actually see what the star image looks like.
  8. I think that general consensus is that it is a good idea
  9. I'm talking about AFOV and not TFOV. AFOV is what we see with our own eyes, irrespective of the telescope and magnification used. The Moon is 30' in size when we view it with the naked eye (no magnification). You can clearly resolve features on the Moon, and the Moon is quite "big" in the sky - far from anything resembling a point source. If I were to see a 1° "star" at the edge of the field - I would be very surprised. That is twice the diameter of the full Moon. Here is a simple experiment for you - take a five pence coin and put it 1m away from you. Look at the size of it. According to the table linked above - that is the size of a star at the edge of the FOV in the Baader Hyperion Zoom at 16mm in an F/4 scope. Can you see that being true, or would you say that such an assessment is somehow flawed?
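If you want to put a number on the five pence experiment - a quick sketch (the 18 mm coin diameter is my assumption from the coin spec, not from the post):

```python
import math

# Angular size of a five pence coin held at 1 m.
diameter_mm, distance_mm = 18.0, 1000.0   # assumed coin diameter

# Full subtended angle: 2 * atan(r / d)
angle_deg = math.degrees(2 * math.atan(diameter_mm / (2 * distance_mm)))
arcmin = angle_deg * 60

print(f"{arcmin:.0f} arc minutes")
```

That comes out at roughly 62' - about 1°, or twice the full Moon - which is the size the table claims for an edge-of-field star.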
  10. Would you care to comment on the apparent issues with those results? To quote the parts of that post that are important: For comparison - the naked eye Moon is 30 arc minutes, and an average person can't resolve below 1 arc minute. How does one rate an image as perfect if the spot is just under 10 arc minutes? That is, say, 9 minutes of arc, or about 1/3rd of the full Moon - a considerable dot and not a pinpoint star. A star needs to be 1-2 minutes of arc in size for us to call it point like. Some results in that table are shown as 60+ minutes of arc - that is 1° or more. That is one's fingernail width at arm's length, or the size of Jupiter at x80 magnification. I would be seriously surprised to see a star that big in any eyepiece. If we assume there is an error in units for some reason and we are really talking about arc seconds instead of arc minutes - well, again things don't add up, as most of the spot diagrams would then be up to 1 minute of arc or just above - which is point like, so all eyepieces would be essentially perfect.
  11. I found that I get the cleanest darks if I take the camera off, put the cap on it and place it "face / sensor down" on a wooden table.
  12. The first two choices are the same, except the second comes ready made with a cell for mounting on the telescope objective (handy if you don't want to make one yourself). Baader solar foil comes in two "flavors" - ND5 and ND3.8. The first is suited for visual (and imaging), while the second is for imaging only, as it passes more light, which helps shorten exposure times. It is a very good choice for viewing and imaging. White light solar imaging is no different from any other type of planetary lucky imaging: you take a large number of very short exposures (milliseconds), select the best frames, and stack them with special stacking software (see AutoStakkert!3).
  13. Why do you have different darks for red, blue and green? Darks don't depend on the selected filter - they depend only on exposure length (and other parameters - like gain, offset and set temperature). It is customary to use the same exposure length for R, G and B (and often for L as well) as it simplifies things. If you are taking darks on your scope using the filter wheel and for some reason changing filters for each set of darks (which, btw, are not supposed to have any light signal, so using filters is sort of moot) - then I suspect you have a light leak. Most probably an IR light leak, as IR can pass through materials like plastic.
  14. I think there is a general tendency to shift images toward the blue part of the spectrum. Most stars are simply yellow or orange, and such images can look both dull and "imbalanced". I think most people have an issue with star color because when they see a daytime image taken in warm light - like candlelight or 2700K incandescent light - and the overall tone is still yellow / orange - we automatically think the image needs to be color balanced. If you compare these two images - most of us will say that the left one is "properly" color balanced. Our brain tends to do the same when it sees predominantly yellow / orange stars in an image - we "instinctively" think the image should be sort of "neutral grey" (the grey world hypothesis - the basis for one type of auto white balance algorithm).
  15. I'm sorry to have confused you. Here is what I said: Here you can clearly read that I recommend exposure time set to ~5ms (not minutes or tens of seconds as you say I did). You can also see that I said - shoot 30000 frames, and go for a 3 minute video instead of a 30s one. You responded to my statement with: and I must say that I'm guilty of not properly reading your reply. I assumed that you meant the 3 minute video is too long, not a 3 minute sub (it really does not make sense to talk about 3 minute subs when we are clearly using exposures on the order of milliseconds - but it does make sense to talk about a 3 minute video vs a 30s or 45s one).

      I also apologize if my reply containing the math explanation sounded condescending. I really did not mean it like that - I just wanted to point out that with a bit of math we can estimate the size of the blur - which, combined with the way stacking works, can be used to judge proper video duration. If we used regular stacking without alignment points - then motion blur due to rotation would become an issue after the calculated amount of time. Thing is - we use special planetary stacking software because each sub is geometrically distorted as well as blurred by seeing. This geometric distortion happens because of the tilt component of the seeing wavefront deformation (which is different for every part of the image). It is this geometric distortion and its correction that helps us with rotation issues. A single frame is way too short for any measurable rotation to happen - but the distortion shift can be larger than the total rotation over 3-4 minutes - yet the software deals with it.
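For reference, here is the sort of estimate I mean - a rough sketch of how fast rotation smears a feature at the centre of Jupiter's disc (the ~45" disc diameter and ~9.93 h rotation period are assumed round-number values):

```python
import math

# A point on Jupiter's central meridian sweeps half the disc
# circumference per half rotation, so its apparent drift rate is
# approximately pi * D / P (D = disc diameter, P = rotation period).
diameter_arcsec = 45.0          # assumed apparent disc size
period_s = 9.93 * 3600          # assumed rotation period

drift_per_s = math.pi * diameter_arcsec / period_s
print(f"drift at disc centre: {drift_per_s * 60:.2f} arcsec/min")

# Accumulated blur for plain (non-derotated) stacking:
for t in (30, 180):
    print(f"{t:4d} s of video -> {drift_per_s * t:.2f} arcsec of smear")
```

At roughly 0.24"/min the smear over 3 minutes is getting close to the resolution of a mid-size scope, which is why alignment-point stacking (and WinJUPos beyond that) matters.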
  16. It is possible - but not something you necessarily want to do. Jupiter rotates, and if you stack too long a video you'll get motion blur due to planet rotation. AS!3 deals with some level of rotation, so it can stack 3-4 times longer videos than would otherwise be possible, but to do what you suggest you'd need another piece of software called WinJUPos. You can take your 1 or 2 minute videos and create an image out of each, then load them into WinJUPos and "derotate" each successive image to match the first one. After that you can stack those images together - to produce something equivalent to a stack from one long capture.
  17. Not sure I would go with wide field low power eyepieces on an F/10 scope? There is a limit imposed by the field stop, and wide field implies a short focal length - so the exit pupil goes down as a result. A 24mm 68° and a 32mm 50° Plossl will give you the same TFOV. The Plossl will work just fine at F/10 with no aberrations. The first will have a 2.4mm exit pupil and the second 3.2mm. I think the 3.2mm exit pupil will give the better low power view than the 2.4mm one. Am I being silly for thinking that?
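To put numbers on that comparison - a quick sketch assuming a typical 2032 mm F/10 SCT (the focal length is my assumption; exit pupil is eyepiece FL over f-ratio, and TFOV is approximated as AFOV over magnification):

```python
scope_fl, f_ratio = 2032.0, 10.0   # assumed 8" F/10 SCT

results = {}
for name, ep_fl, afov in (("24mm 68deg", 24.0, 68.0),
                          ("32mm 50deg Plossl", 32.0, 50.0)):
    mag = scope_fl / ep_fl
    exit_pupil = ep_fl / f_ratio       # mm
    tfov = afov / mag                  # degrees, simple approximation
    results[name] = (exit_pupil, tfov)
    print(f"{name}: x{mag:.0f}, exit pupil {exit_pupil:.1f} mm, "
          f"TFOV {tfov:.2f} deg")
```

Both land at roughly 0.8° of true field, but the Plossl does it with the larger 3.2 mm exit pupil.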
  18. PIPP is not a replacement for SharpCap - it is complementary software and you should get to know it eventually. It enables you to split / join / calibrate your video captures. It has a lot of features, like ROI, stabilization of the recording (both surface and COG - not needed with AS!3 as it does the same), a frame selector, and more. As soon as you add your recording (simple drag and drop), it will show you general info on it:
  19. Yes, that is strange; however, software like PIPP should be able to give you both things - number of frames and total duration. The SER file format records a timestamp for each sub (it is actually optional, if I remember correctly - but most software adds that info) - and duration is simply (last timestamp - first timestamp).
  20. I did not recommend 3 minute subs - I recommended 3 minutes of subs (i.e. total video duration). But you are absolutely right - there is a simple way to assess whether one's workflow produces rotation blur, and at which point - just shoot say 5 minutes of video and then use PIPP to create 30s, 1 minute, 2 minute, ... up to the whole 5 minute video (PIPP can export the first N frames without processing - effectively "cutting" the video to length).
  21. Or you could simply calculate it yourself by frames_captured / total_duration
  22. That star is the strongest in the image and it clips. As such - it will easily end up white without any processing (max, max, max). Any color correction that comes after that is likely to boost blue over the other two components, and that will produce a bluish looking star.
  23. I can't really tell what is what in your image above. Is there a Quark combo in the optical train? Do you remove the extensions when you put in the diagonal?
  24. Have 6.7mm and 11mm. Both are excellent. 11mm ES82 is my sharpest eyepiece.