Everything posted by vlaiv

  1. Pardon my ignorance, but what is the difference?
  2. It is derived from two things: https://en.wikipedia.org/wiki/Spatial_cutoff_frequency and https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem In a nutshell - an optical system acts as a low-pass filter (it blurs the image) and has a cutoff frequency, and the sampling theorem states that you can perfectly reconstruct a band-limited signal if you sample it at twice its highest frequency component.
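The two links combine into a simple rule of thumb; here is a minimal sketch of the arithmetic (the example wavelength and F/ratio are assumptions for illustration):

```python
# Cutoff frequency of a diffraction-limited system: f_c = 1 / (lambda * F#)
# Nyquist then asks for sampling at 2 * f_c, i.e. a pixel pitch of
# 1 / (2 * f_c) = lambda * F# / 2.

wavelength_um = 0.5   # 500 nm, middle of the visible band (assumed)
f_ratio = 10          # example F/10 system (assumed)

cutoff = 1 / (wavelength_um * f_ratio)   # cycles per micron
critical_pitch_um = 1 / (2 * cutoff)     # = wavelength_um * f_ratio / 2

print(f"cutoff: {cutoff:.3f} cycles/um")
print(f"critical pixel pitch: {critical_pitch_um:.2f} um")
```

Rearranged, the same relation gives the familiar "optimum F/ratio = 2 x pixel size / wavelength" used elsewhere in this thread.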
  3. I think you missed by a factor of a few. The chart above says that the Home Modded EQ5 drive can be guided at 1.8" RMS, while your PHD2 reported 0.4" RMS - which is about x4 lower. Even the DarkFrame tuned version is not quoted as guiding as low as 0.4" RMS - 0.6" RMS is the figure given. In any case - your guiding is good even if it is not measured quite accurately.
  4. What was the duration of your exposures? 10 minutes is 600s, and 30000 subs out of 600s - that is 50fps, or a 20ms single sub. You should really be talking about at least x4 as many subs (150-200fps) and a 2-3ms exposure length on the Moon.
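The arithmetic above, as a quick check:

```python
# 30000 subs captured over a 10 minute (600 s) run, as described in the post.
total_subs = 30_000
duration_s = 600

fps = total_subs / duration_s    # frames per second achieved
exposure_ms = 1000 / fps         # upper bound on single-sub exposure length

print(fps, exposure_ms)
```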
  5. It doubles that of 2.37"/px, so it is 4.74"/px. That is a good guide resolution for your mount - an EQ5 - and that mount gives good guiding results even if the guide readings might not be as precise. I think that you'll be just fine as is.
  6. Something is wrong with that graph then. A 1.55µm pixel size with a 135mm FL guide scope gives a guide resolution of 2.37"/px. If you look at your stats, 0.08px = 0.40" (I can't really see if it is ' or " - but both are very strange values). 2.37"/px * 0.08px = 0.1896", not 0.4". Maybe you used binning on your guide camera? What numbers did you enter for the focal length of the guide scope and the pixel size in the PHD2 settings?
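The guide resolution figures in these posts follow from the standard arcseconds-per-pixel formula; a minimal sketch:

```python
# Guide resolution: 206.265 * pixel_size_um / focal_length_mm ("/px),
# using the values quoted in the post (1.55 um pixels, 135 mm guide scope).
def resolution_arcsec_per_px(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

res = resolution_arcsec_per_px(1.55, 135)
print(round(res, 2))          # ~2.37 "/px
print(round(res * 0.08, 4))   # a 0.08 px offset corresponds to ~0.19"
```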
  7. It does not work quite like that. When we say that seeing is bad - that means that integrated seeing is bad: the sum of individual distortions over a period of time - usually 2 seconds if we talk about FWHM, or 30ms if we talk about visual seeing. With lucky imaging, we use exposures that freeze that seeing, and the meaning of "bad seeing" is a bit different than in regular imaging or visual. It is related to the percentage of subs we can keep. Good seeing means that we can keep, say, 20% or even 50% of the frames from our recording. Average seeing means that we get to keep maybe 5% or 10%. In very poor seeing - we get to keep maybe 1-2%. That does not mean that we can't take an image, and that image will be much sharper than the seeing suggests (as in general with lucky imaging - where we produce stunning results even in average seeing) - it just means that we won't be able to sharpen the image as much as we would like, because SNR will not be good enough when stacking only 1-2% of the subs.
  8. Here we are discussing the Combo quark - the version without an integrated telecentric lens - hence the talk about using an additional telecentric lens. The thing with Ha etalons is that they ideally require a collimated light beam - or at least very small incidence angles. Most work well with F/20-F/30 beams. That is why the original quark has a x4.3 telecentric lens and is recommended for F/7 scopes, as 7 * 4.3 = 30.1 or ~F/30. The Combo quark is made for F/15-F/20 scopes, but one can experiment with different speeds to see what results one gets. One way to change the F/ratio of the scope is to add a telecentric (barlow-like) element - that changes the focal length. Another way is to add an aperture stop, changing the F/ratio of the telescope that way. As far as imaging goes - we are not concerned with seeing, as the lucky imaging approach is utilized - it uses very short exposures - just a few milliseconds - and that freezes the seeing; poor frames are then discarded and the best are kept and stacked. With this type of imaging you want to optimize the F/ratio with respect to the pixel size used and the wavelength imaged - in this case the 656nm of Ha light. Mind you - the telecentric lens needs to be placed in front of the etalon and not behind it (otherwise the etalon sits in a converging beam; for front-mounted etalons, no telecentric element is needed, as the light already arrives collimated).
  9. 0.49" total RMS is a very good value - but I'm afraid it is not realistic. 0.1s is a very short guide exposure - most people guide with 1s-2s exposures, and those with smoother mounts often use 3s-4s or even longer guide exposures. Your guider does not have sufficient resolution to measure 0.5" RMS with precision. What is your guide camera? 135mm with the usual pixel sizes will give you about 6"/px - and that is too coarse to measure below 1" RMS with precision. Having said all of that, even if the reported figures are not quite correct - I think that you have very good guiding results for an EQ5 class mount. What are your images like? Stars should be round and tight with these guiding results (provided that you image at a sensible resolution - like 1.5"-2"/px).
  10. You can do an even simpler thing that does the same - just take a capture of a color checker and measure any of the gray patches for its R, G and B values. A gray patch should give equal values of R, G and B. It won't on the raw image - but the measured values give you the reciprocals of the weights (divide by the measured R, G and B values respectively). If you fix G to be 1 - you can calculate the R and B weights as well.
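A sketch of the gray-patch method described above, with G fixed to 1 as suggested (the patch values here are made up for illustration):

```python
# Hypothetical mean ADU values measured on a gray patch in a raw capture.
raw_r, raw_g, raw_b = 180.0, 220.0, 160.0

# A gray patch should read equal in all channels, so scale R and B to G.
w_r = raw_g / raw_r
w_g = 1.0
w_b = raw_g / raw_b

# Applying the weights equalizes the channels (all ~220 here).
print(raw_r * w_r, raw_g * w_g, raw_b * w_b)
```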
  11. When you take a still with your DSLR - you set the exposure to some value - say 1/60 or 1/30 - that is 1/60th of a second or 1/30th of a second. I'm fairly confident that when you are taking video with a DSLR you can also adjust the exposure length. It is easy to calculate what it needs to be as a fraction - just note that one second has 1000 milliseconds. 1/60th of a second is then 1000ms / 60 = 16.67ms, and similarly 1/30th is 1000ms / 30 = 33.33ms. If you want to know the "fraction" value of 5ms or 6ms - it goes the same way: 1000ms / 5ms = 200, so 1/200th is equal to 5ms. Similarly - ~1/166 is equal to 6ms. You may notice that this is equal to the max FPS that you can theoretically produce, so if you set the exposure length to 1/60th - you can produce at most 60fps, and for 1/30th - 30fps. Proper lucky-type planetary imaging done with dedicated planetary cameras uses 5-6ms exposures and hundreds of FPS. With a DSLR you'll probably be limited to the said 50FPS, but nevertheless - go with a short exposure like 1/200th (which is 5ms).
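The conversions above can be written as two tiny helpers:

```python
# Shutter fraction <-> exposure-in-milliseconds conversions from the post.
def fraction_to_ms(denominator):
    """Exposure length in ms for a 1/denominator shutter setting."""
    return 1000 / denominator

def ms_to_fraction(ms):
    """Denominator of the 1/N shutter setting closest to a given ms."""
    return round(1000 / ms)

print(fraction_to_ms(60))    # ~16.67 ms for 1/60 s
print(ms_to_fraction(5))     # 200, i.e. 1/200 s
print(ms_to_fraction(6))     # 167, i.e. the ~1/166 quoted above
```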
  12. Hi and welcome to SGL. One has nothing to do with the other. Resolution - or sampling rate - is dictated by pixel size and focal length. You can use a barlow to modify the focal length of the telescope. There is something called the critical sampling rate and you should not go over it, as there is no point - the telescope aperture can't resolve more detail than that. As a rough guideline - your F/ratio should be x4 your pixel size in µm. Exposure length should be as low as 5ms regardless of everything else. This is to freeze the seeing. Longer exposure lengths allow the seeing to change during a single frame exposure, and additional "motion blur" forms because of that. At about 5-6ms you "freeze" that motion, and only the distortion is recorded and not the motion blur due to changes in that distortion. This means that the fps you want to achieve is over 150fps. If your question is only about DSLR usage and the two different modes, then I would choose the higher FPS and, if possible, a raw video format. DSLRs tend to shoot compressed video. Avoid that, as it introduces artifacts in your image (like a poor quality jpeg - the image becomes blocky). Find out the pixel size of your camera and then use a barlow to get a good F/ratio. In most cases, even at the proper F/ratio, planets are small and even a 640x480 format is enough to capture them, so don't worry about that part.
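The "F/ratio = x4 pixel size" guideline turns barlow selection into one line of arithmetic; a sketch with assumed values (the pixel size and native F/ratio below are hypothetical, not from the post):

```python
# Rough guideline from the post: target F/ratio ~ 4 x pixel size in um,
# then pick a barlow that gets the scope there.
pixel_um = 4.3    # hypothetical DSLR pixel size
native_f = 10     # hypothetical F/10 scope

target_f = 4 * pixel_um        # desired working F/ratio
barlow = target_f / native_f   # required barlow factor

print(round(target_f, 1), round(barlow, 2))
```

Here a ~x1.7 barlow is indicated, so a common x2 would be close enough in practice.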
  13. What gear are you using to make the recording? These artifacts are caused by compression of the video. In order to get a clean image you need to record raw uncompressed video instead.
  14. Hi and welcome to SGL. Can't help with the wanted location, but I do want to point out that an image of the Milky Way is not going to represent what you'll see under dark skies. These images are taken with long exposures and processed in software. I'm not saying this to discourage you - just to help you prepare and set your expectations right.
  15. No, you should be fine with those eyepieces. A 76mm f/7.5 has 570mm of focal length. With the x4 powermate that will be 2280mm of effective FL. With a 26mm EP you'll get ~x87.7. That is well within the range of a 76mm scope. You need to check a few things in order to see what might be going on with the x4 powermate:
1. Order of elements. You need to place the telecentric lens before your quark, so the order should be: scope, then some sort of IR/UV filter (you don't need that if you use a front ERF) - it can be a simple UV/IR cut filter or maybe a ~30mm Ha filter for night use - then the telecentric lens, then the quark and eyepiece.
2. Optimum working distance for the x4 powermate. Although a telecentric lens should not vary magnification with distance - they sometimes do. Since you can have significant optical distance between the powermate and the eyepiece - the magnification factor can change. According to this chart, the x4 powermate does not change significantly (up to x4.5 at 100mm separation; the best position is around 25-30mm).
3. Focus position of your telescope. Barlow elements shift focus further out, but a powermate does not. This means that you could be running out of back focus with all those elements in the optical train. If you are using a diagonal - try removing it and see if you can reach focus that way. Also, where is the best focus position that you can achieve? Is it at the point where your focuser is fully racked in? If so - you might need a bit more "in travel" than you have with that scope.
Yes, an aperture mask will reduce visible detail; however, it works fine if you want to observe at low magnifications - like full disk viewing. If 40mm or 50mm dedicated solar scopes show a nice image of the full disk - the same will be true for a 40 or 50mm aperture mask and quark combo. The critical sampling optimum F/ratio depends on wavelength and pixel size.
In regular planetary imaging you have a choice, since you are imaging a whole range of wavelengths (between 400 and 700nm) - and you can aim for a particular part of the spectrum (I usually advise going for 500nm in that case) - but here there is a single wavelength - 656nm - and you should aim for that. The optimum F/ratio is F/17.9, which means you want an F/6 scope and a x3 telecentric lens for optimal working conditions with the ASI174mm camera. The F/15 that you used is not far from that, so for the time being - use that. Using the x4 telecentric lens will give too high a sampling rate and you'll be oversampling. Not sure you want to do that (SNR loss due to oversampling).
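The F/17.9 figure follows from the critical-sampling rule applied at the Ha wavelength; a quick sketch using the ASI174's 5.86µm pixel size:

```python
# Critical sampling: F# = 2 * pixel_size / wavelength, evaluated at
# Ha (656 nm) for the ASI174mm's 5.86 um pixels, as in the post.
pixel_um = 5.86
wavelength_um = 0.656

optimal_f = 2 * pixel_um / wavelength_um
print(round(optimal_f, 1))       # 17.9, the figure quoted above
print(round(optimal_f / 6, 2))   # ~x3 telecentric needed on an F/6 scope
```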
  16. Very close. Not sure how much impact it will have on the final image: I feel that the upper side is maybe 1-2px thinner than the bottom side, so just a minor tweak is needed. Left-right seems spot on.
  17. Not sure, but here is an interesting article: https://physicstoday.scitation.org/do/10.1063/PT.6.1.20190726a/full/ I wonder if energy bleed into gravitational waves could be confirmed by amateur telescopes? That would be a confirmation of GR.
  18. Some of these might be within reach of a very large dobsonian telescope: https://research.ast.cam.ac.uk/lensedquasars/ and are certainly within the grasp of even a moderate size imaging scope. Lucky-type imaging / special processing might be needed to resolve 2/4 components at those scales. I've seen people image / resolve the M87 relativistic jet, but I'm not sure if relativistic effects can be measured from amateur images: https://en.wikipedia.org/wiki/Astrophysical_jet
  19. Not sure that is a good idea - at least not without a full ERF at the front of the scope. What eyepiece have you used with the x4 telecentric lens? If you think that you have "too much magnification" - use a longer FL eyepiece, like a 20mm or 25mm plossl. Maybe it just won't reach focus properly because of the focus shift? If you want to check the effect of F/ratio on the view and you don't have the x4 telecentric lens (you can't get the x4 powermate to work for any reason) - use the x2 ES focal extender and an aperture mask. You have an 80mm F/6 frac. With a 40mm aperture mask, you'll turn that into an F/12 instrument, and then with the x2 ES focal extender into F/24. That should be good for the Combo and still allow you to see the full disk (the focal length will be only 960mm). Aperture mask sizes in combination with the x2 ES will create a range of F/ratios - so you can choose which one you find the best - F/15, F/18, F/20 or F/24 for example.
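A sketch of how the mask sizes map to the F/ratio range quoted above (the mask diameters below are back-calculated for illustration):

```python
# 80 mm F/6 frac: 480 mm native FL, doubled to 960 mm by the x2 ES
# focal extender; the aperture mask then sets the working F/ratio.
native_fl_mm = 80 * 6
extended_fl_mm = native_fl_mm * 2   # 960 mm with the x2 extender

for mask_mm in (64, 53, 48, 40):    # illustrative mask sizes
    print(f"{mask_mm} mm mask -> F/{extended_fl_mm / mask_mm:.1f}")
```

These four masks give roughly the F/15, F/18, F/20 and F/24 options mentioned in the post.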
  20. I had an ST102 fall from about 1m onto the floor (a hard surface, tiled with ceramic tiles). The scope partially hit the floor and partially my foot, as I intentionally put my foot under it trying to soften the fall. One of the scope rings broke into pieces - the way it shattered was very strange - it revealed the low quality of the material used to cast it (probably leftovers from some industrial process). The tube suffered a minor dent and scratch. The lens was perfectly fine and the scope performed the same as before the incident - collimation was perfectly fine after that.
  21. Not sure what the correct color is, but how about this: Or maybe this - a bit more saturated and warmer:
  22. Had a 60mm F/4 guide scope and eventually sold it in favor of an OAG. The OAG cost about the same - maybe even a bit less - and I now use it exclusively for my main imaging scopes. I still have one tiny 32mm guide scope to use with a lens and the AzGTI in EQ mode - simply because the OAG can't fit in that combination. It is important to do a bit of calculation for an OAG. Prisms are rather small, and when people use them with fast scopes - this can lead to the OAG prism being the aperture stop for the guide camera. The prism is 8mm, and that means one needs to place the OAG fairly close on fast systems. Say you image with an F/6 scope - 8mm x 6 = 48mm. The prism needs to be closer than 48mm to the imaging sensor in order to avoid it being the aperture stop for the guide sensor. Another thing is guide resolution - in most cases you don't need as much resolution as an OAG provides, and you can safely bin x2 to improve SNR on the guide star.
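The prism placement rule from the post, as a one-liner:

```python
# To keep the OAG prism from becoming the aperture stop for the guide
# sensor, it must sit closer to the imaging sensor than prism_size * F#.
def max_prism_distance_mm(prism_mm, f_ratio):
    return prism_mm * f_ratio

print(max_prism_distance_mm(8, 6))   # 48 mm for an 8 mm prism on an F/6 scope
```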
  23. That baffled me for a bit as well, but then I did a "litmus test" on the image above and it matches theory very well. The color bar is taken from here: https://en.wikipedia.org/wiki/Stellar_classification I also matched several stars to this scale, checked their stellar class, and was able to guess their temperature correctly just by looking at the image. I guess that the Hubble team emphasized the blue data as it shows star forming regions - young hot stars are much more frequent in these star forming regions - and this happens in the spiral arms because of gravity and the rotation of the galaxy - there are density waves that form those arms, and these disturb gas and dust which then collapses and forms new hot stars. It is still present in the image above, but it is not emphasized as much - it is depicted at "natural" intensity rather than boosted. On the other hand - Ha is boosted, as I did not color manage it - I just blended it into the final color managed image. It is there due to the separate Ha data that was taken. I did my stretch on the luminance data alone and used RGB to produce the color for it. I did a basic levels stretch in Gimp, and the initial stretch was even stronger than in the above image. Only when I added color did I notice I had gone too far, and then I backed off a bit. Here is the original luminance data stretched (strong stretch version): This is just a levels stretch - I used the middle and "bottom" (left) sliders to produce this image - I did not touch the right one, as the core of M81 was close to the saturation point anyway. I used selective sharpening and noise reduction (sharpening in high SNR areas and noise reduction in low SNR areas) - this was done with layer masks, where the mask was the original layer (stretched differently than the original image). A simple stretch and no masks for stars or anything - only levels - and the stars were controlled ok because the data is good.
  24. In the meantime, let me share my processing of the IKO M81/M82 data. I'll do it here because I don't think I should be included in the competition due to this thread (hopefully I'm not breaking any rules by doing this). This is not a fully color corrected processing workflow, and Ha was not handled the way it should be (I mixed it in later, not at the XYZ stage). I just did basic channel balancing, since I don't have the CCM for the camera and filters used to obtain the data. Some colors might be slightly off because of that. I did not do any perceptual adjustments either (I tried RawTherapee, as it has CAM02 adjustments - but I simply can't figure out how it works - or rather how to adjust it for what I believe are the proper parameters in this case). Color management was done completely in math - I did not make a single color adjustment in processing software, except for the Ha data blend at the end - that was done in Gimp (I had no idea what I was doing, but I did manage to blend it in satisfactorily).
  25. What camera do you intend to calibrate? Can you mount it on a simple lens, or is a telescope the only option? What sort of screen will you use for calibration? It should be calibrated for sRGB / D65. Here is what you need to get started. In fact, we can do this step by step together - I can do a color calibration of a regular image to show how it works on a daytime photo, and you can do the same with your camera.
1. Get an image that displays a distinct range of colors in the form of uniform squares or rectangles - they need to be easy to image and to measure for mean values across channels. We can use this resource: https://www.babelcolor.com/colorchecker-2.htm#CCP2_images or we can make our own calibration image.
2. Have your calibration screen show the calibration image in a dark room (the screen should be the only significant light source) and take an image of it with your camera. You'll need either OSC raw data or three filters - R, G and B.
3. Take the calibration image and open it in software that can measure pixel values (averaged over some surface); for each color measure the values of R, G and B and then use this calculator: http://www.brucelindbloom.com/index.html?ColorCalculator.html White reference should be set to D65, Gamma to 2.2 and the RGB model to sRGB. If you measure values in the 0-255 range, check the Scale RGB option (otherwise leave it unchecked for the 0-1 range). When you enter the measured RGB values and click the RGB button - you should get the XYZ values calculated. Above I calculated the (normalized, Y=1) XYZ values for RGB = 1,1,1 - white in sRGB space - which is the white reference for that space - D65 - and you can see the xy chromaticity coordinates being 0.313 and 0.329 (second row), which checks out if we ask google for verification.
4. After you've written down all the XYZ coordinates of the color patches on the calibration image - measure the actual image that you recorded with the camera - raw_r, raw_g and raw_b. Just measure average ADU values - don't use the calculator for this step.
5. Note both sets of values in spreadsheet software - the XYZ values calculated from the RGB values measured on the original calibration image, and the raw RGB values measured from your recording of the calibration image.
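The conversion the calculator performs in step 3 can be done directly; a minimal sketch using the standard sRGB (D65) transfer function and matrix, reproducing the white-point check described above:

```python
# sRGB (0-1 range) -> XYZ under D65, the same computation as the
# Lindbloom calculator set to sRGB / D65.
def srgb_to_xyz(r, g, b):
    # Linearize the sRGB-companded values.
    def lin(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Standard sRGB -> XYZ matrix (D65 white point).
    X = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    Y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    Z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return X, Y, Z

X, Y, Z = srgb_to_xyz(1, 1, 1)                  # the white patch
x, y = X / (X + Y + Z), Y / (X + Y + Z)
print(round(x, 3), round(y, 3))                 # ~0.313, 0.329 - D65
```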