Everything posted by vlaiv

  1. No, I did not use a collimator at all. I found a very informative blog post (let me see if I can find the link) that gave good instructions for collimating an RC: https://deepspaceplace.com/gso8rccollimate.php I adapted it a bit for my gear. I did it once with a Bahtinov mask, but found that it was not as precise as I would like, so I switched to SharpCap and measuring FWHM to judge defocus. I did everything as recommended, but once I focus on a star in the center of the field I inspect all corners for FWHM and pick the one with the biggest value to adjust. It actually took me two rather short sessions (about 15-20 minutes each) to get it quite good over the ASI1600 chip. The first was with the Bahtinov mask; I did, I think, 2 iterations and proceeded with imaging. The results were not that good, so the next night I repeated it using FWHM and another 2-3 rounds of tweaking. Results: the top-left corner prior to any collimation, after the first round of collimation (improvement but still some elongation), and after the second round.
  2. Honestly, I don't know - I did not learn it all in one place. It is a product of reading multiple sources on the web and then simply applying basic logic to draw conclusions. Some of it was from personal experience, since I've got the same gear. The first thing I learned, related to the above, is the importance of collimation on RC scopes - if you don't collimate it well you will end up with stars that don't look nice. You can find the formula for reducer magnification (or minification? ) here - http://www.wilmslowastro.com/software/formulae.htm#FR but I've seen it elsewhere on the web. It works for Barlow calculations as well (very similar formula - Barlows have a negative focal length). On the AP website there is a document with technical data for the CCDT67: http://www.astro-physics.com/tech_support/accessories/photo/Telecompresssor-techdata.pdf There you can see that the FL of this reducer is 305mm, and that the housing is such that you need 85mm from the thread to be at the 101mm needed for x0.67. The TS RC specs can be found on the TS website. I've also read in multiple places online that reducers (similar to Barlows, but unlike field flatteners) work over a range of distances. Problems with tilt I've experienced first hand on different scopes, so that is something I pay attention to.
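For reference, here is that relation written out as a quick sketch (a minimal example, not a definitive calculator; the 305mm focal length and the 101mm spacing for x0.67 are the figures from the AP document above):

```python
# Reduction factor of a focal reducer. The same relation works for Barlows,
# which have a negative focal length (and so give a factor above 1).
# d_mm = optical distance from the reducer lens to the sensor (mm)
# f_mm = focal length of the reducer (mm), e.g. 305 mm for the CCDT67

def reduction_factor(d_mm, f_mm):
    return 1 - d_mm / f_mm

# AP's published numbers: 101 mm optical distance gives ~x0.67
print(reduction_factor(101, 305))  # ~0.669
```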
  3. Although the recommended distance is 85mm, I think you will find that your combination of equipment is not going to work well on "default settings". I have the same gear (TS RC 8", x0.67 reducer - the CCD47 variety from TS, and ASI1600). The main problem with this setup is the FOV. The 8" RC has a fully corrected field smaller than the ASI1600 sensor. If you have a well collimated RC and use the ASI1600 at native focal length, you should get round stars up to the edge with slight field curvature showing - stars close to the edge will be a bit larger / slightly out of focus. It is best to focus about 3/4 of the way out from the optical axis - if you use a Bahtinov mask or measure FWHM when focusing, place the star somewhere around 2/3 to 3/4 of the way between the frame center and one of the corners. This way you will "spread the curvature" over the sensor (stars at the edges will be closer to focus, and stars in the center will be slightly out of focus, but almost impossible to tell). This is of course if you will be keeping the whole FOV. If cropping, focus closer to the center of the FOV.
Now, back to star shapes and the reducer. The RC 8" is advertised as covering a 30mm circle without a corrector. I would say that is probably pushing it and corner stars will suffer. The ASI1600 has about a 22mm diagonal. When you apply x0.67 reduction, you are squeezing a larger field onto a smaller surface; this is equivalent to using a larger sensor. How much larger? 22mm / 0.67 = ~32.8mm. That way you are imaging outside the corrected circle, and edge stars will surely suffer because of it. I would say that going up to a 28/29mm imaging circle would be appropriate (even then, prepare for very specific focusing - finding the focus position that gives the best focus over the FOV). This reducer acts as a slight field flattener - but not a proper one, so the field will be a bit flatter but you will still have to deal with field curvature.
Now let's see what sort of magnification one should use to cover a 28/29mm "imaging circle". Again some math (see the sketch after this post): 22/x = 29 -> x = 22/29 = ~x0.76. If you search the web for this combination - RC8 / x0.67 - you will find that people often say the best results are not at x0.67 but at x0.72 - x0.75 (the above is the explanation of why that is so; some will even say it works well at the "native" x0.67, but that is simply because they are using a small sensor - 11-17mm; for APS-C, which is larger than the ASI1600, they probably need to back off to x0.75 or even less reduction). What would be the proper distance for x0.75? Go for a distance of 55-58mm (the exact distance is not of big importance, just use the spacers you have). You can actually experiment: start with 55mm and then move the reducer further and further out until you reach the point where things start to fall apart.
So to reiterate:
     1. Well collimated scope.
     2. Check that there is absolutely no tilt in the imaging train and that the focuser is well collimated as well. I had to switch to a threaded connection because of slight tilt.
     3. Reduce the distance to 55mm and work your way up from there - try to find the sweet spot (best reduction with the least star distortion) - and experiment with focus position for best results.
     4. Understand that you have introduced another optical element into the train and that it is not perfect - star shapes will suffer (just how much depends on the optical quality of the item).
HTH
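Here is the arithmetic from this post as a small sketch (same reducer relation as above; the ~16mm of optical path inside the housing is my inference from the AP document's 85mm / 101mm figures, so treat the distances as approximate):

```python
# Working through the numbers in the post (thin-lens reducer relation).
REDUCER_FL = 305.0            # mm, from the AP data sheet
IN_HOUSING = 101.0 - 85.0     # mm, lens appears to sit ~16 mm inside the housing

def reduction(thread_to_sensor_mm):
    d = thread_to_sensor_mm + IN_HOUSING
    return 1 - d / REDUCER_FL

def effective_circle(sensor_diag_mm, r):
    # field the sensor "sees" on the sky-side of the reducer
    return sensor_diag_mm / r

print(reduction(85))               # ~0.67 - AP's recommended spacing
print(effective_circle(22, 0.67))  # ~32.8 mm - outside the corrected field
print(effective_circle(22, 0.76))  # ~29 mm - about the practical limit

# Distance from the thread that gives ~x0.76 (22 mm diagonal into a 29 mm circle):
print((1 - 22 / 29) * REDUCER_FL - IN_HOUSING)  # ~57.6 mm, i.e. the 55-58 mm range
```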
  4. Nope - round stars are not a good indicator. All that tells you is that the guide error is randomly distributed between DEC and RA (same RMS in both, and both errors being random). There are two indicators of good guiding, and neither is perfectly reliable: guide RMS and star FWHM. Star FWHM will be subject to seeing, but if seeing is relatively constant (or similar between nights), star FWHM will be smaller when guiding is good than when it is not. Guide RMS is a good indicator provided that your guide resolution is sufficient to resolve guide errors and that seeing is not totally poor (not worth imaging, or some serious low level disturbance like a chimney exhaust right in the optical path).
For the most part, seeing is not what causes poor guiding. If you observe visually at high magnification, you will notice that most of the time seeing disturbances are very high frequency - either they blur the target (for example planetary), which means the disturbances are too quick for the eye to see as shimmer, or you can actually spot waving / shimmering / wobbling (it shows well on the Moon), but you will notice that it is also high frequency - changing position many times per second. Such an effect averages out over the duration of a normal guide exposure (2-3 seconds). If you suspect seeing is the culprit, just use a longer guide exposure - if you have fast changing PE it will worsen the RA stats, but it should improve the DEC stats/graph (this is how you will know - the graph will start to look more saw-tooth).
If you have a well behaved mount (no fast PE component - so smooth PE, and not very big), neither PE nor PA will cause a big guide RMS (provided that PA is reasonably good - even if you don't drift align). Most guide errors are caused by rough mount mechanics (poor bearings, rough surfaces, the mount being loose) to a lesser extent, and by wind and other factors (cable snag) to a greater extent. You can see this by looking at the DEC graph. There is no reason for it to jump around if PA error is the only reason for DEC to move out of position - yet DEC tends to jump around quite a lot. Since DEC is usually not moving while tracking, that can only be due to rough mechanical surfaces and external influences like wind; otherwise it would slowly drift out of position just to be brought back (saw-tooth pattern).
There is another thing that limits HEQ5/EQ6 class mounts in guide precision - stepper motor micro-step precision. The best you can hope for from an HEQ5/EQ6 mount, if tuned (belt mod), on a windless night of good seeing, is around 0.4-0.5" RMS - you simply will not be able to do better with these mounts. On such a night you will often see the DEC error being a bit larger than RA (if you have your PEC in place and the mount is not suffering from any short period PE component). That is because RA is in motion and changes micro-steps very fast, about 100 times per second or so, and if a micro-step is missed it will not show in a 2-3s guide exposure (mount inertia will make sure such missed steps are smoothed out). DEC on the other hand is in dynamic balance, and stepper micro controllers are known to sometimes miss a micro-step under load. So DEC will tend to "oscillate" around the position where it needs to be (it misses a micro-step, there is a stronger pull to get it back to where it should be, there is a small jolt due to air motion or vibration, it overshoots to the other side, a guide pulse reacts, sometimes it undershoots, sometimes it overshoots - a sort of random oscillation forms).
There is another component that impacts guiding (if you are using an OAG) - how well the scope is attached to the mount. That includes any slack in the tube rings, any minute flex between the scope and the dovetail bar, and how well the dovetail bar sits on the mount. Scope length plays a part in this because of the longer lever arm. Bottom line: for an HEQ5 / EQ6 class mount, the best you can hope for on a good, still night with a tuned mount is around 0.5" RMS. A bit of breeze will bump this to 0.6-0.7" or above, depending on scope size, weight and connection to the mount. A bit stronger wind and you are looking at 1" or more of guide RMS. For guiding below 0.2" RMS you want a premium mount (preferably on a solid pier, shielded from wind - then you can go as low as 0.1").
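If you want to put numbers on your own guiding, here is a minimal sketch (assuming you have exported per-frame RA and DEC guide errors, in arcseconds, from your guide software's log; the values below are just placeholders):

```python
import numpy as np

# Per-frame guide errors in arcseconds, exported from your guide log
# (hypothetical example values).
ra_err  = np.array([0.31, -0.22, 0.05, -0.41, 0.18])
dec_err = np.array([0.12, -0.35, 0.44, -0.10, 0.27])

ra_rms    = np.sqrt(np.mean(ra_err ** 2))
dec_rms   = np.sqrt(np.mean(dec_err ** 2))
total_rms = np.sqrt(ra_rms ** 2 + dec_rms ** 2)   # combined RMS of both axes

print(f"RA {ra_rms:.2f}\"  DEC {dec_rms:.2f}\"  total {total_rms:.2f}\" RMS")
```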
  5. In general I do like it - I love the detail and the 3D kind of feel to it, and I love the color balance and palette. The Flame bothers me a bit, I must say, composition-wise - it somehow detracts from the image. Both its placement / brightness and its color seem "out of place" in an otherwise superb image. It also looks flat compared to the rest of the image.
  6. Yes, well, given that I suspect they share the design and optical glass elements, and there has also been another thread questioning the quality of the 125mm, I view them as siblings - somewhat like the SW 100ED and SW 120ED. Reviews of both seem to be quite scarce. I've also considered getting the 125mm, but like you said it is in a different category, both in weight and focal length. I'm aiming for an all-rounder, and 975mm of focal length is not quite suited to widefield, though I have no doubt it would be the better scope for planets and DSOs.
  7. Just an update ... After a bit more research (I'm itching to pull the trigger, but cash flow has stalled somewhat, so it's not an easy decision, and there is justification for a fifth scope to be made ... ), it turns out that both the TS 125mm and the TS 102mm have better color correction than their SW counterparts - the 100ED and 120ED - at least according to two sources. For the 102 (the Stellarvue Access 102 seems to be exactly the same scope rebranded, even the focuser is the same, so Kunming Optical / Sky Rover): https://www.cloudynights.com/topic/578894-stellarvue-access-102-first-impressions/ And for the 125 (in French): https://www.webastro.net/forums/topic/161648-lunettes-ts-photoline-12578-ed-ou-1027/ Unfortunately, no data on sharpness is available (except a quoted Strehl of 0.979 for one unit).
  8. Quite right, so my statement "... not energy but rather wavelength" does not make much sense. I was trying to say that it is not only energy that dictates the likelihood of interaction. Here I go again writing nonsense. There is a greater chance of interaction at certain energies than at others?
  9. I don't think it has to do with the individual photons' energy, but rather their wavelength. The longer the wavelength, the less chance there is to interact with matter. It is actually a mismatch between the wavelength of the particle and the atomic structure of the matter. So long wavelengths go through (like radio waves / microwaves), but so do X-rays and gamma rays - very short, so the cross section is small. Anyway, it can easily be checked. Do a Google search for "Is plastic IR transparent" and "Is plastic UV transparent". The results should give you some hints as to which is the more likely culprit.
  10. The plastic cover does indeed remove all visible light, but IR might get through it. Since the mono version does not have an IR protection window, it is a good idea to protect it some other way, like using aluminum foil or something else that is opaque in the IR part of the spectrum. I usually take my darks by placing the camera "face down" on a wooden desk (covered with the plastic cap of course), and have not had issues with dark calibration.
  11. I can't really tell if it is worth it. I do my own testing - Roddier analysis - and so far I've been happy with the quality of my optics. I have an 8" dob, which showed 0.8 or greater system Strehl (so the primary is probably quite a bit better than 0.8). It is the mass produced Skywatcher variety. Quite happy with the scope. I also did the test on the RC 8" from Teleskop Express. It is 0.94 or better system Strehl. Again really pleased with the scope. The TS Optics Photoline 80mm F/6 apo showed 0.8 or greater in the blue part of the spectrum, 0.94 or greater in green and 0.98 in the red part of the spectrum. With today's technology and manufacturing processes, I think it is very unlikely that you will end up with a bad sample. So I'm fairly confident when purchasing scopes these days that I'll get a quite usable instrument (for that sort of money), and I will be able to tell if a scope is below usable optical quality and should be returned. In my view, ordering an additional test is worth it as insurance if you don't want to risk having to return a scope and don't even want to consider that possibility.
  12. Well, it looks like I'm going to have a chance to take a peek through your new FC 100 DF (I guess that is the new scope? ) if you are planning to come to the Messier Marathon on the 14th/15th?
  13. Hi, no, not yet. I did not get much chance to observe this winter - the weather was bad - so the purchase is postponed until further notice. The TS Photoline 102 F/7 is still the main candidate for me. I have an 80mm F/6 apo triplet (also Photoline) but I use that only for imaging. For visual I wanted something with a bit more light grasp and a bit more FL. I'll probably do a sort of review with testing when I purchase the scope, so you will get the chance to see my impressions - unless, of course, you make your purchase first.
  14. I would say that for a 10-sub bias stack a result of 0.011 is pretty close to 0 (if you increase the number of bias subs per stack, the value should get closer to 0). Based on this I also think that the ASI224 has a stable bias - meaning that for the same settings you will get a good bias file each time (a difference close to 0 just means that there is no difference in bias signal between two different sessions). So you can use bias for calibration (although it is not really needed). You can do one more test: select your difference frame, go into Image/Adjust/Brightness/Contrast, and set it so that the min slider and max slider roughly bracket the main part of the histogram. This will make all features in the difference distinct. It should be just noise with no particular features. In the image above there is, for example, visible vertical and horizontal banding - that is characteristic of CMOS sensors, and it shows that the read noise is not quite Gaussian in distribution, but given enough subs it should not show in the final image (both calibration subs and light subs).
  15. You can do that very simply with ImageJ (open source). The workflow (shown step by step in the screenshots): open the first stack of images, average them into a master, close that stack, then open the second stack and repeat to get the second master. At the end take the difference of the two masters and adjust the display range, and you should get something like this:
  16. I'm having a bit of trouble following your workflow - just because I'm not used to PI screens. If I'm reading the screens above correctly, you are concerned about the image below point 6? Or the left part of it - representing the difference between the two bias stacks? For this test, the best thing to do is to make each stack a straight average of the subs - avoid any sort of sigma clip or cosmetic correction (hot/cold pixel rejection, whatever). Then do simple arithmetic: subtract the two images. Now, the image you produced (in a similar fashion) shows something that can be considered bad - a vertical streak. You want your difference to be pure uniform noise, and if you take the average value of all pixels in the difference you want it to be very close to 0 (it will not be exactly 0, but very, very close - something like 0.00000003). If you get any value that is not very close to 0, then the bias is not stable. You can also check whether the darks are stable. Use the same approach: make two stacks of darks at the same temperature and of the same duration. If these turn out to be OK (as is the case, for example, with the ASI1600), then all is fine - just don't use bias frames. You can get all the functionality that you need, like exact photon/electron count, by calibrating lights with a master dark subtracted (a master composed of dark frames without bias subtraction) and divided by a flat (a normalized flat if you want to do photometry or something).
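If ImageJ or PI feels like overkill, the same test is only a few lines of Python - a minimal sketch, assuming the subs are FITS files readable with astropy; the file name patterns are placeholders:

```python
import numpy as np
from astropy.io import fits
from glob import glob

def straight_average(pattern):
    # Plain average stack - no sigma clipping, no cosmetic correction.
    frames = [fits.getdata(f).astype(np.float64) for f in sorted(glob(pattern))]
    return np.mean(frames, axis=0)

master1 = straight_average("bias_session1_*.fits")   # placeholder file names
master2 = straight_average("bias_session2_*.fits")

diff = master1 - master2
print("mean of difference:", diff.mean())   # stable bias -> very close to 0
print("std of difference :", diff.std())    # should look like pure noise
```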
  17. Collimating an RC is not that hard. The first thing is to identify what needs collimating. I can help you with some basics, but there are a few videos on YouTube that will explain it a bit better. The first thing is to see whether the star elongation is due to poor collimation or something else is at play. Inspect your subs, rotate the camera, then inspect the subs again. If in each sub the star elongation points in the same direction across the field, and it rotates when you rotate your camera, then you might have a mount / guiding problem. A 14" RC has a lot of FL, so you might be imaging at very high resolution - any sort of guiding / PA error will show, and show a lot. So if you do have the above condition, examine which direction aligns with the star elongation - if it is DEC, you need better PA; if it is RA, you might have a problem with guiding (this would mean that your periodic error is not guided out completely).
If you have round stars in one corner and elongation in the other corners, it can be collimation related, and to collimate do the following:
Step 1: Put a star in the center of the field and defocus it. Make sure the doughnut is concentric - you achieve this by collimating the secondary.
Step 2: Focus the star in the center of the field and then, using some sort of aid, check how much out of focus it is in the corners (don't change focus, just slew the scope, take a frame and measure the FWHM of a star, or put a Bahtinov mask on and look at the defocus). At this point you can figure out whether you need to collimate the primary or the focuser, depending on how the defocus is distributed among the corner stars. This is a bit tricky to get right, but software like CCD Inspector can help. If there is a linear gradient in defocus (for example the two top corners have the same amount of defocus, and the two bottom corners have the same amount of defocus but different from the top corners), you need to fix tilt, and that is done by collimating the focuser. If on the other hand you have a "bowl"-like distribution that is not centered on the frame center, then you need to collimate the primary. After collimating the primary you need to go back to step 1 and repeat; after collimating the focuser / fixing tilt you don't need to. So it is best to leave tilt collimation for last, and only if you do indeed have tilt. Here is a good guide that will help you out: https://deepspaceplace.com/gso8rccollimate.php
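As a rough aid for step 2, here is a small sketch of the tilt-versus-bowl check (assuming you have measured corner and center FWHM yourself, e.g. in SharpCap; the numbers are placeholders and the interpretation is only a heuristic):

```python
# Corner and center FWHM in pixels or arcseconds (placeholder values).
fwhm = {
    "top_left": 3.6, "top_right": 3.6,
    "bottom_left": 2.4, "bottom_right": 2.5,
    "center": 2.2,
}

# Average left-right and top-bottom differences across the field.
lr_gradient = (fwhm["top_right"] + fwhm["bottom_right"]) / 2 \
            - (fwhm["top_left"]  + fwhm["bottom_left"])  / 2
tb_gradient = (fwhm["top_left"] + fwhm["top_right"]) / 2 \
            - (fwhm["bottom_left"] + fwhm["bottom_right"]) / 2

print(f"left-right gradient: {lr_gradient:+.2f}   top-bottom gradient: {tb_gradient:+.2f}")
# A clear, repeatable gradient in one direction suggests tilt (collimate the
# focuser); corners that differ without a consistent gradient, or a "bowl"
# whose best-focus point sits off-center, point at the primary instead.
```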
  18. And there I was believing we had a convert, given your recent experience with the 6" F/6 Newtonian
  19. Interestingly enough, most of the people in this thread, while standing up in defense of the refracting telescope design (me included), did little to counter the actual arguments presented in the article. Most of the things listed in the article are in fact true. I think it would be in the best interest of the OP and the general community if participants in this discussion either stated their exact disagreement with a particular point made in the article, or provided an alternative view of why refractors are indeed good (a particular use case or even personal preference). I would not focus my attention on the author either - everyone has a right to voice their opinion, and their particular style might not suit us well, but we should be able to distinguish their preferences / views from their actual claims (which we can subject to counter-argument).
  20. I agree with most of the article, except the title - that one is 100% wrong. Does that mean that I don't have and enjoy fracs? Noo ... I have two of them, and I am looking at a third (I will still be holding on to two; the SW ST102 will have to give way ...). Do I have a dob? Yes, and an RC also. While most of the things listed in the article are true, that does not mean that refracting telescopes are no good. Even achromatic refractors. I don't think anyone would be displeased with a 4" F/10 achromat on an AZ4 - both for deep sky and the solar system. Well, if you enjoy observing and you are not after more, better, gooder ...
  21. Honestly, I don't quite understand what you said. But I would like to point something out: 2.4 pixels per Airy disk radius is the theoretical optimum sampling value, based on ideal seeing and the ability to do frequency restoration for those frequencies that are attenuated to 0.01 of their original value. That requires good SNR (like 50-100) as well as good processing tools that enable one to do it. In a real life scenario, seeing will cause additional attenuation of frequencies (but not a cut-off like the Airy pattern), which combines with the Airy pattern attenuation and cut-off. So while ideal sampling will allow one to capture all potential information, it is not guaranteed that all the information will be captured. On the other hand, I just had a discussion with Avani in this thread: where I performed a simple experiment on his wonderful image of Jupiter taken at F/22 with the ASI290, to show that the same amount of detail could be captured at F/11 with this camera. He confirmed that by taking another image at F/11, but said that for his workflow and processing he prefers using F/22 as it gives him material that is easier to work with. So while the theoretical value is correct, sometimes people will benefit from lower sampling - if seeing is poor - and sometimes people will benefit from higher sampling, simply because their post processing and tools handle such data better. So the bottom line is that one should not try to achieve the theoretical sampling value at all costs. It is still a good guideline, but everyone should do a bit of experimenting (if their gear allows for it) to find what they see as the best sampling resolution for their conditions - seeing, but also tools and processing workflow.
  22. The Dawes limit and Rayleigh criterion are based on two point sources separated so that the second is located at the first minimum of the Airy pattern of the first. They are usable for visual observation with relatively closely matched source intensities (like double stars). Sampling at two pixels per that separation is really not what the Nyquist theorem is about. There are two problems with it: 1. It does not take into account that we are discussing imaging - and we have certain algorithms at our disposal to restore information in a blurred image. 2. It "operates" in the spatial domain, while the Nyquist theorem is about the frequency domain. Point sources transform to a rather different frequency representation even without being diffracted into an Airy disk (one can think of them as delta functions). The proper way to address this problem is to look at what the Airy pattern is doing in the frequency domain (the MTF) and choose the optimum sampling based on that.
  23. In some of my "explorations" of this subject, I came up with a slightly different figure than is usually assumed. Instead of using x3 in the given formula, the value that should be used, according to my research, is 2.4. So for a camera with 2.9um pixels, the optimum for green light would be about F/11.2. Here is the original thread for reference:
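For completeness, the arithmetic behind that F/11.2 figure (a minimal sketch assuming the 2.4 pixels per Airy disk radius criterion and ~510nm for green light):

```python
# Airy disk radius at the sensor: r = 1.22 * wavelength * F
# Criterion: 2.4 pixels per Airy radius  ->  pixel = r / 2.4
# Rearranged: F = 2.4 * pixel / (1.22 * wavelength)
# (equivalently pixel ~ wavelength * F / 2, i.e. Nyquist sampling of the
#  diffraction MTF cut-off frequency 1 / (wavelength * F))

pixel_um = 2.9        # ASI290 pixel size
wavelength_um = 0.51  # green light, ~510 nm (assumed value)

f_ratio = 2.4 * pixel_um / (1.22 * wavelength_um)
print(f"optimum F-ratio: {f_ratio:.1f}")   # ~11.2
```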
  24. Yes, unity gain included (it also avoids quantization noise).
  25. Quantization noise is rather simple to understand, but difficult to model, as it does not have any sort of "normal" distribution. The sensor records an integer number of electrons per pixel (the quantum nature of light and particles). The file format used to record the image also stores integers. If you have unity gain - meaning an e/ADU of 1 - the number of electrons gets stored correctly. If on the other hand you choose a non-integer conversion factor, you introduce a noise that has nothing to do with the regular noise sources.
Here is an example: you record 2e. Your conversion factor is set to 1.3. The value that should be written to the file is 2 x 1.3 = 2.6, but the file supports only integer values (it is not really up to the file, but to the ADC on the sensor, which produces 12-bit integer values with this camera), so it records 3 ADU (the closest integer to 2.6; in reality it might round down instead of using "normal" rounding). But since you have 3 ADU and you used a gain of 1.3, what is the actual number of electrons you captured? Is it 2.307692... (3/1.3), or was it 2e? So just by using a non-integer gain you introduced ~0.3e of noise on a 2e signal.
On the matter of the gain formula - it is rather simple. ZWO uses a dB scale to denote the ADU conversion factor, so gain 135 is 13.5dB of gain over the lowest gain setting. So how much is the lowest gain setting then? sqrt(10^1.35) = ~4.73 e/ADU, and that is what the e/ADU graph shows.
OK, so if unity is at 135 (or 13.5dB, or 1.35 bels), we want gain settings where the conversion factor is a power of two (2, 1, 0.5, 0.25 e/ADU ...). Why powers of two? In binary representation there is no loss when multiplying / dividing by powers of two (similar to the decimal system, where multiplying or dividing by 10 only moves the decimal point) - so it is guaranteed quantization-noise free (for higher gains; for lower gains you still have quantization noise because nothing is written after the decimal point). The dB system is logarithmic, like magnitudes, so if you multiply power / amplitude you add in dBs. 6dB of gain is roughly x2 (check out https://en.wikipedia.org/wiki/Decibel - there is a table of values; look under the amplitude column). Since gain with ZWO is in units of 0.1dB, 6dB is +60 on the gain setting. By the way, you should not worry if the gain is not strictly a multiple of two but only close to it: the closer you are to x2, the rounding error only shows up at higher pixel values - and at higher values the SNR is already high (because the signal is high).
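To put the dB arithmetic in one place, a small sketch (the ~4.73 e/ADU at gain 0 follows from unity being at setting 135; real cameras will deviate slightly from this idealized curve):

```python
# ZWO gain setting is in units of 0.1 dB of amplification.
# Unity gain (1 e/ADU) at setting 135 => setting 0 is 13.5 dB below unity,
# i.e. about 10**(13.5/20) ~ 4.73 e/ADU.

UNITY_SETTING = 135

def e_per_adu(gain_setting):
    db_from_unity = (gain_setting - UNITY_SETTING) / 10.0
    return 10 ** (-db_from_unity / 20.0)

for g in (0, 75, 135, 195, 255):
    print(g, round(e_per_adu(g), 3))
# 0 -> ~4.73, 75 -> ~2.0, 135 -> 1.0, 195 -> ~0.5, 255 -> ~0.25
# +60 on the gain setting is +6 dB, i.e. roughly halving e/ADU, which is why
# settings 75, 135, 195, 255 keep the conversion factor a power of two.
```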