Everything posted by vlaiv

  1. I'd say - leave them as they are. They contribute a certain charm to the image - they emphasize how strong that star is and add a very interesting "compositional twist" to the FOV. Flocking the spider won't help. Diffraction happens because of a light / no light situation (blockage), not because of any sort of reflection. Using a completely dark spider that reflects no light will produce the same effect. The only thing you can do to change the appearance of the spike is to change the thickness of the spider vane. The spider acts as a sort of diffraction grating (a single-groove diffraction grating) and the spike that you see is the different orders of diffraction - each longer than the other. If you zoom in on the spike itself - you will see that it is a "series of rainbows" - each next one longer than the one before it: This is why you seem to perceive the spike as being far away from the star - it is there all the way - but you've hit a rainbow part where the camera is less sensitive and the rainbow color blends with the background in that part. In any case - changing spider thickness changes how dense these rainbows are (how much dispersion there is in the diffraction grating). A thin spider vane produces longer rainbows and a thick one produces shorter but more concentrated rainbows. That is about all you can do without starting to curve the vanes to affect how the dispersion works.
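The inverse relation between vane thickness and spike extent can be illustrated with the single-slit diffraction formula (a sketch; `spike_dispersion_deg` is a hypothetical helper, and real spikes involve many diffraction orders, not just the first minimum):

```python
import math

def spike_dispersion_deg(wavelength_nm, vane_thickness_mm):
    """Angular scale (degrees) of the first diffraction minimum for a
    straight vane of given thickness, treated as a single-slit aperture:
    sin(theta) = lambda / a."""
    wavelength_m = wavelength_nm * 1e-9
    a = vane_thickness_mm * 1e-3
    return math.degrees(math.asin(wavelength_m / a))

# Thinner vane -> wider diffraction angle -> longer, more spread-out spike
for thickness in (0.5, 1.0, 2.0):  # vane thickness in mm
    print(thickness, round(spike_dispersion_deg(550, thickness), 5))
```

Halving the vane thickness doubles the angular scale of the pattern, which matches the "thin vane gives longer rainbows" point above.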
  2. Take a look at this as well: https://www.firstlightoptics.com/reducersflatteners/ts-2-inch-1x-field-flattener-for-f4-f9-refractors.html It might be a bit more expensive, but it has been around for quite some time and it has good track record (people even use it for visual because of long working distance) See this as example of how it performs:
  3. It can't be done the way you want it to work. Most FF/FRs are at least x0.8 reduction factor and you want to illuminate a 35mm diagonal through a 2" focuser. With x0.8 reduction, you will effectively squeeze 35 / 0.8 = 43.75mm of focal plane onto the 35mm sensor. A 2" focuser has at most about 47 - but closer to 46mm of clear aperture. It is a 50.6mm tube and you need to put some sort of FF/FR inside that holds lenses in its body. That body will likely be at least 2mm thick - so with 2mm on each side, 50.6 - 4mm = 46.6mm of free aperture at best. Most FF/FRs work at 55mm away from the sensor (or more) and are at least 40-50mm long, so it is safe to assume that there is at least 100mm between the FF/FR aperture and the focal plane. At F/7 (this is a very approximate calculation) - you need 100/7 = ~14mm of additional aperture for the beam to narrow down to the focal plane, which we have calculated to be 43.75mm - so you need at least ~58mm of free aperture at the flattener entrance to get what you want. Not something you can do with 46mm max. First order of business, if you want to go that route - would be to replace the focuser with a 2.5" unit. Then maybe get one of these: https://www.teleskop-express.de/shop/product_info.php/info/p12210_TS-Optics-REFRACTOR-0-8x-corrector-for-refractors-from-102-mm-aperture---ADJUSTABLE.html or this one: https://www.teleskop-express.de/shop/product_info.php/info/p11122_Riccardi-0-75x-APO-Reducer-and-Flattener-with-M63x1-Thread.html
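The aperture budget above can be re-run as a quick sketch (numbers taken from the post; the 100mm FF/FR-to-focal-plane distance and F/7 are the post's rough assumptions):

```python
# All figures in mm, following the post's back-of-envelope reasoning
sensor_diagonal = 35.0
reduction = 0.8
focal_plane_needed = sensor_diagonal / reduction          # 43.75mm of focal plane

tube_inner = 50.6          # 2" focuser drawtube inner diameter
body_wall = 2.0            # assumed FF/FR body wall thickness, each side
clear_aperture = tube_inner - 2 * body_wall               # 46.6mm at best

f_ratio = 7.0
distance_to_focal_plane = 100.0   # assumed FF/FR entrance to focal plane
beam_narrowing = distance_to_focal_plane / f_ratio        # ~14.3mm of beam taper

needed_at_entrance = focal_plane_needed + beam_narrowing  # ~58mm required
print(round(needed_at_entrance, 1), needed_at_entrance > clear_aperture)
```

The required ~58mm entrance aperture exceeds the ~46.6mm a 2" focuser can offer, hence the suggestion to move to a 2.5" unit.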
  4. I might be wrong, but I believe that AS!3 has surface stabilization turned on by default? In any case - if your target drifts somewhat over the course of the video - parts that are not captured during the whole video will be clipped as they will have fewer frames to stack. Not sure if this behavior can be altered, but one solution is to make sure your tracking is good enough to keep the region of interest on the sensor the whole time.
  5. I did not read the whole thread, but I suspect you are debating 102 vs 127 and collimatable vs non-collimatable (if those two are even proper words ). I don't remember seeing a 127 that can't be collimated, but I do know for a fact that there are two types of 102 - one with and one without collimation screws. In fact SkyWatcher released a whole line of scopes without collimation screws - Newtonians and that Mak, all of which sit on their lightweight mounts - like AZ Pronto and Starquest. That is the Starquest (image is from a YouTube video, it is hard to find the back side of this model in a regular image). The AZ Pronto line is clearly labeled - having S at the end of the model names. FLO website clearly states that these don't have collimation ability: Interestingly - the Mak page does not say this, but it is also labeled with S: Another sign for the 102 that I managed to track down - is the shape of the front of the scope: vs regular Mak - the 102 seems to have a smooth edge at the front of the scope. In any case, I ended up purchasing the regular OTA version although the bundled one was more affordable and with more accessories (that I wanted) - just to have the ability to collimate the scope if needed. I haven't seen any pictures or otherwise found evidence of a 127 without collimation ability.
  6. The second example that I gave is from optics that is quite a bit less corrected than yours. By adding a UV/IR cut filter you can improve things considerably. Sure - an ED doublet will never be as corrected as an APO triplet, but it can come really close and star bloat can be kept at unobtrusive levels. Don't look at my technical ramblings above in light of your work - but rather as an explanation of what I did and why - the main point was to determine where the star bloat comes from, and the conclusion is - from lack of a UV/IR cut filter.
  7. @Swoop1 Here is another example of such comparison on some stars that I did some time ago: This shows very strong chromatic aberration (unlike ED glass) - from left to right: red, green and blue channel. You can see that red is a bit more bloated than green and blue is even more bloated (even higher defocus). When you combine such channels - you end up with this: See how you can see chromatic aberration in stars - there is purple halo around big one and others have reddish ring around them.
  8. Ok, so first order of business is to include a UV/IR cut filter. I was just trying to ascertain whether you used a filter or not. Doublet scopes, even better corrected ones like ED scopes, can only bring two wavelengths of light to a common focus. The above curve represents "defocus" depending on wavelength of light. I drew two red lines - examples of where you can focus your scope (literal focus position by moving focus in / out). If you focus at the bottom line then green will be in focus while red and blue parts of the spectrum will not. If you focus at the upper line - then things will be a bit more balanced - green will be a bit out of focus but so will red and blue. Anyway - by examining stars per separate channel - you can see how much defocus there is in each channel and you can sort of guess things - like how well corrected the scope is, if one used a UV/IR cut filter and so on. This is because the above curve is not symmetric - for the same focus shift, blue tends to be more defocused than red. This asymmetry is bigger if you don't use a UV/IR cut filter. In any case - I tried interpreting what was going on in your case, but I got it wrong. There are several variables and given that star bloat is similar in different channels (wavelengths of light) - I assumed that you in fact used some filter and that any small difference is due to how the scope is corrected (the above curve can be tilted more towards short wavelengths or long wavelengths) and how well you focused. Sometimes it is very hard to find the best focus spot with doublet scopes - precisely because there is no best focus - you have to judge visually or let the computer calculate based on stars in the image and take some "best" average of star profiles. Btw - here is an image of different modes of optimization of a doublet scope: So you can have the C-e line brought to common focus, the C-F line or the d-F line for example - which tilts this curve differently (and makes different bloat per channel).
  9. Yes, but I don't think it was the case - here is a per channel comparison of the largest star: There are two green, one blue and one red image of that star extracted from the bayer matrix of the posted sub. It looks like a similar level of bloat in green (1 and 4) and red (3 - based on background level). Blue seems tightest and my guess would be that this is due to correction of the scope and focusing.
  10. @Swoop1 Did you use UV/IR cut filter? If not - star bloat can be because of that. ASI290MC does not have one inbuilt.
  11. Make sure you have a UV/IR cut filter. Your camera is mono and it has only an AR coated window, and you will be using it with refractive optics (even if it is a well corrected one - you still want to filter out the IR and UV parts of the spectrum to remove excessive bloat/blur). Other than that - nothing special. Set up your mount in EQ mode, do a decent polar alignment and start off with shorter subs (15-30s) - see how you get on with those.
  12. Magnification of a barlow is affected by the distance between its lens and the focal plane and depends on the focal length of the barlow lens itself. In order to calculate exact magnification - you need to know this focal length of the barlow element. Some companies publish this data. For example, the Baader VIP barlow has a focal length of -65.5mm. Note that the focal length is negative - this is because a barlow is a negative lens (it diverges rays rather than converging them like a positive lens). In any case, magnification given by a barlow lens is calculated like: magnification = 1 - distance / focal_length (you must use the negative focal length in the above equation). For the Baader VIP barlow to work as prescribed, the distance between the barlow element and the focal plane must be: 2 = 1 - (X / -65.5) => X / -65.5 = -1 => X = 65.5mm. If you change that distance to say 80mm you will actually have: magnification = 1 - (80 / -65.5) = ~x2.22 Prescribed magnification is usually achieved when you place the focal plane at the "shoulder" of the eyepiece inserted into the barlow lens - or when you put the focal plane at the end of the barlow body: To answer your question - if you place the barlow in front of the diagonal you will get quite a bit bigger magnification than the prescribed one.
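A minimal sketch of the magnification formula above, using the -65.5mm figure from the post:

```python
def barlow_magnification(distance_mm, barlow_focal_length_mm):
    """Magnification of a barlow: m = 1 - d / f, where f is negative
    because the barlow is a diverging element."""
    return 1 - distance_mm / barlow_focal_length_mm

f = -65.5  # Baader VIP barlow element focal length (mm), per the post

print(barlow_magnification(65.5, f))  # prescribed x2 at 65.5mm spacing
print(barlow_magnification(80.0, f))  # ~x2.22 with extra spacing
```

Any extra optical path (like a diagonal between barlow and eyepiece) increases `distance_mm`, and the magnification climbs accordingly.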
  13. AstroImageJ is certainly worth looking into - it has a lot of astronomy related tools, mostly aimed at photometry and astrometry (it uses an external plate solver I believe). However, this blink sort of thing is rather easy to do in regular ImageJ (the above AstroImageJ is just a bunch of tools built on top of regular ImageJ - both are open source and quite extensible with plugins and macros) - it has a feature called Stack - which represents a sequence of images that you can work with. For example, to quickly see if there is something interesting in the data - I'd use a plugin that registers frames based on some features (it'll use stars if you select local maxima and tweak some parameters) and then you can simply create a standard deviation projection of that aligned stack. This will show if there are significant outliers in any of the subs. If there are any - then you can start "blinking" / searching for them and so on.
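The standard-deviation projection idea translates directly to NumPy (a toy sketch on synthetic, already-aligned frames - registration itself is left to ImageJ or similar):

```python
import numpy as np

# Toy "stack" of aligned frames: flat noisy sky plus one frame
# containing a bright transient (e.g. an asteroid or satellite glint)
rng = np.random.default_rng(0)
stack = rng.normal(100, 5, size=(10, 64, 64))   # 10 frames of 64x64
stack[3, 30, 40] += 500                         # outlier in frame 3

# Standard deviation projection: pixels that vary a lot across the
# stack light up, flagging candidate positions to "blink" through
std_proj = np.std(stack, axis=0)
y, x = np.unravel_index(np.argmax(std_proj), std_proj.shape)
print(y, x)  # -> 30 40
```

A pixel that is steady across all subs has a small standard deviation; anything that appears in only a few frames stands out sharply in the projection.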
  14. Very good point from @The Lazy Astronomer above - what's been said is a lower bound (and not a strict one). If you feel comfortable going with longer subs - by all means. There will be fewer of them to store and manage if you go with longer subs. On the other hand, in case of an accident (like bumping the mount or someone shining a torch towards the telescope) - there is less data to waste if you go with shorter subs (it is better to discard 1 minute than 5 minutes of your imaging time). Satellites and passing planes are handled by sigma rejection, so you should not count those as "discards".
  15. In that case - just check the ADU value of a highlight in your image (star core): if it's 65000-something - divide the background value by 4 to get electrons; if the highlight is instead around 16384 then use the number you measure directly as the average / median background (select a patch of empty sky to measure the value).
  16. Not really sure, but all it really takes to measure it is a calibrated sub from one of your previous recordings and knowledge of the gain you used. ZWO publishes read noise vs gain for their models - so you can use that: and all you need to do now is to measure sky background in electrons - which is the tricky part because you need to properly convert the ADU units that you measure back to electrons via e/ADU, which is also published for selected gain by ZWO: Once you have average background signal and read noise - then it is easy. Say your background is 200e and your read noise is 1.5e. The important thing to note is that LP noise is equal to the square root of the sky signal, so LP noise will be ~14.14 (square root of 200). That is ~x10 larger than read noise, so you can halve LP noise, which means 1/4 of background signal (signal is square of noise) - in the above example you can use 1/4 of the exposure time you used to make that sub which has 200e background and 1.5e read noise. Makes sense? Btw - there are a couple of threads on SGL that go into depth on how to measure background / convert from ADU back to electrons (it's sometimes tricky as it depends on bit depth of your camera and so on) - so do a search and see what you can find. There might even be software that will do the above for you. I've glanced over some discussions - maybe SharpCap can do it or some other similar software?
  17. I'm saying that the decision on individual exposure does not depend on how faint your target is. Here is a simplified explanation. Say you take one 300s exposure and you take two 150s exposures and add those 150s exposures (not average, but simply add them, although averaging and adding is really the same - but let's keep it simple). The only difference between those two cases - 300s in one go or 300s in two goes - will be how much read noise there is. Everything else is the same - in both cases you accumulated 300s of everything - target signal, sky signal, dark signal ... it all grows linearly with time and it does not matter if you split accumulation into multiple parts. All except read noise. Read noise grows with number of exposures as each exposure adds one "dose" of read noise. Ok. Now onto the next step. Noise adds like linearly independent vectors - like square root of sum of squares. Same thing as Pythagoras' theorem, right? In the above image AC is larger than both BC and AB, as is FD compared to EF and DE - but notice one thing - as one side of the triangle becomes much shorter than the other - the hypotenuse starts being just a bit longer than the other side. This is the key for determining sub duration - you want the hypotenuse (total noise) to be almost the same as the largest noise source (LP noise) - and this happens when the short side of the triangle (read noise) is much smaller than the long side (LP noise). This only works when stacking - it makes no sense for a single exposure. The final difference in noise in the stack (signal will be the same regardless - there is a certain amount of signal in 300 seconds however you slice it) will be small when read noise is sufficiently small compared to LP noise.
Ratio of the two is important, and I advocate a ratio of 5 - have your LP noise at x5 that of read noise, then total noise will be just: total = sqrt(lp_noise^2 + (lp_noise/5)^2) = sqrt(lp_noise^2 * (26/25)) = lp_noise * sqrt(26/25) = lp_noise * 1.0198 = lp_noise increased by ~2% You'll be hard pressed to see a difference of 2% in noise increase - if read noise is 1/5 of LP noise - it will be as if LP noise is 2% higher and there is no read noise. Makes sense? In any case - target brightness does not play a part in the above explanation unless the target is brighter than the sky, and then we should use target shot noise to swamp read noise rather than sky shot noise / LP noise.
  18. Does not depend on target unless target is brighter than light pollution (which is rarely the case). Read noise has to be compared to highest source of noise that depends on time (dark noise, target shot noise or LP / sky shot noise, last one is usually the highest when we use cooled cameras).
  19. Depends on your setup and best approach is to measure it.
  20. Looks to me like some sort of cable snag. You have the same issue in both RA and DEC - which is unusual because DEC should really "stay put" without too many corrections, however your graph is almost the same in both axes: I'm guessing that the cable is pulling on the OTA in such a way that it moves both axes, and when you do an RA correction - it actually manages to make the cable give a little, and it takes more for DEC to recover - then the cable tightens up again, the RA correction manages to get a bit more slack out of the cable and DEC slowly follows. Binding would produce similar RA behavior, but I'm not sure it would have the same impact on DEC though.
  21. It looks like it happens fairly regularly - here is an interesting pass on the 16th of February next year:
  22. Interesting question, and yes - you have the answer - do shoot the moon separately and the Pleiades separately and do a composition, but the main question for me is how often does such a phenomenon occur? Lunar declination can reach 28 degrees and the Pleiades are at 24 degrees - so it is possible - but for the moon to be in that exact spot in the sky - I guess it's not that often?
  23. @OK Apricot Here is a summary on my part: - It is ok to have small pixels at long focal length as long as you understand that you will be over sampling, and that the way to recover SNR lost to this is to bin your data afterwards - You should aim at about ~1.2"/px sampling rate with an 8" EdgeHD - but the actual sampling rate that you should strive for can be measured given your conditions by the following formula: FWHM you usually achieve / 1.6 = sampling rate. If you manage 2" FWHM on average, then you should bin your data to be close to 2 / 1.6 = 1.25"/px. Don't stress too much if you are off by some margin, and I advocate slight under sampling rather than slight over sampling as it will produce a better looking image in post processing - you will be able to keep the noise down better and apply better sharpening. - Keep your guide RMS as low as possible. The rule of thumb Olly mentioned - is just a rule of thumb - the reality is, lower RMS = better image, or rather tighter stars / smaller FWHM. Seeing, scope aperture and mount performance add up in a certain way (not like normal numbers but rather as square root of sum of squares) so that if any of the three components is small compared to the others - its contribution diminishes - so you don't need to go overboard chasing good RMS if your seeing is not very good or you are using a small aperture - like 80mm or so. For most setups, once guide RMS is at half the sampling rate (provided that the sampling rate is sensibly chosen) - its impact simply starts to diminish - hence that rule of thumb.
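The FWHM / 1.6 rule and the bin choice can be sketched like this (the native pixel scale in the example is an illustrative assumption, not from the post):

```python
import math

def target_sampling(fwhm_arcsec):
    """Post's rule of thumb: optimal sampling rate ~= typical FWHM / 1.6,
    in arcsec per pixel."""
    return fwhm_arcsec / 1.6

def bin_factor(native_arcsec_per_px, fwhm_arcsec):
    """Smallest integer bin that reaches (or slightly passes) the target
    rate - slight under-sampling preferred, per the post."""
    return max(1, math.ceil(target_sampling(fwhm_arcsec) / native_arcsec_per_px))

# Illustrative (assumed) numbers: ~0.3"/px native scale, 2" FWHM seeing
print(target_sampling(2.0))   # -> 1.25
print(bin_factor(0.3, 2.0))   # -> 5
```

With a very over-sampled native scale the bin factor comes out large, which is exactly the "small pixels at long focal length, then bin" scenario from the summary.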
  24. Could be, it is really hard to tell as the differences are very tiny and they arise from different sources. Sometimes the choice of algorithm can have a significant impact. Level of stretch and amount of sharpening also, so processing must be accounted for as well. That trick with Fourier analysis is a very good one as you can vary how much of the higher frequencies you decide to cut off, thus allowing you to "dial in" the sampling rate - but it is best done on linear data (as most measurements are).
  25. That was to emphasize that if you are properly sampled and you remove high frequencies, it will make a difference - but if it does not make a difference - well, you are not properly sampled and there is no data in the high frequencies - which means you are over sampled rather than under sampled (under sampling will have data at all frequencies but will exhibit aliasing artifacts as well).
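That frequency-content test can be sketched with NumPy (a toy check; `highfreq_power_fraction` is a hypothetical helper - a smooth, over-sampled image has essentially no power near the Nyquist edge of its spectrum):

```python
import numpy as np

def highfreq_power_fraction(img, cutoff=0.5):
    """Fraction of image power above `cutoff` of the Nyquist frequency.
    Near-zero -> nothing lives at high frequencies -> over-sampled."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spectrum) ** 2
    ny, nx = img.shape
    yy, xx = np.mgrid[:ny, :nx]
    # Normalised distance from the spectrum centre (1.0 = Nyquist edge)
    r = np.hypot((yy - ny / 2) / (ny / 2), (xx - nx / 2) / (nx / 2))
    return power[r > cutoff].sum() / power.sum()

# A broad Gaussian blob stands in for an over-sampled star image:
# almost all of its power sits at low spatial frequencies
y, x = np.mgrid[:128, :128]
blob = np.exp(-((y - 64) ** 2 + (x - 64) ** 2) / (2 * 10.0 ** 2))
print(highfreq_power_fraction(blob) < 0.01)  # -> True
```

On real data you would run this on a linear stack: a meaningful high-frequency fraction means cutting those frequencies will visibly change the image; a negligible one means the data was over-sampled to begin with.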