vlaiv

Members
  • Content Count: 5,673
  • Joined
  • Last visited
  • Days Won: 2

vlaiv last won the day on February 25

vlaiv had the most liked content!

Community Reputation: 4,245 Excellent

About vlaiv

Profile Information
  • Gender: Male
  • Location: Novi Sad, Serbia
  1. The drift alignment method works with either the guide scope and camera or the primary scope and camera - it is just a matter of convenience which one you use. For good drift alignment and general goto precision, it is important that your mount's tripod is level and stable (not moving). Autoguiding will correct for polar alignment errors, but it can't work miracles - the better your polar alignment is from the start, the less work the autoguider needs to do. There is only one case where you don't want perfect polar alignment: if your mount has DEC backlash. In that case it is better to have a slight polar alignment error, which then causes the DEC error to be to one side only (due to drift). The autoguider will correct this error and the corrections will always be to one side as well. This prevents changes of direction in DEC and sidesteps any issues with DEC backlash. PHD2 has an algorithm called "resist switch" - I think it is the default DEC guiding algorithm - and it is used precisely for this reason: a small PA error produces drift to one side only, and everything else is either seeing, wind or mount roughness.
  2. Lockie has done several of these entry-level scope reviews and I think it's worth having a look at each one of them. You'll find all the scopes mentioned here reviewed in one way or another - ST102, Mak102, etc ...
  3. The optics of the scopes in question are almost certainly the same and in principle they should show you the same thing. What differs is usability and maybe some other features. Let's examine both pairs to see the possible differences.

In the first example, the AZ-EQ Avant is the better mount. Although they look the same, the AZ-EQ is built so that it can be used in both EQ and AltAz mode. With the StarQuest that is probably not the case. I'm not 100% sure on that one, but the StarQuest looks like a regular SW EQ mount in design. Regular EQ mounts from SW can still be used in AltAz mode, but that requires modification and you can't easily make them dual mode. This EQ3 example shows why: https://www.cloudynights.com/topic/272037-eq-mount-in-alt-az-mode The StarQuest appears to be the same design. Then there is the matter of the tripod. The AZ-EQ Avant tripod looks to be better quality, more stable and lighter. As a bonus, you can take the AZ-EQ Avant mount head and attach it to any photo tripod (which means you can upgrade the tripod easily).

In the second case, the scope is the same, but the details on the scope will be different. The regular ST102 comes with rings and a dovetail and can be carried by any mount that accepts a Vixen dovetail. The one on the StarQuest has some sort of bolt connection to the mount head - less versatility. It could also be that some parts on the StarQuest OTA are of lesser quality - for example plastic parts. This is done to save weight rather than to save money, as the StarQuest is a really lightweight mount. Something similar has been done with the Newtonian scopes and Maks that go on these lightweight mounts: redesigned mirror cells that don't support collimation and look like they are made out of plastic. Compare these two images: the second one does not have collimation screws and the back end has been redesigned.
  4. Did you by any chance ever come across an atmospheric correction matrix (or calculate one), to save me the trouble of doing it? I suspect it will be different for different air masses, but just a few should be enough to cover most cases - say 90, 60, 45 and 30 degrees above the horizon (the corresponding air-mass values are sketched below).
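For orientation, here is a minimal sketch (my own illustration, not something from this thread) of the air mass corresponding to those altitudes, using the simple plane-parallel approximation:

```python
import math

# Air mass estimate for the altitudes mentioned above, using the simple
# plane-parallel approximation X = sec(z) = 1 / sin(altitude).
# More accurate formulas (e.g. Kasten & Young) only matter close to the horizon.
for altitude_deg in (90, 60, 45, 30):
    airmass = 1.0 / math.sin(math.radians(altitude_deg))
    print(f"altitude {altitude_deg:2d} deg -> air mass ~ {airmass:.2f}")
# Prints roughly 1.00, 1.15, 1.41 and 2.00.
```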
  5. I don't know the technical details of how the CCM should be defined (as per the standard, or what is usually done). I did not calculate the CCM on white balanced data - it was done on raw data without any other modification, so I suspect it contains the white balancing information "bundled" together. Is it customary to separate the two? First apply white balancing as simple red and blue scaling factors - like I did in the example above to compare it to the matrix approach - and then derive the CCM afterwards?

My approach was fairly simple, and I'm sure that is how the CCM is usually derived: measure the linear values of the recorded and reference RGB, scale those triplet vectors to unit length (to avoid the matrix scaling the values) and then use linear least squares to solve for the transform matrix (see the sketch below). I can now see how the CCM can be a separate thing.

Similarly, I'll now need to derive an atmospheric correction matrix (ACM?) - depending on air mass and other factors - and it is also going to be a "stand alone" type that can be applied after the CCM while the data is still linear. Here is the base code for calculating attenuation that I've found and plan to examine and adopt: https://gitlab.in2p3.fr/ycopin/pyExtinction/-/blob/master/pyExtinction/AtmosphericExtinction.py I just really need a simple attenuation-per-wavelength graph so I can again take a set of (reasonable) spectra that represent colors, apply atmospheric extinction to them and see what color they represent then. Same least squares approach as above.
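To make the least-squares step concrete, here is a minimal sketch of how such a matrix could be solved with numpy. The patch values are made-up placeholders for illustration, not data from this thread:

```python
import numpy as np

# Linear raw camera RGB triplets for a set of colour patches ('measured')
# and the corresponding reference linear sRGB values ('reference').
# These numbers are illustrative placeholders only.
measured = np.array([
    [0.80, 0.35, 0.20],
    [0.25, 0.70, 0.30],
    [0.15, 0.30, 0.75],
    [0.60, 0.60, 0.55],
    [0.40, 0.20, 0.10],
    [0.10, 0.45, 0.50],
])
reference = np.array([
    [0.85, 0.30, 0.15],
    [0.20, 0.75, 0.25],
    [0.10, 0.25, 0.80],
    [0.62, 0.61, 0.58],
    [0.45, 0.18, 0.08],
    [0.08, 0.48, 0.55],
])

# Scale each triplet to unit length so the matrix does not absorb
# overall intensity scaling, as described in the post.
measured_n = measured / np.linalg.norm(measured, axis=1, keepdims=True)
reference_n = reference / np.linalg.norm(reference, axis=1, keepdims=True)

# Solve measured_n @ ccm ~= reference_n in the least-squares sense.
ccm, *_ = np.linalg.lstsq(measured_n, reference_n, rcond=None)
print("Colour correction matrix:")
print(ccm)

# Applying it to linear raw data would then be: corrected = raw_rgb @ ccm
```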
  6. You'd think it would be lightweight, but it is not. Well, it depends on what you compare it to - it is certainly lightweight compared to an HEQ5 or EQ6 class mount, but let's be realistic: can you move it assembled in one hand? The head is approx 4.4 kg, the tripod is 5.7 kg, it has 2 x 3.4 kg of counterweights - let's assume you are using only one, so 3.4 kg. Let's also assume that your imaging rig is something like 1.5 kg max. That totals 15 kg (a quick tally is sketched below). I know that I can lift a compact object weighing 15 kg and carry it the needed distance, but I'm not sure how I would feel about a bulky, sensitive object of that weight. The AZ-GTi with tripod weighs about 4 kg (3.9 kg). Add a 1 kg counterweight, another 0.5 kg for the wedge and 1.5 kg of imaging kit and it totals something like 7 kg. The EQM-35 Pro will certainly have fewer issues with backlash / PEC / tracking and goto precision, and it can carry more. It is a higher class mount - a step up from the AZ-GTi - and that shows both in the weight of the rig and the carrying capacity (both almost double).
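A quick tally of the figures quoted above (just the arithmetic from the post, arranged as a sketch):

```python
# Back-of-the-envelope weight totals from the figures quoted above (kg).
eqm35_pro = {"head": 4.4, "tripod": 5.7, "counterweight": 3.4, "imaging rig": 1.5}
az_gti = {"mount + tripod": 3.9, "counterweight": 1.0, "wedge": 0.5, "imaging rig": 1.5}

print(f"EQM-35 Pro setup: ~{sum(eqm35_pro.values()):.1f} kg")  # ~15.0 kg
print(f"AZ-GTi setup:     ~{sum(az_gti.values()):.1f} kg")     # ~6.9 kg
```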
  7. I think that we need to separate two things: the image that the scope forms at the focal plane, and the image that we see and/or record. Point a telescope at a planet or a star and an image will be formed at the focal plane of that telescope regardless of who is watching or whether there is a camera attached. That image behaves as described above in the post by @chiltonstar - it is the original image convolved with the PSF of the telescope and projected at a certain scale.

The Rayleigh criterion is equally applicable to point sources and to low contrast features in what it describes. You can apply it to two stars (of the same or different intensities) or you can, for example, apply it to line pairs (again of the same or different intensity). It tells you how close these features can be and still appear as separate features in the image. It boils down to whether the graph of light intensity has a single peak or a double peak.

Only after all of this can we talk about the ability of the detector - be that eyepiece + human eye or camera sensor at prime focus (or any other configuration) - to record / detect the resolved feature. The human eye requires certain conditions to capture the important details in the image. It needs a certain magnification and a certain level of contrast - and these two are not in a trivial relationship, as we have seen from the graphs above and the examples with the number 17. Similarly, a camera sensor also needs certain conditions. It needs an adequate pixel size compared to the image scale at the focal plane. It also needs enough integration to achieve good SNR for the contrast of the features, and so on.

In the above sense, aperture is the natural limit of what can be resolved and yes, if we talk about resolving (even very fine contrast details) - a very specific term, different from detecting - it will resolve in the focal plane according to the Rayleigh criterion. If, on the other hand, we talk about recording that light intensity function without loss, we can't use the Rayleigh criterion any more - we need to use Nyquist (see the numbers sketched below). Similarly, if we talk just about detection, again we need to use other criteria. What we should not do is mix criteria and "cross apply" them, as that might lead us to wrong conclusions.
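To put rough numbers on how Rayleigh and Nyquist relate, here is a small sketch assuming a 100 mm aperture, 550 nm light and 1000 mm focal length (illustrative values, not figures from the thread) and the common rule of thumb of two samples per resolution element:

```python
import math

RAD_TO_ARCSEC = 206265.0

# Assumed example values (not from the thread).
aperture_m = 0.100       # 100 mm aperture
wavelength_m = 550e-9    # green light
focal_length_mm = 1000.0

# Rayleigh criterion: smallest angular separation still shown as two peaks.
rayleigh_arcsec = 1.22 * wavelength_m / aperture_m * RAD_TO_ARCSEC

# Nyquist-style sampling: roughly two samples per resolution element,
# i.e. a pixel scale of about half the Rayleigh limit (exact sampling
# criteria are debated; this is just the common rule of thumb).
nyquist_arcsec_per_pixel = rayleigh_arcsec / 2.0

# Pixel size that gives this scale at the assumed focal length.
pixel_um = nyquist_arcsec_per_pixel * focal_length_mm / 206.265

print(f"Rayleigh limit: {rayleigh_arcsec:.2f} arcsec")
print(f"Sampling scale: {nyquist_arcsec_per_pixel:.2f} arcsec/pixel")
print(f"Max pixel size: {pixel_um:.2f} um at {focal_length_mm:.0f} mm focal length")
```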
  8. I believe it is quite the opposite - a high number of captured frames means a better image. There is a limited amount of time you can spend on a target, since planets rotate (even with derotation). A large number of captured frames means a large number of good frames, given an equal probability of a good frame due to the seeing at the time of capture. A large number of stacked frames means better SNR, and better SNR means that you can sharpen the image more without bringing the noise to the surface (roughly sketched below).
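As a rough illustration of why more stacked frames allow harder sharpening: with uncorrelated noise and frames of similar quality, SNR grows roughly with the square root of the number of stacked frames (the single-frame SNR here is an assumed example value):

```python
import math

single_frame_snr = 5.0  # assumed, illustrative value for one raw frame
for n_frames in (100, 1000, 5000):
    stacked_snr = single_frame_snr * math.sqrt(n_frames)
    print(f"{n_frames:5d} frames stacked -> SNR ~ {stacked_snr:.0f}")
```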
  9. The problem with this explanation is the understanding of what it means to resolve and what it means to detect. Take any old double star - I'll detect it with even a very small aperture. I'll say "there is a star there". You'll ask "Is it a double star?", and I'll reply "Dunno, I have not resolved it yet". Take any image of the Encke division and ask the same question - is it a single gap or a dual gap? Or better yet - can you measure its width from any of those images? You can't cheat physics. Telescope aperture is about resolving, not about detecting. Well, that too - you need a large aperture to detect faint stuff, and you won't be able to see a mag 15 star with a 3" scope for sure - but in this context we are talking about resolving. No amount of magnification, nor sharpness / quality of optics, can make a telescope resolve beyond what physics says it is capable of.
  10. Indeed - it can only be used to compare scopes of the same type and the same aperture. Even if the aperture is the same size, Strehl will not be meaningful between scopes that have a different aperture "shape" - like an unobstructed vs an obstructed aperture. Both can have the same quality wavefront, but as two different apertures their MTFs will be different. I would put it like this: Strehl ratio does not depend on wavelength - but there are scopes that have different optical characteristics with respect to wavelength, and hence they will have a different wavefront and Strehl at different wavelengths - namely telescopes with refracting elements. You could similarly say that an SCT will have a Strehl that depends on the back focus used? Spherical correction depends on mirror separation, and the position of the instrument measuring wavefront accuracy (and hence the mirror separation) will change this.
  11. I don't think you are right here. Strehl is a measure of wavefront accuracy and is the ratio of two values for a single telescope: (how much energy ends up in the Airy disk of the actual telescope) / (how much energy would end up in the Airy disk of a perfect telescope of the same aperture). As such it does not depend on aperture size nor on the wavelength used - or rather, it is just a number, and as a number it is comparable across scopes. What it can't tell you are other aspects of the performance of the telescope. It won't tell you what sort of detail the telescope should be able to resolve - that depends on aperture. It does not tell you the light gathering capability of the telescope, and so on.
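For reference, the textbook form of this ratio is usually written in terms of the peak intensity of the point spread function (a closely related way of stating the "energy in the Airy disk" description above):

```latex
% Strehl ratio: peak intensity of the real PSF relative to that of an
% ideal, aberration-free telescope of the same aperture.
\[
  S = \frac{I_{\mathrm{peak,\,actual}}}{I_{\mathrm{peak,\,ideal}}},
  \qquad 0 < S \le 1 .
\]
```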
  12. If you see something like 1/10th of a wave, that should stand for PV wavefront quality. Sometimes an RMS figure is quoted instead, and for the same optics the RMS number is much smaller - a "higher" sounding fraction, like 1/20 instead of 1/10. 1/4 wave is diffraction limited - and these days mass produced Chinese scopes are often better than that, with a Strehl of 0.8 or higher. I would say that 1/6 wave is good optics, 1/8 wave is very good optics and 1/10 wave or better is exceptional optics. These are all PV (peak to valley) values. Two scopes both having 1/6 wave PV won't have the same other measures - like RMS and Strehl. They will be comparable in other measures, but there is no direct "conversion" factor between the two. Look at this table: 1/8 wave of spherical aberration equates to 0.95 Strehl, but the same error from a turned edge results in 0.993 Strehl - which is much better. It also tells you that 1/4 wave PV is 1/13.4 wave RMS and 1/8 wave PV equals 1/27 wave RMS - again for spherical aberration (a rough cross-check of these numbers is sketched below).
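As a rough cross-check of the table values quoted above, there is a commonly used approximation (Mahajan's) that relates RMS wavefront error in waves to Strehl ratio; a small sketch:

```python
import math

# Mahajan's approximation: Strehl ~ exp(-(2*pi*sigma)^2), with sigma the
# RMS wavefront error in waves. The RMS values below are the
# spherical-aberration equivalents of 1/4 and 1/8 wave PV quoted above.
for label, rms_waves in (("1/4 wave PV (spherical)", 1 / 13.4),
                         ("1/8 wave PV (spherical)", 1 / 27.0)):
    strehl = math.exp(-(2 * math.pi * rms_waves) ** 2)
    print(f"{label}: RMS = {rms_waves:.4f} waves -> Strehl ~ {strehl:.2f}")
# Prints roughly 0.80 and 0.95, matching the table values for spherical aberration.
```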
  13. We can talk about what would be proper color in an astro image, and no, there is no way to ensure 100% proper color as our eye would see it, simply because we want to show both faint and bright structure - we often compress dynamic range that is beyond human capacity to see at the same time. Humans can see that much dynamic range, but the eye employs tricks to enable us to do it. In bright daylight our pupils contract; in very faint conditions our pupils dilate and we also undergo dark adaptation. Color perception is different under these conditions - so we can't have it all at the same time.

Anyway, there is what we could call an "as scientifically accurate as possible" rendition of the image. You don't necessarily need to expose each band for the same length of time, simply because the camera won't have the required quantum efficiency in all bands - this is precisely why we develop these transform matrices. What you can do, however, is: if you expose for longer - meaning more subs of equal length - use the average stacking method (instead of addition, but this is probably the default in almost every astro stacking software). If you use different sub durations, just mathematically scale the data while it is still linear. You can think of it as calculating photon flux per second for each sub and then using that as the resulting data (divide each stack by the corresponding sub duration in seconds). When applying flats, make sure the flats are normalized to 1 at full illumination.

The next step would be to apply the above mentioned transform matrix to turn your raw RGB data into linear sRGB data. The step after that would be to apply a correction for atmospheric attenuation. This is what I'll be doing next for the above data - we want the image as it would look from space, not from here on Earth where it depends on the atmosphere and the target's altitude above the horizon (air mass).

In the end you have properly balanced linear sRGB data and it is time to compose an RGB image out of it. The first step would be to create (or use, if you took the LRGB approach) a luminance file. You stretch it, not the colors. If you don't have one, you can calculate it from the linear sRGB data: the Y component of the XYZ color space is closely related to the luminance perception of the human eye. It is also related to the L part of the Lab color space (you can convert it to L from Lab if you want perceptual uniformity - but this really does not matter if you are going to stretch your data, as L from Lab is just a power law over linear data). Luminance only depends on the Y component (and the Y of the reference white point) and a cube-root type function (again there is a split here - linear at low values and a power law otherwise, as it mimics human vision like sRGB).

Ok, so we have our luminance and we stretch it, denoise it, sharpen it - do all the processing until we have a monochromatic image that we are happy with as our final image. The final step is to apply color information to it. We take our linear r, g and b, and first do the inverse gamma function on our processed L - to bring it into the linear regime. Next, for each pixel we normalize the linear rgb like this: r_norm = r / max(r,g,b), g_norm = g / max(r,g,b) and b_norm = b / max(r,g,b). We get our final color components as r_lin = r_norm * l_linear, g_lin = g_norm * l_linear and b_lin = b_norm * l_linear, and then we do the forward gamma transform on r_lin, g_lin and b_lin to get our final R, G and B in the sRGB color space that represents our image (a code sketch of this compositing step follows below).
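A minimal sketch of that final compositing step, assuming linear sRGB-balanced stacks r, g, b and a processed (stretched) luminance image, all as float arrays scaled 0..1. The luminance coefficients and gamma curves are the standard sRGB / Rec. 709 ones; the function and variable names are my own:

```python
import numpy as np

def srgb_forward_gamma(x):
    """Linear -> sRGB encoding (standard piecewise sRGB curve)."""
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

def srgb_inverse_gamma(x):
    """sRGB encoded -> linear."""
    return np.where(x <= 0.04045, x / 12.92, np.power((x + 0.055) / 1.055, 2.4))

def luminance_from_linear_rgb(r, g, b):
    """Relative luminance Y of linear sRGB data (standard coefficients)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def compose(r, g, b, processed_l):
    """Combine stretched luminance with linear colour, as described above."""
    # Bring the processed (gamma encoded) luminance back to the linear regime.
    l_linear = srgb_inverse_gamma(processed_l)

    # Normalize each pixel's rgb by its largest component.
    max_rgb = np.maximum(np.maximum(r, g), b)
    max_rgb = np.where(max_rgb > 0, max_rgb, 1.0)  # avoid division by zero
    r_norm, g_norm, b_norm = r / max_rgb, g / max_rgb, b / max_rgb

    # Scale by the linear luminance and gamma-encode for display.
    return (srgb_forward_gamma(r_norm * l_linear),
            srgb_forward_gamma(g_norm * l_linear),
            srgb_forward_gamma(b_norm * l_linear))
```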
BTW, doing narrowband imaging and then trying to compose it into a true color image is rather counterproductive. There is almost no distinction in color between Ha and SII as they are very close in wavelength (both are just red, and if you keep their true color you won't be able to tell the difference). You can true-color compose an Ha + OIII image and for the most part it will have a color that can be displayed in the sRGB color system. However, as we mentioned before, pure Ha and pure OIII can't be displayed in sRGB.

If you want to know what sort of color you could expect from Ha and OIII data in true color, the chromaticity diagram is your friend: any combination of intensities of the Ha and OIII wavelengths of light will produce colors that lie along the black line in the image above - the line that joins the respective wavelength points on the outer locus. Since we use only linear transforms in the color spaces involved (sRGB gamma is just an encoding step - there is still a linear sRGB part and it uses a regular matrix for the transformation), any linear combination of intensities a * Ha + b * OIII forms a line, remains a line under a matrix transform, and every color we can possibly get will lie along that line. Btw, this is the reason why sRGB colors all lie inside the triangle above - the vertices of that triangle are the sRGB primaries, the pure red, green and blue of the sRGB space, and every possible color that can be made as a linear combination of them lies on lines joining them, which covers the whole triangle (green + red can create orange, and orange connected by a line with blue can create some pale magenta hue, or whatever other combination you can do).

This means that if you image a nebula or galaxy with a DSLR / color camera and do proper color balance, you can actually end up with some whitish / green-teal type of color for Ha+OIII nebulosity. For narrowband images it is still best to keep the false color approach and just explain which color is which gas (HSO, SHO, etc ...)
  14. For a bit more technical discussion on that topic, see here: