Posts posted by vlaiv

  1. 2 minutes ago, david_taurus83 said:

    I don't understand how an increase of x2 SNR is not beneficial?

    Unless there's another practical method that PixInsight doesn't possess, that's a hell of a lot of subs to manually open up and bin. I collected 1200 x 30s subs on a target this year!

    Has anyone captured binned data and compared that to the same data, captured at the same time at full resolution but binned in software?

    I appreciate the maths might not stack up, but isn't the whole idea of the hobby about creating images pleasing to the eye?! What I mean is: how does one image look compared to the other?

    I don't think the above refers to the usefulness of on-chip binning with CMOS in terms of SNR gain. It refers to the fact that doing it on chip provides only one benefit - smaller files.

    There are several benefits to doing it in software - see my post above.

    Both on-chip and software binning provide the same SNR gain and will be equal if "standard" binning is used.

  2. 50 minutes ago, jimjam11 said:

    My understanding is that this is not optimal. There is limited benefit to binning CMOS at capture time because there is no read noise gain. If you had excellent seeing but captured binned, you would effectively be throwing away resolution (especially up at 2.8"/pp). You also get more control of the binning by doing it post-capture.

     

    (You do of course gain in data volume by software binning but hard disks are cheap)

    Indeed, there is no benefit in doing it on chip, except for file size / download time.

    Doing it in software leaves many more possibilities for data manipulation - like hot pixel exclusion and optimized binning based on SNR (examining the master dark for statistics will tell you how much noise each pixel contains - there are noisier and less noisy pixels, and you can bin adaptively). You can also do a "split" bin instead of a regular one (a better option for preserving sharpness) and of course fractional binning. A sketch of the masked-binning idea follows below.
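    A minimal sketch of 2x2 average binning with simple hot-pixel exclusion, assuming numpy; the boolean hot_mask would normally be derived from master dark statistics (all names here are illustrative, not from any particular tool):

```python
import numpy as np

def bin2x2_masked(img, hot_mask):
    """Average each 2x2 block while ignoring pixels flagged as hot."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2   # crop to even dimensions
    vals = img[:h, :w].astype(float).reshape(h // 2, 2, w // 2, 2)
    good = ~hot_mask[:h, :w].reshape(h // 2, 2, w // 2, 2)
    # per-block average over good pixels only (an all-hot block would yield nan)
    return vals.sum(axis=(1, 3), where=good) / good.sum(axis=(1, 3))
```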

  3. 15 minutes ago, discardedastro said:

    https://www.canford.co.uk/CANFORD-RJ45-CAT5E-SCREENED-PATCHCORDS-Using-Cat5E-F-deployable-cable

    https://www.canford.co.uk/Products/31-850_CANFORD-CAT5E-F-CABLE-Black is the cable used. It's standards-compliant and will carry a gig at the usual distances, but it's stranded-conductor and so flexes really easily.

    That cable does look like something that will do the job - it's made of just the right materials to give it flexibility.

  4. For that you want Cat-5e cable, UTP variety, stranded.

    Cat-6 is usually shielded and more tightly wound - which makes it stiffer.

    With a really flexible cable, you might only reach 100 Mbps speeds, even if your network gear is gigabit. This is because all the "tech" used to enable high speeds makes the cable stiffer - more tightly wound pairs, shielding between twisted pairs and overall shielding (which prevents interference / crosstalk and the like, thus enabling higher speeds).

  5. 1 minute ago, robin_astro said:

    A while back I wondered if it might be possible to use a conventional flat (without the Star Analyser) to get rid of flat defects produced at or close to the camera sensor, i.e. dust and pixel-to-pixel variations. It failed with dust (probably as the geometry of the beam in the dispersed spectrum is different). It should work with genuine PRNU pattern defects, though, if they are an issue, I think.

    Robin

    I was just about to ask that, but yes, good point - with an angled beam, a flat taken without the SA will correct only zero order dust shadows. Maybe handy for background subtraction, although the background should be uniform, so it's just an offset and not a pattern.

  6. I can offer alternative explanations:

    1. Over time one tends to gather more data, so both SNR and processing skills increase. The microlensing artifact is in fact light signal, so it will be more obvious if you have better SNR data or have improved your processing to push the data further. I don't really think it will worsen on its own - that would mean the composition of the coatings is changing, or maybe the coatings are getting thinner or something.

    2. If they are in fact micro lens artifacts, then resolution should have no impact, or at least it should have the opposite effect of what you are describing. A star will be tighter at lower resolution, so light will be more concentrated, and that might lead to stronger reflection / diffraction issues, hence a stronger micro lens artifact. There is another explanation that can account for the difference between the two scopes - the speed of the light beam. I think there is a strong dependence on both the wavelength of light and the speed of the light beam for a particular setup.

    One setup with, for example, an F/5 beam could have issues in Ha and no issues in OIII, while another setup with an F/7 beam could have quite the opposite - Ha without issues and OIII with issues.

  7. 15 minutes ago, robin_astro said:

    Flat correction is useful for slit spectrographs, and I used to flat correct Star Analyser spectra, though I no longer recommend it. The problem is that spectroscopic flats are a mixture of position and wavelength dependent effects. With a slit spectrograph, the position of a particular wavelength in the flat is fixed and defined by the position of the slit, so a conventional flat taken through the spectrograph can be used; but in a slitless system, any particular wavelength can end up at any location across the flat image.

    Professional slitless systems also have this problem, and the solution for them is to build up a 3D flat, i.e. a separate "flat" for each location in the image. This is obviously impractical, so the advice I generally give now is:

    Keep the sensor as clean as possible to minimise dust donuts, and place your reference star and target at the same location in the field. The instrument response will then take care of any vignetting type issues.

    If you want to explore the possible errors due to not taking a flat, or you are forced to measure spectra at different positions in the field, then I suggest taking spectra at different locations and seeing how much the spectrum changes for your particular setup.

    Cheers

    Robin 

    I'm somewhat worried by the pixel-to-pixel differences of my sensor (ASI1600), as I've noticed a particular pattern - a checkerboard pattern is present for some reason. QE does not vary greatly (1-2%), and with regular imaging flat calibration works well, but like you said - here it can be a bit of a problem. I'll try different approaches to see how much difference I get in the spectrum.

    Here is "zoomed in" section of my regular flat for this sensor (note it is in fact mono sensor and what you are seeing is not bayer pattern):

    [image: zoomed-in section of the master flat showing the checkerboard pattern]

    The pixel-to-pixel variation is real and not noise - I used x256 flat and flat dark subs to create it, and each sub has an SNR of about 50 or so; the stack has an SNR of about 850-900 (256 subs give a √256 = x16 improvement, and 50 x 16 ≈ 800, which is consistent). The variations, on the other hand, are a few percent (about an order of magnitude larger than any residual noise).

    Here is "profile" of the flat (flat has been scaled to relative QE):

    [image: intensity profile across the flat]

    I think there are at least a couple of "wavelengths" present here - one very fine, at pixel level, and one maybe 4-5 pixels in wavelength (a quick way to check this is sketched below). In any case, I think it will result in a strange pattern in the spectrum even if I oversample (and I will, by a factor of at least x2-x3, in order to get good dispersion and minimize seeing effects; I can bin the data later to improve SNR back a bit).
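    One quick way to confirm those pattern periods would be an FFT of a single flat row - a minimal sketch, assuming numpy, run here on a synthetic stand-in for one row (checkerboard period of 2 px plus a ~4.5 px ripple; the amplitudes and periods are illustrative, not measured):

```python
import numpy as np

x = np.arange(2048)
row = 1.0 + 0.02 * np.cos(np.pi * x) + 0.01 * np.sin(2 * np.pi * x / 4.5)

row = row - row.mean()                      # remove the DC component
power = np.abs(np.fft.rfft(row)) ** 2
freqs = np.fft.rfftfreq(row.size)           # spatial frequency in cycles per pixel
top = np.argsort(power)[-2:]                # two strongest spatial frequencies
print("dominant periods (px):", np.sort(1.0 / freqs[top]))   # ~2.0 and ~4.5
```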

    I will certainly take flats, and I'll make sure to place the zero order star at roughly the same position for each measurement (SGP does have a crosshair in preview, so I can use that). After that I can try both with and without flat calibration to see which one works better.

  8. While we are on this topic, I hope no one minds me asking another question related to spectrum calibration.

    How do I do flat calibration? The same as with any imaging - just place a flat panel on the scope with the same gear arrangement (focus included) and take subs? And divide the spectrum image, after dark calibration, as normal?

    I'm asking because there is the issue of vignetting - the spectrum can "overlap" parts of the "zero order" vignetting, and flats in this case will maybe produce wrong results? Or maybe it does not matter at all, and we flat calibrate only to eliminate pixel-to-pixel QE variations and dust on the sensor / any filters after the SA?

    I assume it might not matter because any vignetting will be incorporated in the system response, but that means the recording needs to be done with the zero order image at the exact same spot each time (for both the reference star and the one we are measuring)? It also means the spectrum orientation should remain the same with respect to the sensor (but not necessarily to the star field - as we can rotate the whole assembly to get the spectrum clear of background artifacts). That is probably better, as we want the spectrum to be horizontal anyway, I guess.

    Has anyone tried the background removal procedure with a "clear exposure"? I understand it can be tricky, as the SA needs to be removed from the optical train, which can lead to a slight focus change, and one needs to do a linear fit on the zero order stars to match intensity.

    I guess one can monitor FWHM / HFR to get roughly the same defocus of the zero order image.
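    For reference, a minimal sketch of the calibration sequence asked about above, assuming numpy arrays for the frames (the names light/dark/flat/flat_dark are illustrative):

```python
import numpy as np

def calibrate(light, dark, flat, flat_dark):
    """Dark-subtract the light frame, then divide by the normalized master flat."""
    master_flat = (flat - flat_dark).astype(float)
    master_flat /= master_flat.mean()   # normalize so the division preserves flux scale
    return (light - dark) / master_flat
```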

  9. 12 minutes ago, Nigella Bryant said:

    Thanks vlaiv, 

    So the dip I've marked in black is not a camera response but something else. Is it part of Earth's atmosphere, as it doesn't match the reference star spectrum?

    I did not say that - I mentioned that it is in fact camera response (the transition from green to red), and that there is another one like it, but less pronounced - the transition from blue to green.

  10. 3 minutes ago, dph1nm said:

    Sorry vlaiv, but this is misleading. Software binning does not change the S/N per sq arcsec on the sky, so the two halves of your images should look identical. It looks like you threw away 3/4 of the data when downsampling the unbinned shot.

    NigelM

    I'm not sure why you are saying it is misleading.

    You are in fact correct that S/N per surface area is not changed, but the point of binning is to trade "pixel surface area" for S/N improvement - one gets a coarser sampling rate by doing this. I did point out that I rescaled the unbinned shot to make creating the composite easier and to make the real improvement in S/N easier to see. I also made it clear how I did it - by in fact "throwing away" 3/4 of the data. In doing so, though, I downsampled without affecting the "per pixel" SNR of the original image: I did not introduce pixel-to-pixel correlation, nor did I affect the sharpness of the image, apart from any removal of high frequency signal due to the coarser sampling (which in fact does not happen in this image, since the image is oversampled as is).

     

  11. Since you are using an OSC camera, you will have two major "dips" in camera response - the R-G transition and the G-B transition. The one you marked is the R-G transition. Here is a "generic" response curve for that sensor:

    [image: generic QE response curves for the sensor]

    So the first dip (B-G) is at about 480nm, and the second is at about 580nm.

    In your spectrum you can also see the first dip, but it is not as pronounced:

    [image: recorded spectrum profile showing the weaker B-G dip]

    In any case, the way you create the instrument response (both sensor and optics, if the coatings don't have a uniform frequency response, and often they don't) is to take your spectrum and divide it by the reference spectrum of that star. This means that you should record the spectrum of a known star - one that you have a reference spectrum of.

    You will also need to "smooth" the result, or "smooth" the reference spectrum if it is more detailed than your spectrum; otherwise features of the reference spectrum not recorded in your spectrum will become part of the instrument response (but they are not). A sketch of this divide-and-smooth step follows below.
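    A minimal sketch of that divide-and-smooth step, assuming both spectra are 1D numpy arrays already resampled onto a common wavelength grid (the names and the box-smoothing choice are illustrative):

```python
import numpy as np

def instrument_response(observed, reference, smooth_px=25):
    """Response = observed / reference, smoothed to suppress features that the
    reference resolves but our spectrum does not."""
    response = observed / reference
    kernel = np.ones(smooth_px) / smooth_px       # simple box smoothing
    return np.convolve(response, kernel, mode="same")

# a target spectrum is then corrected as: target_observed / response
```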

    Here is a video tutorial for RSpec that describes the process (RSpec is used to make the video, but you can figure out the important steps and reproduce them in the software of your choice):

    https://www.rspec-astro.com/videos/InstrumentResponse/InstrumentResponse.mp4

     

  12. Actually, you can if you are not careful.

    The OAG prism has straight edges, and if you push the prism too far into the light path so that it casts a shadow on the sensor (which can be corrected with flats), it will cause a single diffraction spike perpendicular to the OAG edge that is causing it (not an X type like in reflectors with spider supports).

    Here is an example of it:

    [image: example of the single OAG diffraction spike]

    Or maybe this one shows it better:

    [image: closer view of the same spike]

    or this one

    [image: another example of the spike]

    I was not careful enough, so the OAG prism did shadow a bit of the sensor, and only stars in that part of the frame were affected (and only bright ones).

    Luckily, this can easily be sorted out by moving the prism out of the light path that hits the main sensor.

  13. 2 hours ago, david_taurus83 said:

    Thanks. I will have a look at IntegerResample. Would this be best applied to subs prior to stacking? I don't suppose it offers a batch method..

    Yes - after calibration and before stacking. I don't use PI, so I can't tell if there is an option for batch processing, but I don't see why it would not work.

  14. 1 hour ago, david_taurus83 said:

    Thanks vlaiv. I kind of get the first bit. CMOS binning does increase signal, but it also increases noise by the same amount. You'd still need the same number of subs to overcome the read noise. So no real benefit apart from smaller files.

    No. Don't think in terms of increasing signal and increasing noise. Think in terms of the ratio of the two. Depending on the type of stacking / binning you are using, the signal can increase or stay the same.

    If you are adding things, the signal will increase. If you are averaging things, it will stay the same. Regardless of which method you choose, the SNR improvement is the same.
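    A quick check of why both conventions give the same ratio, with N values, each with signal S and noise σ: summing gives SNR = N·S / (√N·σ) = √N·(S/σ), while averaging gives SNR = S / (σ/√N) = √N·(S/σ) - the same improvement either way.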

    Read noise is not the only source of noise, and if you expose your subs properly it should have the smallest impact (or rather, your sub duration should be such that read noise is no longer the dominant noise source). Therefore binning improves SNR regardless of the fact that effective read noise is increased in the case of software binning. In fact, you should not look at it that way: with software binning, read noise "remains the same", while hardware binning slightly reduces read noise. Both, however, improve the signal to noise ratio.

    1 hour ago, david_taurus83 said:

    In your example, the software binned image is on the left and the simple resample (50% downsample?) is on the right, yes? What process did you use to software bin? I resampled an image last night by 50% as the data wasn't good, but I was mistaken in thinking this is what you refer to as software binning.

    Depending on what software you use to process your data, software binning (true binning) can go by a different name. For example, in PixInsight software binning is called "Integer Resample". You should use integer resample with the average method to bin your data.

    Here is another "synthetic" example:

    [image: four-panel noise comparison montage]

    This montage is composed of 4 panels - the top right is the original panel, containing gaussian noise with a standard deviation of 1. The other panels are produced by:

    1. Binning 2x2 - average method

    2. Simple bicubic interpolation

    3. Cubic-O-MOMS interpolation

    Here are the results of the noise measurement in each sub:

    [image: noise (StdDev) measurements for each panel]

    The first is the reference sub, and it has a StdDev of 1 (noise with value 1).

    Next is binning 2x2 - it results in a very predictable improvement of x2 (the noise is halved - 0.499...).

    Simple bicubic interpolation reduces the noise even further, to 0.4, or by a factor of about x2.5, but in doing so it creates correlation between pixels and slightly blurs the image.

    Cubic-O-MOMS is last; it has the least SNR improvement, a factor of x1.25, but should be the "sharpest" method.

    In this example I did not examine the impact on image sharpness - just on the noise. However, you can clearly see a couple of important points (a sketch reproducing the measurement follows after the list):

    1. Binning produces an exact reduction in noise by a factor of two (if the signal remains the same - average method)

    2. A quality resampling method like Cubic-O-MOMS gives less SNR improvement (in this case a factor of x1.25)

    3. Every downsampling method improves SNR to some extent, and some even more than binning - at the expense of resolution (not seen in this example, but that is what happens).
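    For anyone wanting to reproduce the measurement, here is a minimal sketch, assuming numpy and scipy; Cubic-O-MOMS is not offered by these libraries, so a cubic spline stands in for the "quality resampling" case (exact figures depend on the interpolation used):

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(42)
ref = rng.normal(0.0, 1.0, size=(1024, 1024))      # pure gaussian noise, StdDev = 1

# 2x2 bin, average method: group pixels into 2x2 blocks and average them
binned = ref.reshape(512, 2, 512, 2).mean(axis=(1, 3))

# 50% downsample via cubic spline interpolation, for comparison
resampled = zoom(ref, 0.5, order=3)

print(f"reference StdDev: {ref.std():.3f}")        # ~1.000
print(f"binned    StdDev: {binned.std():.3f}")     # ~0.500 - exactly the predicted x2 gain
print(f"resampled StdDev: {resampled.std():.3f}")  # method dependent; pixels now correlated
```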

  15. 1 hour ago, david_taurus83 said:

    Hoping someone can explain this in a bit more detail without my humble brain crashing. I've seen it mentioned a few times that the ASI1600 CMOS sensor doesn't bin in the same sense as a "traditional" CCD chip, where the CCD gives the added increase in SNR. I've always assumed that when I select 2x2 with the ASI1600, the capture software is "binning" the downloaded image. To my eye the sub does look brighter and appears to have more detail. But I assume the noise is also increased, so all I've achieved is to make the image smaller with no real increase in SNR. I've seen @vlaiv mention on a few occasions that the subs could be processed as normal at full resolution and then resampled later in processing. Are there any steps on how to do this? Any comparison pictures out there? I've seen plenty of chatter about software binning, but does anyone regularly use this approach?

     

    Currently, my image scale is 2.33"/pp at 336mm and 1.39"/pp at 564mm with my 2 refractors, so no need to bin there. But I've been thinking about a longer focal length for small targets, galaxies etc. Something along the lines of a 6" RC at 1370mm, but with an image scale of 0.57"/pp. My guiding would surely be tested at that, plus it would be massively oversampling with my rubbish skies. Hence the question at hand.

    OK, here is about as simple an explanation of binning as it gets, and of the related CMOS vs CCD thing.

    - Binning increases SNR in the same way stacking does (or rather, very similarly; there are small differences not important to this discussion). When you stack 4 images, you are adding / averaging pixel values between those images. Binning is adding / averaging 4 pixel values in a 2x2 matrix - in principle the same thing. So binning always increases the SNR of recorded data predictably - bin 2x2 and you increase SNR by a factor of 2 (same as stacking 4 subs). Bin 3x3 and you increase SNR by a factor of 3 (same as stacking 9 subs, as you are adding / averaging 9 pixels).

    - The difference between hardware binning (CCD) and software binning (CMOS) is in what recorded data is being binned. With CMOS you are binning completely read out data, while with CCD you are binning electrons prior to them being read out. There is a subtle difference between the two: CMOS binning happens after each value has had read noise added to it, while CCD binning is done prior to analog-digital conversion, so read noise is added to the already binned value. The consequence is that a hardware binned pixel (a group of 2x2 pixels) has the regular level of read noise - same as a single pixel - while a software binned pixel has twice the read noise of a single pixel (see the numeric check after this list). In other words, if a CCD camera has a read noise of 5e, when you bin 2x2 and get a sub with fewer pixels (and higher SNR), it will still have a read noise of 5e. On the other hand, if a CMOS sensor has a read noise of, let's say, 1.7e (like the ASI1600), after you bin it, it will behave as a camera with larger pixels but with each pixel having 3.4e read noise. That is the only difference between software and hardware binning (if we bin in the "regular" way; software binning can have a certain advantage related to pixel blur, but that is an "advanced" topic).

    - As for resampling: binning is a sort of resampling, but it is not the only way to resample an image. There are a couple of things that resampling, or rather downsampling, does to your data: it creates fewer pixels covering the same FOV - hence it reduces the sampling rate; it changes the SNR of your data; and it has some effect on pixel blur and possibly introduces correlation between pixels (the last two have a small impact on the sharpness of the image). Different resampling methods give different results in all of these, except producing the same sampling rate. Binning is predictable - it creates a larger pixel, so the SNR increase is known, and it adds pixel blur. It does not add correlation between pixel values. Other forms of resampling can add less pixel blur, so the image will be slightly sharper, but the SNR increase is different (sometimes smaller, sometimes even larger at the expense of noise statistics) and there will be correlation between pixels (which affects the noise statistics of the image).
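    Here is that read-noise difference as a tiny numeric check (values as above; the four CMOS reads add in quadrature, while the CCD adds read noise once after summing):

```python
import math

ccd_rn = 5.0     # e-, read noise of the example CCD
cmos_rn = 1.7    # e-, read noise of the ASI1600
software_binned_rn = math.sqrt(4) * cmos_rn        # a 2x2 bin combines 4 independent reads
print(f"software binned read noise: {software_binned_rn:.1f} e-")   # 3.4 e-
print(f"hardware binned read noise: {ccd_rn:.1f} e-")               # still 5.0 e-
```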

    I regularly bin my data in software when I record with my 8" RC (1624mm FL) and ASI1600. The native sampling rate is around 0.5"/px, and that is of course way too much. Sometimes I bin 2x2 and sometimes 3x3, depending on the conditions of the session.

    Here is an example of a binned image vs an unbinned image that clearly shows the improvement in SNR:

    [image: composite of binned vs decimated sub, scaled down 50%]

    This is a single 60s uncalibrated sub of M51 taken with the above gear. I created a copy of that sub; with the first copy I did a software bin 2x2, while with the other I did a "downsample" in a particular way - one that really does nothing to the image except change the sampling rate: no correlation between pixels, no pixel blur and no SNR change (I simply took every other pixel in X and Y and formed an image out of those - no addition, no interpolation, just x2 lower sampling rate to match the scale of the binned image). Then I pasted part of the second copy over the first copy and did a basic linear stretch to show the result in a way that can be easily compared (both operations are sketched below).
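    A minimal sketch of those two operations, assuming the raw frame is a 2D numpy array (the function names are illustrative):

```python
import numpy as np

def bin2x2(img):
    """Software bin 2x2, average method: SNR improves by a factor of 2."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def decimate2(img):
    """Every other pixel in X and Y: halves the sampling rate, per-pixel SNR unchanged."""
    return img[::2, ::2]
```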

    One simply cannot fail to notice the improvement in SNR (it is a x2 improvement in SNR). Btw, the above image is scaled down 50%, but here is the same image at 1:1 zoom:

    [image: the same composite at 1:1 zoom]

    (Notice that scaling down for display, which also uses downsampling, improves SNR a bit, so the scaled down version above does look smoother than this one at 1:1. To really compare results, we need to measure the resulting noise of the different approaches to determine how each one behaves - I can do that for you as well if you want: make a composite image of 4 different approaches - bin x2, the above splitting of pixels, a simple resample and an advanced resample technique - and do some measurements of noise, as well as star FWHM, so you can see the impact on resolution.)

  16. I think the simplest DIY flat box consists of the following:

    1. Acrylic light diffuser panel

    Do a google search - these should be fairly cheap, and you can get them cut to size.

    2. LED strip

    3. Some sort of box to put it in.

    4. Optionally, a PWM dimmer for the LED strip

     

  17. 24 minutes ago, andrew s said:

    @vlaiv practically, focusing on the spectrum is difficult at the best of times. You could step through with a very accurate automated focuser.

    I am not sure how you would identify the good bits and stitch the spectrum together.

    It might be practically simpler to use a grism or go for a parallel beam arrangement.

    Do you have a specific project in mind or just enjoying seeing what can be done?

    Regards Andrew 

    No specific project in mind. Actually, there is - more of a challenge than a project. Most of my work to date has been just theoretical, and recent discussion fueled my interest in actually going out and capturing some spectra. I've not used my SA200 much so far - one failed capture of a spectrum (at the time I had no idea how to use it properly), very poorly focused. I'll attach the profile of that capture without any processing, just to show that almost no features are visible. I did observe spectra of stars on a couple of occasions.

    I want to try star classification for the purpose of determining approximate distances to stars (photometry and stellar class), but for the time being I will settle for just practicing and capturing some decent spectra. Given the time it takes to set everything up, I want to make the most out of the session instead of just capturing a single spectrum (and a reference for calibration) - that means a list of techniques to try out, and a list of stars. Next will obviously be to see what sort of resolution I can realistically reach using different approaches - I might even have a go with a different scope: an F/10 achromat stopped down a bit. It will have less light grasp, and focusing will be even trickier due to the different focus positions across the spectrum, but multiple focus positions should take care of that as well. It does offer very good theoretical resolution - over R600 (if we exclude the effects of field curvature by means of stepped focusing).

    All of this is really the planning stage for some time under the stars (which has been scarce lately, and not only due to poor weather; most of the time I can't be bothered to set everything up for some reason, so hopefully this fun activity will spark my interest again).

    Here is "failed" spectrum, I don't even remember gear I used to capture it, but I do know it was color camera. I did max stack of channels to produce mono image, but transitions between colors can be clearly seen in outline:

    [image: raw spectrum capture]

    [image: extracted profile of the capture]

    Can't really say I'm seeing any features in it :D

     

  18. Although we are now diverging from the main topic - which was about angles of the exit beam - I'll just comment on the SA200 part.

    In fact, in the light of a new day, I think my original question for this thread was sort of silly. I don't quite understand it yet, but it sort of stands to reason that the exit beam must have exactly the shape implied by the F/ratio of the scope, even in a folded design - because focal length is a function of geometry (and the aperture stays the same).

    Back to the SA200. @Merlin66, this all in fact started with me examining trans spec and wondering why one can't get more resolution than about R200 with the grating (as stated in a recent thread in the spectroscopy section).

    That led me to finally understand the coma and focus issues (which are a consequence of the grating equation and the fact that parts of the beam are not perpendicular to the grating), and knowing all of that, I managed to "tweak" the parameters to get much better spectrum resolution. "Splitting" spectra across different focus points and then combining them should improve things further.

    Here is an example of the calc for my 8" F/8:

    [image: SA200 calc for the 8" F/8, field curvature term outlined]

    I outlined the worst offender - the field curvature of the grating. By using multiple focus points (say 4-5, doing parts of the spectrum separately and then joining them at the end), the next problem becomes coma. If I change the calc so it does not include focus issues, here is the resolution I'm getting:

    [image: same calc with the focus terms excluded]

    We can do a bit more tweaking to further improve things - add more dispersion to bring down the impact of seeing (at the expense of SNR, but that can be sorted with longer integration time / stacking / binning) and control coma with an aperture mask:

    [image: calc with increased dispersion and an aperture mask]

    Now, ~R500 is quite decent for a simple device like the SA200.

    28 minutes ago, andrew s said:

    @vlaiv you can mitigate the spectral curvature by tilting the sensor with respect to the optic axis, but then it makes the rest of the field defocused.

    Also, you need to keep the first order in the image to allow calibration of stars without obvious sharp lines. This limits the grating-chip distance you can have, although vignetting will also be an issue if you go too far.

    Good luck with the project.

    Regards Andrew 

    I thought about that - it's the same thing a prism does when used in a "grism" configuration. Focusing on different parts of the spectrum is a bit more involved, but it should provide an even better result if one chooses many focus points, and it can be automated to some extent with the use of a motor focuser, as positions / shifts can be both calculated and observed/recorded for future use.

  19. Just now, andrew s said:

    Yes, it will be F8. If you are seeing limited, then it does not matter if it's a compound telescope or not, apart from the light loss due to the obstruction.

    In your SA200 calcs just use 200mm F8. If I remember correctly, the main aberration is chromatic coma, which, along with the star size due to seeing, limits the resolution in a slitless converging beam configuration.

    Regards Andrew 

    Actually, the main blur is due to field curvature at those parameters. The seeing limit can be "overridden" by the grating distance - the star profile stays the same in arcseconds and hence pixels, but the dispersion can be increased so one has fewer nm/pixel and hence fewer nm per star blur.

    Due to the diffraction grating equation:

    sin(θ_m) - sin(θ_i) = m·λ / d   (groove spacing d, incidence angle θ_i, diffraction order m, wavelength λ)

    not all rays that would otherwise be focused at the same place (and are in the zero order image) are bent by the same amount, which causes two issues - spectrum coma and focal point shift. It's a bit like in this image:

    [image: ray diagram - central, left and right rays diffracted through the grating]

    The central ray, which is perpendicular, gets bent by the exact angle. The "left" ray gets bent slightly differently than the "right" ray. They no longer intersect at a single point (coma), and they intersect a bit "before" the regular focal plane. This point of intersection depends on wavelength, so we have a curved line of best-focus positions.

    Only one point of the spectrum will be in perfect focus - the others will be slightly defocused (outside focus on one side and inside focus on the other). According to the calculations, in my case that is the "worst" offender.

    That can be compensated for to a degree if multiple images of the spectrum are obtained, focused on different parts of the spectrum instead of the middle, and the final spectrum is then composed out of multiple parts, each around its point of best focus (a rough sketch of the resolution estimate behind these numbers follows below).
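    For the curious, here is a rough sketch of the seeing-limited resolution estimate implied above, using the small-angle form of the grating equation. All parameter values are illustrative, and it only captures the seeing term - not the coma / field curvature terms from the calc:

```python
def seeing_limited_R(lines_per_mm=200,        # SA200 grating
                     grating_distance_mm=50,  # grating-to-sensor distance
                     pixel_um=3.8,            # e.g. ASI1600 pixel size
                     focal_length_mm=1600,    # 8" F/8
                     seeing_arcsec=2.0,
                     wavelength_nm=550,
                     order=1):
    d_nm = 1e6 / lines_per_mm                          # groove spacing in nm (5000 for SA200)
    nm_per_mm = d_nm / (order * grating_distance_mm)   # linear dispersion at the sensor
    nm_per_px = nm_per_mm * pixel_um / 1000.0
    plate_scale = 206265.0 * pixel_um / (focal_length_mm * 1000.0)  # arcsec per pixel
    blur_px = seeing_arcsec / plate_scale              # star blur in pixels
    delta_lambda = blur_px * nm_per_px                 # smallest resolvable step in nm
    return wavelength_nm / delta_lambda

# longer grating distance -> more dispersion -> higher R, as described above
print(f"R ~ {seeing_limited_R():.0f}")
```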

    Anyway, I was under the impression that the F/ratio of the scope can be different from the F/ratio of the exit beam in compound scopes, because I see focal length as a measure of the conversion of angle to linear distance in the focal plane, and I was not sure if the shape of the exit beam had anything to do with the ratio of focal length and aperture - but it might be that those two are tied together and the link is just not obvious to me.

  20. 3 minutes ago, andrew s said:

    For SA200 calcs just use the output focal ratio and main mirror diameter, as this is what counts. With my ODK I just used F6.8 and 400mm diameter.

    What I mean is that you can start from the focal point and project back to the secondary mirror using the output focal ratio. The diameter there will be less than the secondary mirror diameter. Using this and the main mirror diameter you can estimate the main mirror's F ratio and focal length. This assumes it is not oversized.

    Regards Andrew 

    But that is exactly my question. I know that the F/ratio of my scope is F/8 because it is a 200mm / 1600mm scope.

    Does this mean the output focal ratio is going to be the same? It certainly is with a refractor or a newtonian telescope, but folded designs mean the light changes angles (fast primary and magnifying secondary) - and I'm just not sure that the output "focal ratio" will be the same as the focal ratio of the telescope calculated as focal length / aperture.

    Output "focal ratio" is important to SA200 calcs because it produces coma in spectrum and spectrum field curvature.
