Everything posted by vlaiv

  1. Indeed, no benefit in doing it on chip, except for file size / download time. Doing it in software leaves many more possibilities for data manipulation - like hot pixel exclusion, or binning optimized for SNR (examining the master dark for statistics will tell you how much noise each pixel contains - there are noisier and less noisy pixels, and you can bin adaptively). You can also do "split" binning instead of regular (a better option for preserving sharpness) and of course fractional binning.
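A minimal numpy sketch of the hot-pixel-aware software binning described above (the function name and the idea of passing a boolean mask built from the master dark are my own illustration, not any particular tool's API):

```python
import numpy as np

def bin2x2_excluding_hot(img, good):
    """Average-bin a mono frame 2x2, skipping masked (hot/noisy) pixels.

    img  : 2D float array, a dark-calibrated sub
    good : 2D bool array, False where the master dark flagged a bad pixel
    """
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop to even size
    img, good = img[:h, :w], good[:h, :w]
    # Per 2x2 cell: sum of usable values and count of usable pixels
    vals = (img * good).reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    cnts = good.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    return vals / np.where(cnts == 0, np.nan, cnts)  # mean of good pixels only
```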
  2. That cable does look like something that will do the job - it's made of just the right materials to give it flexibility.
  3. For that you want Cat-5e cable, UTP variety, stranded. Cat-6 is usually shielded and more tightly twisted - which makes it stiffer. With a really flexible cable, you might only reach 100 Mbps speeds, even if your network gear is gigabit. This is because all the "tech" used to enable high speeds makes the cable stiffer - more tightly twisted pairs, shielding between the twisted pairs, and overall shielding (which prevents interference / crosstalk and such - thus enabling higher speeds).
  4. I was just about to ask that, but yes, good point - with an angled beam, a flat taken without the SA will correct only zero-order dust shadows. Maybe handy for background subtraction, although the background should be uniform, so it's just an offset and not a pattern.
  5. I can offer alternative explanations:
1. Over time one tends to gather more data, so both SNR and processing skills increase. The microlensing artifact is in fact light signal, so it will be more obvious if you have better SNR data or if you have improved your processing enough to push the data further. I don't really think it will worsen on its own - that would mean the composition of the coatings is changing, or maybe the coatings are getting thinner or something.
2. If they are in fact microlens artifacts, then resolution should have no impact, or at least the opposite effect of what you are describing. A star will be tighter at lower resolution, so light will be more concentrated, and that might lead to stronger reflection / diffraction issues, hence a stronger microlens artifact.
There is another explanation that can account for the difference between the two scopes - the speed of the light beam. I think there is strong dependence on both the wavelength of light and the beam speed of the particular setup. One setup with, for example, an F/5 beam could have issues in Ha and no issues in OIII, while another setup with an F/7 beam could have quite the opposite - Ha without issues and OIII with issues.
  6. I'm sort of worried by the pixel-to-pixel differences of my sensor (ASI1600), as I've noticed a particular pattern - a checkerboard pattern is present for some reason. QE does not vary greatly (1-2%), and with regular imaging flat calibration works well, but like you said - here it can be a bit of a problem. I'll try different approaches to see how much difference I get in the spectrum. Here is a "zoomed in" section of my regular flat for this sensor (note it is in fact a mono sensor, and what you are seeing is not a bayer pattern): The pixel-to-pixel variation is real and not noise - I used x256 flat and flat dark subs to create it; each sub has an SNR of about 50 or so, and the stack has an SNR of about 850-900. The variations, on the other hand, are a few percent (about an order of magnitude larger than any residual noise). Here is a "profile" of the flat (the flat has been scaled to relative QE): I think there are at least a couple of "wavelengths" present here - one very fine, at pixel level, and one maybe 4-5 pixels in wavelength. In any case, I think it will result in a strange pattern in the spectrum even if I oversample (and I will, by a factor of at least x2-x3, in order to get good dispersion and minimize seeing effects; I can bin the data later to improve SNR back a bit). I will certainly take flats, and I'll make sure that I place the zero-order star at roughly the same position for each measurement (SGP does have a crosshair in preview, so I can use that). After that I can try both with and without flat calibration to see which works better.
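As a sanity check on those numbers, stack SNR grows with the square root of the number of subs:

$$\mathrm{SNR}_{stack} = \mathrm{SNR}_{sub}\cdot\sqrt{N} = 50\cdot\sqrt{256} = 800$$

which is right in the ballpark of the measured 850-900.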
  7. While we are on this topic, I hope no one minds me asking another question related to spectrum calibration. How do I do flat calibration? Same as with any imaging - just place a flat panel on the scope with the same gear arrangement (focus included) and take subs? And I divide the spectrum image, after dark calibration, as normal? I'm asking because there is the issue of vignetting - the spectrum can "overlap" parts of the "zero order" vignetting, and flats in this case might produce wrong results? Or maybe it does not matter at all, and we flat calibrate only to eliminate pixel-to-pixel QE variations and dust on the sensor / any filters after the SA? I assume it might not matter, because any vignetting will be incorporated in the system response, but that means the recording needs to be done with the zero-order image at the exact same spot each time (for both the reference star and the one being measured)? It also means that the spectrum orientation should remain the same with respect to the sensor (but not necessarily to the star field - as we can rotate the whole assembly to get the spectrum clear of background artifacts) - that is probably better, as we want the spectrum to be horizontal anyway, I guess. Has anyone tried a background removal procedure with a "clear exposure"? I understand that it can be tricky, as the SA needs to be removed from the optical train, which can lead to a slight focus change, and one needs to do a linear fit on the zero-order stars to match intensity. I guess one can monitor FWHM / HFR to get roughly the same defocus of the zero-order image.
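For the mechanics of the division itself, this is the arithmetic I have in mind - a minimal numpy sketch, assuming master dark / flat / flat-dark frames already exist (all names illustrative):

```python
import numpy as np

def flat_calibrate(spectrum, master_dark, master_flat, master_flat_dark):
    """Standard dark + flat calibration applied to a spectrum frame."""
    light = spectrum.astype(np.float64) - master_dark        # remove dark signal
    flat = master_flat.astype(np.float64) - master_flat_dark
    flat /= flat.mean()                                      # normalize flat to ~1
    return light / flat          # corrects pixel-to-pixel QE, dust, vignetting
```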
  8. Yes, mono is the better option. You will still need to do camera response calibration, but it will be smoother, without sudden dips.
  9. I did not say that - I mentioned that it is in fact camera response (the transition from green to red) and that there is another dip like that, but less pronounced - the transition from blue to green.
  10. Not sure why you are saying it is misleading? You are in fact correct that S/N per surface area is not changed, but the point of binning is to trade "pixel surface area" for S/N improvement - one gets a coarser sampling rate by doing this. I did point out that I rescaled the unbinned shot to make creating the composite easier and to make it easier for people to see the real improvement in S/N. I also made it clear how I did it - by in fact "throwing away" 3/4 of the data; in doing so I downsampled without affecting the "per pixel" SNR of the original image, I did not introduce pixel-to-pixel correlation, nor did I affect the sharpness of the image, apart from any removal of high-frequency signal due to using a coarser sampling rate (which in fact does not happen in this image, since the image is oversampled as is).
  11. Since you are using an OSC camera, you will have two major "dips" in camera response - the R-G transition and the G-B transition. The one you marked is the R-G transition. Here is a "generic" response curve for that sensor: So the first dip, B-G, is at about 480nm, and the second is at about 580nm. In your spectrum you can also see the first dip, but it is not as pronounced: In any case, the way you create the instrument response (both sensor and optics, if coatings don't have uniform frequency response, and often they don't) is to take your spectrum and divide it by the reference spectrum of that star. This means that you should record the spectrum of a known star - one that you have a reference spectrum of. You will also need to "smooth" the result, or "smooth" the reference spectrum if it is more detailed than your spectrum; otherwise features of the reference spectrum not recorded in your spectrum will become part of the instrument response (but they are not). Here is a video tutorial for RSpec that describes that process (RSpec is used to make the video - but you can figure out the important steps and reproduce them in software of your choice): https://www.rspec-astro.com/videos/InstrumentResponse/InstrumentResponse.mp4
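In code terms the process boils down to a divide-and-smooth - a sketch, assuming both spectra are 1D arrays already calibrated onto the same wavelength grid (the boxcar width is an arbitrary illustration):

```python
import numpy as np

def instrument_response(observed, reference, kernel_px=25):
    """Observed / reference spectrum of a known star, smoothed so narrow
    reference features don't leak into the response curve."""
    raw = observed / reference
    kernel = np.ones(kernel_px) / kernel_px      # simple boxcar smoothing
    return np.convolve(raw, kernel, mode="same")

# Correcting any target spectrum afterwards:
# corrected = target / instrument_response(observed_ref_star, catalog_reference)
```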
  12. Actually, you can if you are not careful. An OAG prism has straight edges, and if you push the prism too far into the light path, so that it casts a shadow on the sensor (which can be corrected with flats) - it will cause a single spike perpendicular to the OAG edge causing it (not the X type seen in reflectors with spider supports). Here is an example of it: Or maybe this one shows it better: or this one. I was not careful enough, so the OAG prism did shadow a bit of the sensor, and only stars in that part of the frame were affected (and only bright ones). Luckily this can easily be sorted out by moving the prism out of the light path that hits the main sensor.
  13. Yes, after calibration and before stacking. I don't use PI, so I can't tell if there is an option for batch processing, but I don't see why it would not work.
  14. No. Don't think in terms of increasing signal and increasing noise. Think in terms of the ratio of the two. Depending on the type of stacking / binning you are using, the signal can increase or stay the same. If you are adding things - signal will increase. If you are averaging things - it will stay the same. Regardless of which method you choose, the SNR improvement is the same. Read noise is not the only source of noise, and if you properly expose your subs it should have the smallest impact (or rather - your sub duration should be such that read noise is no longer the dominant noise source). Therefore binning improves SNR regardless of whether effective read noise is increased in the case of software binning. In fact - you should not look at it that way: with software binning read noise "remains the same", while hardware binning slightly reduces it. Both, however, improve signal to noise ratio. Depending on what software you use to process your data, software binning (true binning) can go by a different name. For example, in PixInsight software binning is called "Integer Resample". You should use integer resample with the average method for binning your data. Here is another "synthetic" example: This montage is composed of 4 panels - top right is the original panel, gaussian noise with a standard deviation of 1. The other panels are produced by:
1. binning 2x2, average method
2. simple bicubic interpolation
3. Cubic O-MOMS interpolation
Here are the noise measurements for each panel: First is the reference panel, with a StdDev of 1 (noise with value 1). Next is binning 2x2 - it results in a very predictable improvement of x2 (noise is halved - 0.499...). Simple bicubic interpolation reduces noise even further, to 0.4, or by a factor of about x2.5, but in doing so creates correlation between pixels and slightly blurs the image. Cubic O-MOMS is last, with the smallest SNR improvement, a factor of x1.25, but it should be the "sharpest" method. In this example I did not examine the impact on image sharpness - just on the noise. However, you can clearly see a couple of important points:
1. Binning produces an exact reduction in noise by a factor of two (if the signal remains the same - average method).
2. A quality resampling method like Cubic O-MOMS gives less SNR improvement (in this case a factor of x1.25).
3. Every downsampling method improves SNR to some extent, some even more than binning - at the expense of resolution (not seen in this example, but that is what happens).
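The binning leg of that experiment is easy to reproduce; a minimal numpy sketch, assuming only that "average method" means a plain mean over each 2x2 cell:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(1024, 1024))  # reference panel, StdDev = 1

# 2x2 binning, average method: each output pixel is the mean of a 2x2 cell
binned = noise.reshape(512, 2, 512, 2).mean(axis=(1, 3))

print(round(noise.std(), 3))   # ~1.0
print(round(binned.std(), 3))  # ~0.5 - the predicted x2 noise reduction
```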
  15. Ok, here is about as simple an explanation of binning as it can get, and the related CMOS vs CCD thing.
- Binning increases SNR in the same way stacking does (or rather very similarly; there are small differences not important to this discussion). When you stack 4 images - you are adding / averaging pixel values between those images. Binning is adding / averaging 4 pixel values in a 2x2 matrix - in principle the same thing. So binning always increases the SNR of recorded data predictably - bin 2x2 and you increase SNR by a factor of 2 (same as stacking 4 subs). Bin 3x3 and you increase SNR by a factor of 3 (same as stacking 9 subs, as you are adding / averaging 9 pixels).
- The difference between hardware binning (CCD) and software binning (CMOS) is what recorded data is being binned. With CMOS you are binning the completely read-out data, while with CCD you are binning electrons prior to readout. There is a subtle difference between the two - CMOS binning happens after each value has had read noise added to it, while CCD binning is done prior to analog-to-digital conversion, so read noise is added to the binned value. The consequence: a hardware binned pixel (a group of 2x2 pixels) has the regular level of read noise - same as a single pixel. A software binned pixel has twice the read noise of a single pixel. In other words - if a CCD camera has a read noise of 5e, when you bin 2x2 and get a sub with fewer pixels (and higher SNR), it will still have a read noise of 5e. On the other hand, if a CMOS sensor has a read noise of, say, 1.7e (like the ASI1600), after you bin it - it will behave as a camera with larger pixels, but with each pixel having 3.4e read noise. That is the only difference between software and hardware binning (if we bin in the "regular" way; software binning can have a certain advantage related to pixel blur, but that is an "advanced" topic).
- As for resampling: binning is a sort of resampling, but it is not the only way to resample an image. There are a couple of things that resampling, or rather downsampling, does to your data: it creates fewer pixels covering the same FOV - hence it reduces the sampling rate; it changes the SNR of your data; and it has some effect on pixel blur and possibly introduces correlation between pixels (the last two have a small impact on the sharpness of the image). Different resampling methods give different results in all of these, except that they produce the same sampling rate. Binning is predictable - it creates a larger pixel, so the SNR increase is known, and it adds pixel blur. It does not add correlation between pixel values. Other forms of resampling can add less pixel blur, so the image will be slightly sharper, but the SNR increase is smaller (sometimes even larger, at the expense of noise statistics because of correlation) and there will be correlation between pixels (that has to do with the noise statistics of the image).
I regularly bin my data in software when I record with the 8" RC (1624mm FL) and ASI1600. The native sampling rate is around 0.5"/px, and that is of course way too much. Sometimes I bin it 2x2 and sometimes 3x3 - depending on the conditions of the session. Here is an example of a binned vs unbinned image that clearly shows the improvement in SNR: This is a single 60s uncalibrated sub of M51 taken with the above gear.
I created a copy of that sub; the first copy I software binned 2x2, while the other I "downsampled" in a particular way - one that really does nothing to the image except change the sampling rate - no correlation between pixels, no pixel blur and no SNR change (I simply took every other pixel in X and Y and formed an image out of those - no addition, no interpolation, just x2 lower sampling rate to match the scale of the binned image). Then I pasted part of the second copy over the first and did a basic linear stretch to present the result in a way that can be easily compared. One simply cannot fail to notice the improvement in SNR (it is a x2 improvement). Btw, the above image is scaled down 50%, but here is the same image at 1:1 zoom: (notice that scaling down for display, which also uses downsampling, improves SNR a bit, so the scaled-down version above does look smoother than this one at 1:1; but in order to really compare results we need to measure the resulting noise of the different approaches to determine how each behaves - I can do that for you as well if you want: make a composite image of 4 different approaches - bin x2, the above splitting of pixels, simple resample, and an advanced resample technique - and do some measurements of noise, as well as star FWHM, so you can see the impact on resolution).
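The read-noise arithmetic behind those figures, for anyone who wants to check it: software binning sums four values that each already carry read noise, so the noise adds in quadrature, while hardware binning reads the summed charge once:

$$\sigma_{soft} = \sqrt{4\,\sigma_{read}^2} = 2\,\sigma_{read} = 2\times1.7\,e^- = 3.4\,e^-, \qquad \sigma_{hard} = \sigma_{read} = 5\,e^-$$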
  16. It is interesting - not sure what the light intensity output is, but it does get rather expensive when the size goes up.
  17. Don't know - I'm getting an error on that ebay page ...
  18. I think that the simplest DIY flat box consists of the following:
1. Acrylic light diffuser panel - do a google search, these should be fairly cheap and you can get them cut to size.
2. LED strip
3. Some sort of box to put it all in
4. Optionally, a PWM dimmer for the LED strip
  19. I believe it indeed was the case. I took that quite a long time ago, and only realized some time after that that one should focus on the spectrum (if I recall correctly), so there is a good chance I was not aware of it when I took this.
  20. No specific project in mind. Actually, there is - more of a challenge than a project. Most things to date were just theoretical, and the recent discussion fueled my interest in actually going out and capturing some spectra. I've not used my SA200 much so far. One failed capture of a spectrum (at the time I had no idea how to use it properly) - very poorly focused; I'll attach the profile of the capture without any processing - just to show that almost no features are visible. I did observe spectra of stars on a couple of occasions. I want to try star classification for the purpose of determining approximate distances to stars (photometry plus stellar class), but for the time being I will settle for just practicing and capturing some decent spectra. Given the time it takes to set everything up, I want to make the most out of a session instead of just capturing a single spectrum (and a reference for calibration) - that means a list of techniques to try out, and a list of stars. Next will obviously be to see what sort of resolution I can realistically reach using different approaches - I might even have a go with a different scope - an F/10 achromat stopped down a bit. It will have less light grasp, and focusing will be even more tricky due to different focus positions across the spectrum, but multiple focus positions should take care of that as well. It does offer very good theoretical resolution - over R600 (if we exclude the effects of field curvature by means of stepped focusing). All of this is really the planning stage for some time under the stars (which has been scarce lately, and not only due to poor weather; most of the time I can't be bothered to set everything up for some reason, so hopefully this fun activity will spark interest again). Here is the "failed" spectrum. I don't even remember the gear I used to capture it, but I do know it was a color camera. I did a max stack of the channels to produce a mono image, but the transitions between colors can clearly be seen in the outline: Can't really say I'm seeing any features in it.
  21. Although we are now diverging from the main topic - which was about angles of the exit beam - I'll just comment on the SA200 part. In fact, in the light of a new day, I think that my original question for this thread was sort of silly. I don't quite understand it yet, but it sort of stands to reason that the exit beam must have exactly the shape the F/ratio of the scope implies, even in a folded design - because focal length is a function of geometry (and the aperture stays the same). Back to the SA200. @Merlin66, this all in fact started with me examining trans spec and wondering why one can't have more resolution than about R200 with the grating (as stated in a recent thread in the spectroscopy section). That led me to finally understand the coma and focus issues (which are a consequence of the grating equation and the fact that parts of the beam are not perpendicular to the grating), and knowing all of that, I managed to "tweak" parameters to get much better spectrum resolution. "Splitting" the spectrum by different focus points and then combining them should improve things further. Here is an example of the calc for my 8" F/8: I outlined the worst offender - field curvature of the grating. By using multiple focus points (say 4-5, doing parts of the spectrum separately and then joining them at the end), the next problem will be coma. If I change the calc so it does not include focus issues, here is the resolution that I'm getting: We can do a bit more tweaking to further improve things - add more dispersion to bring down the impact of seeing (at the expense of SNR, but that can be sorted with longer integration time / stacking / binning) and control coma with an aperture mask: Now, ~R500 is quite decent for a simple device like the SA200. I thought about that - it is the same thing a prism does when used in a "grism" configuration. Focusing on different parts of the spectrum is a bit more involved, but it should provide an even better result if one chooses many focus points, and it can be automated to some extent with the use of a motor focuser, as positions / shifts can be both calculated and observed / recorded for future use.
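For anyone who wants to play with the numbers themselves, here is a back-of-envelope sketch of SA200 first-order geometry from the grating equation, assuming normal incidence and small angles; the 50mm grating distance and 3.8um pixel size are illustrative values, not a recommendation:

```python
import math

lines_per_mm = 200             # SA200
d_nm = 1e6 / lines_per_mm      # groove spacing: 5000 nm

def diffraction_angle_deg(wavelength_nm, order=1):
    """Grating equation d*sin(theta) = m*lambda, normal incidence."""
    return math.degrees(math.asin(order * wavelength_nm / d_nm))

def dispersion_nm_per_px(grating_to_sensor_mm, pixel_um, order=1):
    """Small-angle linear dispersion at the sensor: d(lambda)/dx = d / (m*L)."""
    nm_per_mm = d_nm / (order * grating_to_sensor_mm)
    return nm_per_mm * pixel_um / 1000.0

print(diffraction_angle_deg(550))         # ~6.3 deg for 550nm in first order
print(dispersion_nm_per_px(50.0, 3.8))    # ~0.38 nm/px at 50mm with 3.8um pixels
```

Increasing the grating-to-sensor distance is exactly the "add more dispersion" lever mentioned above: the same seeing blur in pixels then spans fewer nanometers.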
  22. Actually, the main blur at those parameters is due to field curvature. The seeing limit can be "overridden" by the grating distance - the star profile stays the same in arc seconds and hence in pixels, but the dispersion can be increased, so one gets fewer nm/pixel and hence fewer nm per star blur. Due to the grating equation, d·sin(θ) = m·λ, not all rays that would otherwise be focused at the same place (and are in the zero-order image) are bent by the same amount; that causes two issues - spectrum coma and focal point shift. It's a bit like in this image: the central ray that is perpendicular gets bent by the exact angle. The "left" ray gets bent slightly differently than the "right" ray. They no longer intersect in a single point (coma), and they intersect a bit "before" the regular focal plane. This point of intersection depends on wavelength, so we have a curved line of places of best focus. Only one point on the spectrum will be in perfect focus - the others will be slightly defocused (outside focus on one side, inside focus on the other). According to the calculations, in my case that is the "worst" offender. That can be compensated to a degree if multiple images of the spectrum are obtained, focusing on different parts of the spectrum instead of on the middle, and then composing the spectrum out of multiple parts - each around its point of best focus. Anyway, I was under the impression that the F/ratio of the scope can be different from the F/ratio of the exit beam in compound scopes, because I see focal length as a measure of the conversion of angle to linear distance in the focal plane, and I was not sure whether the shape of the exit beam had anything to do with the ratio of focal length and aperture - but it might be that those two are tied together and the link is just not obvious to me.
  23. But that is exactly my question. I know that the F/ratio of my scope is F/8, because it is a 200mm / 1600mm scope. Does this mean that the output focal ratio is going to be the same? It certainly is with a refractor or a newtonian telescope, but folded designs mean the light changes angles (fast primary and magnifying secondary) - and I'm just not sure that the output "focal ratio" will be the same as the focal ratio of the telescope calculated as focal length / aperture. The output "focal ratio" is important to the SA200 calcs, because it drives coma in the spectrum and spectrum field curvature.
  24. Can you please be more specific - not sure what to do or even where to start.
  25. That is a very good point. I would need the F/ratio of the primary mirror and the exact spacing between primary and secondary - to calculate the illuminated diameter on the secondary - and then the secondary-to-focal-plane distance, to see the shape of the beam. That is of course an approximation, since the secondary is not flat but curved (only slightly, though, and it would not change things much). However, I don't have those numbers for my scope (maybe 8" F/8 designs don't differ much, and I'll be able to find the info online). I wanted to do some calculations for the SA200, and the F/ratio of the beam is important for coma and curvature (I finally figured out why and how) - and slower is better - but I wondered if the beam is in fact slower than the F/ratio of the scope would suggest. Maybe the easiest way to explain the question, given an F/8 RC telescope, is: is the marked ratio 8 as well? (That ratio being, almost exactly, the secondary-to-focal-plane distance over the illuminated diameter on the secondary - illumination coming from rays parallel to the optical axis.)
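A sketch of the estimate in question - treating the converging beam as a simple cone from the illuminated patch on the secondary to the focal plane (the secondary treated as flat, as the approximation above allows; the numbers are purely hypothetical):

```python
def output_f_ratio(secondary_to_focal_plane_mm, illuminated_diameter_mm):
    """Approximate output beam F/ratio as cone length over cone base diameter."""
    return secondary_to_focal_plane_mm / illuminated_diameter_mm

# Hypothetical example: a 60mm illuminated patch, 480mm from the focal plane -> F/8
print(output_f_ratio(480.0, 60.0))
```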