
vlaiv

Advanced Members
  • Content count

    897
  • Joined

  • Last visited

Community Reputation

516 Excellent

About vlaiv

  • Rank
    Proto Star

Profile Information

  • Gender
    Male
  • Location
    Novi Sad, Serbia
  1. @swag72 Could you do a comparison between the master calibration frames, just to let us know what has changed between them? Probably the easiest thing is to post (new master - old master) - it will show the differences nicely. Also, what sensor is that?
  2. 163c or 183c?

    For those lenses, I think both cameras are a good choice - it really comes down to your imaging interests. With the 163 you will get a larger FOV at slightly lower resolution (5.68"/pixel on the Samyang and 2.24"/pixel on the WO71), while the 183 will give a bit higher resolution (3.67"/pixel and 1.41"/pixel) over a smaller FOV. A bonus of the smaller sensor is that in some cases you can get away without corrective optics (the Samyang at F/2 will probably be sharp to the edge with the 183). Although I have the mono equivalent of the 163 (ASI1600) and I'm exceptionally happy with it, in your position I would probably choose the 183 - but that is simply me: I like to have the option of both low and high resolution imaging, so 3.67"/pixel and 1.41"/pixel would suit me better than the 5.68/2.24 combination. On the other hand, if you plan to add a scope with more focal length to your imaging stable (something like a 130PDS, for a resolution of 1.21"/pixel), then go for the 163. HTH
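The pixel scales quoted above follow from the standard relation: scale ("/pixel) = 206.265 x pixel size (um) / focal length (mm). A quick sketch - the pixel sizes and focal lengths here are my assumed figures for these cameras and optics, not stated in the post:

```python
def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

# Assumed specs: ASI183 ~2.4 um pixels, 163-class sensor ~3.8 um pixels,
# Samyang 135 mm, WO71 ~350 mm, 130PDS ~650 mm focal length
print(round(pixel_scale(2.4, 135), 2))   # Samyang + 183  -> ~3.67
print(round(pixel_scale(2.4, 350), 2))   # WO71 + 183     -> ~1.41
print(round(pixel_scale(3.8, 650), 2))   # 130PDS + 163   -> ~1.21
```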
  3. Not really sure what you are asking, but if it is how to judge the seeing on a particular night (and hence the feasibility of using a high resolution setup), here are some guidelines that might help:
    - First, check the seeing forecast, to tell in advance whether it is going to be a good night for high resolution work: https://www.meteoblue.com/en/weather/forecast/seeing/alicante_spain_2521978 Also useful is the jet stream forecast (jet stream overhead = crappy seeing): https://www.netweather.tv/charts-and-data/jetstream Not related to seeing, but you might want to check this as well if going after galaxies: https://atmosphere.copernicus.eu/maps/aerosol-forecasts#1._aerosol_optical_depth_at_550_nm_(provided_by_cams,_the_copernicus_atmosphere_monitoring_service)/4/46.19/9.34 If the forecast for the night is sub 1.5" seeing, I would say it is worth going for the high res stuff.
    - Next, take a 2 second exposure of a fairly bright star and check its FWHM (at best focus). PHD2 will give you these stats, but it is better to do it with the main scope rather than the guide scope / OAG, as those tend to present a less than perfect star profile, and the Gaussian fitting might be a bit problematic in that case. The actual star FWHM will differ from the seeing FWHM depending on the aperture used. In a 2s exposure the tracking error is minimized, so you don't need to guide for this - provided you have a decent mount that can track OK for 2s (almost any mount can). To be sure, take several measurements. Over time, using this technique, you will get a feel for "good" values for high res work with that particular setup.
    The star FWHM will be larger in an actual sub because of the tracking/guiding error over a long exposure. See the spreadsheet in the attachment (LibreOffice/OpenOffice format) for a calculation of the FWHM in the final sub based on: 1. aperture, 2. seeing FWHM (not your measurement, but the aperture independent one - like the meteoblue forecast), 3. PHD2 RMS (sigma of the Gaussian guide error).
    Now, the final point is somewhat undefined at the moment: the optimum sampling resolution for an image with a given star FWHM. A general rule is to sample 2-3 times per FWHM, so if for example your star FWHM is 3", you should be good for resolutions in the range 1"/pixel - 1.5"/pixel. HTH FWHMCalc.ods
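The spreadsheet's combination of aperture, seeing and guiding can be approximated by adding the three blur terms in quadrature. This quadrature sum is my simplification - the attached FWHMCalc.ods may do something more refined - and the example numbers are invented:

```python
import math

def airy_fwhm(aperture_mm, wavelength_nm=550):
    """Approximate FWHM of the Airy pattern in arcseconds (~1.02 * lambda / D)."""
    return 1.02 * 206265 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)

def expected_star_fwhm(aperture_mm, seeing_fwhm, guide_rms):
    """Rough FWHM in the final sub: seeing, diffraction and guiding added in
    quadrature; guide RMS (sigma) converted to FWHM via the 2.355 factor."""
    guide_fwhm = 2.355 * guide_rms
    return math.sqrt(seeing_fwhm**2 + airy_fwhm(aperture_mm)**2 + guide_fwhm**2)

# e.g. 8" scope, 1.5" seeing, 0.5" RMS guiding
print(round(expected_star_fwhm(200, 1.5, 0.5), 2))   # ~1.99
```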
  4. Yes, but the beauty of this approach is that I have everything I need to assemble it. I did some more research, and it turns out that I might be able to get a fully collimated beam with this setup - that would eliminate coma completely. A barlow can produce a parallel beam from a converging beam if it is placed so that the focal point of the barlow coincides with the focal point of the telescope. On the cheap x0.5 reducer - it turns out these work best on parallel beams; faster cones introduce SA and other aberrations. It looks like cheap achromatic reducers are objective lenses for 25mm binoculars and finders, according to this source: https://www.cloudynights.com/topic/532924-are-gso-05x-focal-reducers-any-good/ I've also found that the FL of the reducer is around 101-103mm, and I guess that the x2 barlow element has something like 70-80mm FL. So that is at least a starting point for coming up with some sort of calculation for the spacing and distances.
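The "barlow focal point at the telescope focal point" condition can be checked with the thin-lens equation: a negative element of focal length -f, placed a distance f inside the converging cone's focus, sends the beam out parallel. A minimal sketch, assuming ~80 mm FL for the x2 barlow element (the post's own guess is 70-80 mm):

```python
def image_distance(virtual_obj_mm, barlow_fl_mm):
    """Thin-lens sketch for a negative (barlow) element of focal length
    -barlow_fl_mm placed in a beam that would otherwise focus
    virtual_obj_mm behind it. Returns the new focus distance behind the
    lens; float('inf') means the beam leaves collimated."""
    denom = 1.0 / virtual_obj_mm - 1.0 / barlow_fl_mm
    return float('inf') if abs(denom) < 1e-12 else 1.0 / denom

# Focal points coincide: element 80 mm inside focus -> parallel beam
print(image_distance(80.0, 80.0))   # inf
# Typical barlow use: 40 mm inside focus -> refocused 80 mm out (x2)
print(image_distance(40.0, 80.0))   # 80.0
```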
  5. I'm probably on the verge of spamming this section of SGL with my "ideas", but here is another one ... Again the topic is how to maximize the SA200's effectiveness to achieve the best possible resolution. 200 lines/mm is quite a "potent" low (maybe to mid?) resolution grating, depending on how it is utilized. I'm aware that ultimately seeing will be the limiting factor for any sort of slit-less spectrograph.
In a converging beam we have two "forces fighting" to limit the resolution: one is the beam width, or number of illuminated lines; the other is the beam angle, which results in coma - the shallower the beam convergence, the more coma is introduced, lowering the resolution. So I asked myself how to widen the beam while making the beam itself steeper. One obvious answer is to use collimating lenses, but I wanted to see if I could use gear that I already have in my "astro box". A bonus would be if I did not have to purchase or fabricate any adapters to put the whole assembly together. Then the idea came - how about the following setup: sensor - focal reducer - SA200 - barlow lens - scope.
I have all the bits and bobs needed for this: all sorts of adapters (1.25" filter thread to T2 thread, various extenders in 1.25", T2 and 2") to put everything together, though with limited options for spacing; a 0.5x 1.25" reducer (GSO variety); and two nose-piece 1.25" barlow elements - x2 GSO and x2.7 APM (coma corrected). The barlow, being a negative lens, will diverge the incoming F/8 beam (8" RC scope), turning it into F/16 or even F/21.6 (depending on the barlow element used), and the reducer element will "compress" the dispersion, so the grating can be mounted optically further from the sensor - making the beam width at the grating larger (at least that is my reasoning, I'm not 100% sure).
The problem is, of course, that I have no idea how to calculate the distances, the resulting dispersion, the beam width on the grating, or any of the parameters in general. I don't even know if such a setup will properly come to focus. I suspect it will, since the RC has quite a large back focus, and it may even happen that the focus position is not altered by much (the barlow moves focus out, the reducer brings it back in - it might be that those two cancel out). There will be significant vignetting, but that is not important for point sources like stars (the rest of the field will be "strange", but a star close to the optical axis should be OK even with 1.25" elements). Any thoughts on this?
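One of the quantities asked about - the beam width on the grating, and hence the number of illuminated lines - falls out of simple cone geometry: an F/N cone is L/N wide at a distance L from focus, and in first order the resolving power is at most the number of lines inside the beam. A sketch under assumed numbers (the 45 mm grating-to-focus spacing is purely illustrative):

```python
def beam_width_mm(dist_to_focus_mm, f_ratio):
    """Diameter of a converging F/f_ratio cone at dist_to_focus_mm before focus."""
    return dist_to_focus_mm / f_ratio

def illuminated_lines(dist_to_focus_mm, f_ratio, lines_per_mm=200):
    """Grating lines inside the beam; first-order resolving power R <= this."""
    return beam_width_mm(dist_to_focus_mm, f_ratio) * lines_per_mm

# Native F/8 vs the barlowed F/16 and F/21.6 cones, SA200 at 45 mm from focus:
# slowing the beam reduces coma, but at fixed spacing it also narrows the beam,
# so the grating must sit further out to keep the line count up
for fr in (8, 16, 21.6):
    print(fr, illuminated_lines(45, fr))
```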
  6. M51 with a ZWO ASI224

    Yes, unity gain included (it also avoids quantization noise).
  7. Oh, I get it - the shape of the line tells us something about the star's dynamics. I did not think of that (but it is very logical - Ha scopes need tuning to account for the Doppler shift when observing features in motion).
  8. M51 with a ZWO ASI224

    Quantization noise is rather simple to understand, but difficult to model, as it does not have any sort of "normal" distribution. The sensor records an integer number of electrons per pixel (quantum nature of light and particles), and the file format used to record the image also stores integers. If you have unity gain - meaning an e/ADU of 1 - the number of electrons gets stored correctly. If on the other hand you choose a non-integer conversion factor, you introduce noise that has nothing to do with the regular noise sources. Here is an example: you record 2e, and your conversion factor is set to 1.3 ADU per electron. The value written to the file should be 2 x 1.3 = 2.6, but the file supports only integer values (it is not really up to the file, but to the ADC on the sensor, which produces 12-bit integer values with this camera), so it records 3 ADU (the closest integer to 2.6; in reality it might round down instead of using "normal" rounding). But given 3 ADU recorded at a gain of 1.3, what is the actual number of electrons captured? Is it 2.307692... (3/1.3), or was it 2e? So just by using a non-integer gain you introduced 0.3e of noise on a 2e signal.
On the matter of the gain formula - it is rather simple. ZWO uses a dB scale to denote the ADU conversion factor, so Gain 135 is 13.5 dB of gain over the lowest gain setting. How much is the lowest gain setting, then? sqrt(10^1.35) = ~4.73 e/ADU.
OK, so if unity is 135 (13.5 dB, or 1.35 bels), we want gains whose conversion factors are multiples of 2. Why multiples of two? In binary representation there is no loss when multiplying/dividing by powers of two (similar to the decimal system, where multiplying or dividing by 10 only moves the decimal point) - so these are guaranteed quantization-noise free (for higher gains; at lower gains you still have quantization noise, because nothing is written after the "decimal point"). The dB system is a logarithmic system, like magnitudes: multiplying amplitudes means adding dBs, and 6 dB of gain is roughly x2 (check out https://en.wikipedia.org/wiki/Decibel - there is a table of values; look under the amplitude column). Since gain with ZWO is in units of 0.1 dB, 6 dB is then +60 gain. Btw, you should not worry if the gain is not strictly a multiple of 2 but only close to it: the closer you are, the more the quantization noise is confined to higher signal values (because of rounding) - and at higher values the SNR is already high (because the signal itself is high).
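The dB bookkeeping above can be sketched in a few lines, assuming (as the post does) unity gain at a setting of 135 and gain units of 0.1 dB on the amplitude scale; the helper names are mine:

```python
def e_per_adu(gain_setting, unity_setting=135):
    """Electrons per ADU for a ZWO-style gain setting (0.1 dB amplitude units)."""
    return 10 ** ((unity_setting - gain_setting) / 200)

def quantization_error(electrons, gain_setting):
    """Electron error from rounding to an integer ADU at this gain setting."""
    factor = 1 / e_per_adu(gain_setting)     # ADU per electron
    adu = round(electrons * factor)          # the ADC outputs integers
    return adu / factor - electrons          # recovered minus true signal

print(round(e_per_adu(0), 2))      # ~4.73 e/ADU at the lowest setting
print(round(e_per_adu(135), 3))    # 1.0 -> unity, integers stored exactly
print(round(e_per_adu(195), 3))    # ~0.5 -> +60 gain units (6 dB) halves e/ADU
print(quantization_error(2, 135))  # 0.0 -> no quantization noise at unity
```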
  9. Good point - depending on how high-resolution the reference spectrum is, I can low-pass filter it to make it "softer", and I can choose the filter cutoff frequency so that it still contains more information than both the raw and the processed captured spectra. Not really getting this - maybe because I'm talking about low resolution, R<1000, where no individual lines are resolved to their actual shape. I'm interested in seeing if I could overcome the seeing-induced limit on spectrum resolution. The grating would be theoretically capable of ~R1200, but seeing would limit that to ~R350. Can I recover information and reach a spectrum resolution of, let's say, ~R600 using processing? And by recovering resolution I mean: close spectral lines that appear as one "dent" at R350 would appear as two separate "dents" at R600, with identifiable central wavelengths and relative strengths.
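The low-pass filtering idea - degrading a high-resolution reference down to a chosen R by convolving with a Gaussian of FWHM = lambda / R - might look something like this. A sketch: the function name is mine, and it assumes a uniform wavelength grid:

```python
import numpy as np

def smooth_to_resolution(wavelengths, flux, target_r):
    """Degrade a spectrum to resolving power target_r by convolving with a
    Gaussian of FWHM = lambda / target_r (uniform wavelength grid assumed)."""
    center = wavelengths[len(wavelengths) // 2]
    step = wavelengths[1] - wavelengths[0]
    sigma = (center / target_r) / 2.355 / step   # FWHM -> sigma, in samples
    half = int(4 * sigma) + 1
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                       # normalize to preserve flux
    return np.convolve(flux, kernel, mode="same")

# Two absorption lines 10 A apart on a flat continuum: resolved in the
# reference, blended into one shallow "dent" after smoothing to R350
wl = np.linspace(6000, 6100, 1001)
spec = (1 - 0.5 * np.exp(-0.5 * ((wl - 6045) / 1) ** 2)
          - 0.5 * np.exp(-0.5 * ((wl - 6055) / 1) ** 2))
low_res = smooth_to_resolution(wl, spec, 350)
```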
  10. M51 with a ZWO ASI224

    Software should be able to pick up stars regardless of the gain - it is SNR that matters, not whether the star is visible on the unstretched image. You also don't need to go that high in gain; unity should be OK - there is not much difference in read noise between unity and high gain. But if you want to exploit that last bit of lower read noise, use a gain whose conversion factor is a multiple of 2, to avoid quantization noise. That would be 135 + n*60, so 135, 195, 255, ...
  11. I just love to reinvent the wheel - kind of my specialty. I was not really after "improving the look" of the curve, but rather interested in having a go at actually improving the resolution of the spectrum. I think it would be interesting to see if one can do that. The math supports it, and techniques like RL deconvolution have properties such as flux preservation, so we are not talking about purely cosmetic alterations here. Personally, I think wavelets, used for multi-frequency analysis, might be a better choice than RL deconvolution, but I have no idea about the mathematical properties of such a transform (other than that it can boost attenuated high-frequency components).
I have a simple idea for how to test it. The good thing about a full-aperture grating is that the dominant aberration will be the seeing PSF (well, actually the Airy PSF convolved with the seeing + tracking error Gaussian), which for a long exposure can be well approximated by a Gaussian - and the Gaussian and its Fourier transform are well understood, so we know which frequencies need to be boosted. The test would go like this: record a spectrum and calibrate it using the standard calibration method without any alterations, and compare it to a high-resolution reference spectrum of that star using some metric - RMS error or similar. Then process the spectrum in a certain way and compare it again to the high-resolution reference using the same metric, to see if we lowered the error and/or introduced some kind of unwanted artifacts.
I also have a couple more ideas to test. One is stacking spectrum images vs stacking extracted spectral data (in 1D) - I suspect the latter will help with artifacts introduced when the spectrum is not sampled aligned to the sensor pixel matrix (but at an angle); some dithering between exposures would help here. Then there is the matter of center-line vs offset spectrum extraction: in Gaussian-PSF-blurred data, the center line is the most blurred, while an offset line will have lower SNR due to less signal getting there.
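The RL test procedure described above can be prototyped on synthetic data first. This is a minimal 1-D Richardson-Lucy sketch written from the standard multiplicative update (not any particular package's implementation); the toy line profile and PSF widths are invented for illustration:

```python
import numpy as np

def gaussian_kernel(sigma, half_width):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-half_width, half_width + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def richardson_lucy_1d(observed, psf, iterations=100):
    """Minimal 1-D Richardson-Lucy deconvolution (multiplicative update)."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy version of the proposed test: two blended "spectral lines" blurred
# by a seeing-like Gaussian, then deconvolved with the known PSF
x = np.arange(200, dtype=float)
truth = np.exp(-0.5 * ((x - 90) / 2) ** 2) + np.exp(-0.5 * ((x - 110) / 2) ** 2)
psf = gaussian_kernel(10.0, 35)
blurred = np.convolve(truth, psf, mode="same")   # peaks merge into one blob
sharpened = richardson_lucy_1d(blurred, psf)     # peaks re-emerge, flux kept
```

On real data the PSF would be estimated from the zero-order star image rather than assumed, and noise would force an early stop on the iteration count.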
  12. @Merlin66 Since the above clearly shows that the spectrum will be seeing limited, have you ever tried some sort of frequency restoration method to "sharpen" a seeing-limited spectrum? Deconvolution, or maybe simple wavelet processing? I don't see a reason why it would not work, if we assume a Gaussian PSF for long-exposure subs and stack multiple frames to get good SNR in the final result.
  13. Oh, a new version of that spreadsheet - thanks for that. I was using V2.1 to do calculations for different configurations with the SA200; this one has an objective grating section as well. According to the spreadsheet, with 6 lines/mm and 200mm of aperture I'll be seeing limited in resolution. A total of 1200 lines would give a theoretical resolution of R1200, but the star size for 2" FWHM would limit me to R384 at native F/8. Dispersion would be 3.9 A/pixel, which is quite enough to record 15.6A resolution (even binned to boost SNR). I'm going to see what difference using a reducer makes - not sure if it will make any, since with an objective grating the star size will be smaller, but so will the sampling resolution. And indeed: not much difference, so there is no point in using a focal reducer (except for boosting the SNR of the spectrum), at a small cost in spectrum resolution - R343 with the reducer.
I have found a paper that describes exactly the approach I came up with - printing with a laser printer on overhead projector clear film. The author also concluded that 6 lines per mm is achievable with 600dpi laser printers. Here is the reference: http://aapt.scitation.org/doi/abs/10.1119/1.2768688 and from the abstract: "A standard laser printer can print black lines (separated by a white line) at 60 black lines/cm (about 150 lines/in), which is a small enough spacing to produce a crude diffraction grating [see Fig. 1(a)] that is sufficient for the physics inquiry activities described in this paper" (not sure what the paper itself is about - I just read the abstract and found confirmation that a printed grating will work). So it looks like this could indeed be a viable solution for an ultra-cheap way into spectroscopy.
I just ran the numbers for a smaller scope - something like an 80mm F/6 - and the results are still good: still seeing limited, R227 for 2" seeing, again using the same 6 lines per mm grating. Probably the only drawback of this method is efficiency - the grating will not be blazed, so light will be spread equally on both sides of the source.
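The spreadsheet numbers quoted above can be reproduced approximately from the grating equation in the small-angle limit (linear dispersion = grating period / focal length). The 3.75 um pixel and the 6000 A reference wavelength are my assumptions, which is probably why my seeing-limited R lands near, but not exactly at, the quoted R384:

```python
# Printed 6 lines/mm objective grating on a 200 mm F/8 scope
LINES_PER_MM = 6
APERTURE_MM = 200
FOCAL_MM = APERTURE_MM * 8          # 1600 mm
PIXEL_UM = 3.75                     # assumed sensor pixel size

theoretical_r = LINES_PER_MM * APERTURE_MM        # total lines illuminated
d_angstrom = 1e7 / LINES_PER_MM                   # grating period in Angstrom
dispersion_a_per_mm = d_angstrom / FOCAL_MM       # first order, small angles
dispersion_a_per_px = dispersion_a_per_mm * PIXEL_UM * 1e-3

seeing_fwhm_arcsec = 2.0
star_mm = FOCAL_MM * seeing_fwhm_arcsec / 206265  # star size at focal plane
delta_lambda = dispersion_a_per_mm * star_mm      # seeing-blurred line width
seeing_limited_r = 6000 / delta_lambda

print(theoretical_r)                   # 1200
print(round(dispersion_a_per_px, 2))   # ~3.9 A/pixel
print(round(seeing_limited_r))         # a few hundred, the spreadsheet's ballpark
```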
  14. After some more thinking about it, variability in stellar output would not be a problem for a pair of satellites orbiting with a 180 degree phase shift in some sort of solar orbit (which might even be larger than Earth's orbit, for added precision). If both satellites record photometric data at the same time, that would circumvent the problem of stellar luminance changing over time. So it might be feasible as a space mission. Thanks for the info - I will look into Kepler mission data to see if anything useful for this can be obtained.
  15. Good point. Long-term observation would be required to build the respective light output curve, but I suspect even that would not help, since there is variability over the stellar cycle, and that would probably be larger than the measured difference.