Everything posted by vlaiv

  1. Just a couple of points:

     I think it is better to aim for lines that are more than a single point thick - I'm not sure a line made from a single printed point will be straight enough, or it may have holes in it. Go for 4-5 dots per line instead. That gives you 10 lines per mm at 2400dpi (~94 dots per mm, so 9 dots per line-plus-space period gives 10 lines per mm, with a 4.5 dot line width).

     Grating resolution is determined by the total line count, so 10 lines per mm may sound very low, but put it on an 80mm objective and you will get R800 resolution from it. So depending on aperture, a few lines per mm is not such a bad thing in terms of resolution.

     Another thing to consider is dispersion: the more lines per mm you have, the larger the angle of diffracted light. This is important because it determines how spread out the spectrum will be on the sensor. Here again, you don't want very large dispersion in an objective grating, because you will need a wide field instrument to record it. For example, the SA100 puts the first order spectrum of 550nm light at an angle of ~3.15 degrees (if I got the calculation right), so you need a very short focal length instrument capable of wide field to record it. My TS80 with ASI1600 works at 2"/pixel, and it would take 5670 pixels just to record from 0 order to 550nm (the ASI1600 does not have that many pixels).

     Now let's look at the dispersion for 900nm and a 10 lines per mm grating. Again, if my calculation abilities are up to the task, that gives something like half a degree. That means I would need about 900 pixels on my ASI1600 + TS80 to record the full spectrum, from 0 to 900nm. This again is not really the best option, as sampling would lower the achievable resolution - in this case to something like 2nm. So for 10 lines per mm I would probably use the RC8", as it gives me 0.5"/pixel with the ASI1600. 0-900nm would then cover 3600px (the camera is 4600px wide), sampling resolution would be 5A (pretty good), and the whole system would be seeing limited.

     So one can fiddle with these numbers to get the best match for their instrument (focal length, sensor size and resolution, etc). TransSpec v3.1, which Merlin66 attached above, gives you a spreadsheet to play with the numbers; it has an objective grating tab, but it is initially geared towards the SA100 used as an objective grating on a short FL DSLR lens. By changing the numbers you can see the calculations for this type of printed grating as well. A worked example of the dispersion arithmetic is sketched below.

     Btw, I just realized that a Bahtinov mask is a very low resolution objective grating - and it really works well. Here we can see at least 5 orders, and the mask itself has a grating with spacing of many mm.
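     For anyone who wants to check these numbers outside the spreadsheet, here is a minimal sketch of the dispersion arithmetic (function names are mine, not TransSpec's):

     ```python
     import math

     ARCSEC_PER_RAD = 206265

     def first_order_angle_arcsec(lines_per_mm, wavelength_nm):
         # grating equation, first order: sin(theta) = wavelength / line spacing
         d_nm = 1e6 / lines_per_mm
         return math.asin(wavelength_nm / d_nm) * ARCSEC_PER_RAD

     def span_pixels(lines_per_mm, wavelength_nm, arcsec_per_pixel):
         # pixels needed from 0 order to the given first-order wavelength
         return first_order_angle_arcsec(lines_per_mm, wavelength_nm) / arcsec_per_pixel

     # SA100 at 550nm: ~3.15 degrees, ~5670px at 2"/pixel (TS80 + ASI1600)
     print(first_order_angle_arcsec(100, 550) / 3600, span_pixels(100, 550, 2.0))

     # 10 lines/mm printed grating at 900nm: ~0.5 degrees, ~900px at 2"/pixel
     print(first_order_angle_arcsec(10, 900) / 3600, span_pixels(10, 900, 2.0))
     ```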
  2. Can't really tell if it is worth it. I do my own testing - Roddier analysis - and so far I've been happy with the quality of my optics. I have an 8" dob which showed 0.8 or greater system Strehl (so the primary is probably quite a bit better than 0.8). It is the mass produced Skywatcher variety, and I'm quite happy with the scope. I also tested an RC8" from Teleskop Express: 0.94 or better system Strehl. Again, really pleased with the scope. A TS Optics Photoline 80mm F/6 apo showed 0.8 or greater in the blue part of the spectrum, 0.94 or greater in green and 0.98 in the red part of the spectrum.

     With today's technology and manufacturing processes, I think it is very unlikely that you will end up with a bad sample. So I'm fairly confident when purchasing scopes these days that I'll get a quite usable instrument (for that sort of money), and I will be able to tell if a scope is below usable optical quality and should be returned. In my view, ordering additional testing is worth it only as insurance - if you don't want to risk having to return a scope, or don't even want to consider such a possibility.
  3. Well, it looks like I'm going to have a chance at taking a peek through your new FC 100 DF (that is a new scope, I guess?) if you are planning to come to the Messier Marathon on the 14th/15th?
  4. Hi, no, not yet. I did not get much chance to observe this winter - the weather was bad - so the purchase is postponed until further notice. The TS Photoline 102 F/7 is still the main candidate for me. I have an 80mm F/6 apo triplet (also Photoline), but I use that only for imaging; for visual I wanted something with a bit more light grasp and a bit more FL. I'll probably do a sort of review with testing when I purchase the scope, so you will get the chance to see my impressions - unless, of course, you make your purchase first.
  5. I would say that for a 10-bias stack, a result of 0.011 is pretty close to 0 (if you increase the number of bias subs per stack, the value should get closer to 0). Based on this I also think the ASI224 has a stable bias - meaning that for the same settings you will get a good bias file each time (a difference close to 0 just means there is no difference in bias signal between two different sessions). So you can use bias for calibration (although it is not really needed).

     You can do one more test: select your difference frame, go into Image/Adjust/Brightness contrast, and set the min and max sliders so that they bracket the main part of the histogram. This will make all features in the difference distinct. It should be just noisy, with no particular features. In the image above there is, for example, visible vertical and horizontal banding - that is characteristic of CMOS sensors, and it shows that the read noise is not quite Gaussian in distribution, but given enough subs it should not show in the final image (both calibration subs and light subs).
  6. You can do that very simply with ImageJ (open source). Here is the workflow: open the first set of subs as a stack (File/Import/Image Sequence, choosing details if you need to), then create the first master by averaging the stack (Image/Stacks/Z Project with Average Intensity projection). Close the opened stack of images, open the second stack and do the same process until you get the second master. At the end, do Process/Image Calculator with the Subtract operation (32-bit result) - this will create the difference of the two masters. Select that image, adjust brightness/contrast, and examine the result. A scripted version of the same test is sketched below.
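     If you prefer a script, here is a minimal numpy equivalent of the same test (astropy is used for FITS loading; file names and layout are hypothetical):

     ```python
     import numpy as np
     from astropy.io import fits
     from glob import glob

     def master(pattern):
         # plain average stack of all subs matching the pattern
         subs = [fits.getdata(f).astype(np.float64) for f in sorted(glob(pattern))]
         return np.mean(subs, axis=0)

     master1 = master("session1/bias_*.fits")   # hypothetical paths
     master2 = master("session2/bias_*.fits")

     diff = master1 - master2
     print("mean:", diff.mean())   # should be very close to 0
     print("std:", diff.std())     # pure random noise, no visible pattern
     ```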
  7. I'm having a bit of trouble following your workflow - just because I'm not used to PI screens. If I'm reading the screens above correctly, you are concerned about the image below point 6? Or the left part of it, representing the difference between two bias stacks?

     For the test, the best thing to do is to make each stack a straight average of subs - avoid any sort of sigma clip or cosmetic correction (hot/cold pixel rejection, whatever). Then do simple arithmetic to subtract the two images. Now, the image you produced (in similar fashion) shows something that can be considered bad - a vertical streak. You want your difference to be pure uniform noise, and if you take the average value of all pixels in the difference, you want it to be very close to 0 (it will not be exactly 0, but very, very close - something like 0.00000003). If you get any value other than very close to 0, the bias is not stable.

     You can then check whether the darks are stable. Use the same approach - make two stacks of darks at the same temperature and of the same duration. If these turn out to be ok (as is the case with the ASI1600, for example), then all is fine - just don't use bias frames. You get all the functionality you need, like exact photon/electron count, by calibrating the light with a master dark (composed of dark frames without subtracting bias) and dividing by the flat (a normalized flat if you want to do photometry or something).
  8. Collimating an RC is not that hard. The first thing is to identify what needs collimating. I can help you with some basics, but there are a few videos on youtube that will explain it a bit better.

     First, check whether the star elongation is due to wrong collimation or something else is at play. Inspect your subs, rotate the camera, then inspect the subs again. If in each sub the star elongation points in the same direction across the field, and it rotates when you rotate your camera, then you might have a mount / guiding problem. A 14" RC has a lot of FL, so you might be imaging at very high resolution - any sort of guiding / PA error will show, and show a lot. If you do see this, examine which direction aligns with the star elongation: if it is DEC, you need better PA; if it is RA, you might have a guiding problem (meaning your periodic error is not being guided out completely).

     If you have round stars in one corner and elongation in the other corners, it can be collimation related, and to collimate do the following:

     Step 1: Put a star in the center of the field and defocus it. Make sure the doughnut is concentric - you achieve this by collimating the secondary.

     Step 2: Focus the star in the center of the field and then, using some sort of aid, check how much out of focus it is in the corners (don't change focus - just slew the scope, take a frame and measure the FWHM of a star, or put a Bahtinov mask on and look at the defocus). At this point you can figure out whether you need to collimate the primary or the focuser, depending on how the defocus is distributed among the corner stars. This is a bit tricky to get right, but software like CCD Inspector can help. If there is a linear gradient in defocus (for example, the two top corners have the same amount of defocus, and the two bottom corners have the same amount as each other but different from the top corners), you need to fix the tilt, and that is done by collimating the focuser. If on the other hand you have a "bowl" like distribution that is not centered on the frame center, then you need to collimate the primary.

     After collimating the primary you need to go back to step 1 and repeat. After collimating the focuser / tilt you don't need to, so it is best to leave tilt collimation for last, and only if you do indeed have tilt. Here is a good guide that will help you out: https://deepspaceplace.com/gso8rccollimate.php
  9. And there I was believing we had a convert, given your recent experience with the 6" F/6 Newtonian
  10. Interestingly enough, most of the people in this thread standing up in defense of the refracting telescope design (me included) did little to counter the actual arguments presented in the article. Most of the things listed in the article are in fact true. I think it would be in the best interest of the OP and the general community if participants in this discussion either stated their exact disagreement with a particular point made in the article, or provided an alternative view of why refractors are indeed good (a particular use case, or even personal preference). I would not focus my attention on the author either; everyone has a right to voice their opinion, and their particular style might not suit us well, but we should be able to distinguish their preferences / views from actual claims (which we can subject to counter-argument).
  11. I agree with most of the article, except the title - that one is 100% wrong. Does that mean I don't have and enjoy fracs? Noo ... I have two of them, and am looking at a third (I will still be holding on to two - the SW ST102 will have to give way ...). Do I have a dob? Yes, and an RC also. While most of the things listed in the article are true, that does not mean refracting telescopes are no good - even achromatic refractors. I don't think anyone would be displeased with a 4" F/10 achromat on an AZ4, both for deep sky and solar system. Well, if you enjoy observing and you are not after more, better, gooder ...
  12. Honestly, I don't quite understand what you said. But I would like to point something out: 2.4 pixels per Airy disk radius is the theoretical optimum sampling value, based on ideal seeing and the ability to do frequency restoration for those frequencies that are attenuated to 0.01 of their original value. That requires good SNR (like 50-100) and good processing tools. In a real life scenario, seeing causes additional attenuation of frequencies (though not a cut-off like the Airy pattern), combined with the Airy pattern attenuation and cut-off. So while ideal sampling allows one to capture all potential information, it is not guaranteed that all information will be captured.

     On the other hand, I just had a discussion with Avani in another thread, where I performed a simple experiment on his wonderful image of Jupiter taken at F/22 with an ASI290, to show that the same amount of detail could be captured at F/11 with this camera. He confirmed that by taking another image at F/11, but said that for his workflow and processing he prefers F/22, as it gives him material that is easier to work with.

     So while the theoretical value is correct, sometimes people will benefit from lower sampling - if seeing is poor - and sometimes people will benefit from higher sampling, simply because their post processing and tools handle such data better. The bottom line is that one should not try to achieve the theoretical sampling value at all costs. It is still a good guideline, but everyone should experiment a bit (if gear allows for it) to find what they see as the best sampling resolution for their conditions - seeing, but also tools and processing workflow.
  13. The Dawes limit and Rayleigh criterion are based on two point sources separated so that the second is located at the first minimum of the Airy pattern. They are usable for visual with relatively closely matched source intensities (like double stars). Applying twice-per-separation sampling to that criterion is really not what the Nyquist theorem is about. There are two problems with it:

     1. It does not take into account that we are discussing imaging - and we have certain algorithms at our disposal to restore information in a blurred image.

     2. It "operates" in the spatial domain, while the Nyquist theorem is about the frequency domain. Point sources transform to a rather different frequency representation even without being diffracted into an Airy disk (one can think of them as delta functions).

     The proper way to address this problem is to look at what the Airy disk pattern does in the frequency domain (MTF) and choose optimum sampling based on that. The derivation is sketched below.
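     For reference, here is the standard frequency-domain derivation (textbook diffraction results, not quoted from the post): a perfect circular aperture has an MTF cut-off at spatial frequency 1/(lambda*F), so Nyquist requires a pixel pitch of at most half the cut-off period. Expressed in Airy disk radii:

     ```latex
     \nu_c = \frac{1}{\lambda F}, \qquad
     p \le \frac{1}{2\nu_c} = \frac{\lambda F}{2}, \qquad
     \frac{r_{\mathrm{Airy}}}{p} \ge \frac{1.22\,\lambda F}{\lambda F / 2} = 2.44
     ```

     which is where the ~2.4 pixels per Airy disk radius figure in the next post comes from.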
  14. In some of my "explorations" into this subject, I came up with a slightly different figure than is usually assumed. Instead of using x3 in the given formula, the value that should be used according to my research is 2.4. So for a camera with 2.9um pixels, the optimum resolution for green light would be F/11.2 (a quick numeric check is sketched below). Here is the original thread for reference:
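     A quick check of that figure (a sketch, assuming ~510nm for green light and the Airy radius formula 1.22 * lambda * F):

     ```python
     def optimal_f_ratio(pixel_um, wavelength_um=0.51, pixels_per_airy_radius=2.4):
         # Airy disk radius is 1.22 * lambda * F; ask for 2.4 pixels across it
         return pixels_per_airy_radius * pixel_um / (1.22 * wavelength_um)

     print(optimal_f_ratio(2.9))   # ~11.2 for 2.9um pixels in green light
     ```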
  15. Yes, unity gain included (it also avoids quantization noise).
  16. Oh, I get it - the shape of the line tells us something about the star's dynamics. I did not think of that (but it is very logical - Ha scopes need tuning to account for Doppler shift when observing features in motion).
  17. Quantization noise is rather simple to understand, but difficult to model, as it does not have any sort of "normal" distribution. A sensor records an integer number of electrons per pixel (quantum nature of light and particles), and the file format used to record the image also stores integers. If you have unity gain - meaning e/ADU of 1 - the number of electrons gets stored correctly. If on the other hand you choose a non-integer conversion factor, you introduce noise that has nothing to do with regular noise sources.

     Here is an example. You record 2e and your conversion factor is set to 1.3. The value written to the file should be 2 x 1.3 = 2.6, but the file supports only integer values (it is not up to the file really, but the ADC on the sensor, which produces 12-bit integer values with this camera), so it records 3 ADU (the closest integer to 2.6; in reality it might round down instead of using "normal" rounding). But since you have 3 ADU and used a conversion factor of 1.3, what is the actual number of electrons you captured? Is it 2.307692... (3/1.3), or was it 2e? Just by using a non-integer gain you introduced 0.3e of noise on a 2e signal.

     On the matter of the gain formula, it is rather simple. ZWO uses a dB scale to denote the ADU conversion factor, so Gain 135 is 13.5dB of gain over the lowest gain setting. So how much is the lowest gain setting then? sqrt(10^1.35) = ~4.73, and that matches the published gain graph.

     Ok, so if unity is 135 (13.5dB, or 1.35 bels), we want gains that are multiples of 2. Why multiples of two? In binary representation there is no loss when using powers of two to multiply / divide (similar to the decimal system, where multiplying or dividing by 10 only moves the decimal point) - so it is guaranteed quantization noise free (for higher gains; for lower gains you still have quantization noise, because nothing is written after the decimal point). The dB system is a logarithmic system, like magnitudes: if you multiply power / amplitude, you add in dBs. A 6dB gain is roughly x2 (check out https://en.wikipedia.org/wiki/Decibel - there is a table of values; look under the amplitude column). Since gain with ZWO is in units of 0.1dB, 6dB is +60 gain. The arithmetic is sketched below.

     Btw, you should not worry if the gain is not strictly a multiple of 2, only close to it: the closer you are to x2, the higher the signal values at which quantization noise is introduced (because of rounding) - and at higher values the SNR will already be high (because the signal is high).
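     The same arithmetic in a few lines (a sketch; unity at gain 135 and the ~4.73 e/ADU base value are taken from the post above):

     ```python
     def e_per_adu(gain_setting, unity_gain=135):
         # ZWO gain is amplitude in 0.1 dB steps: 20 dB (gain 200) = x10
         return 10 ** ((unity_gain - gain_setting) / 200)

     for g in (0, 135, 195, 255):
         print(g, round(e_per_adu(g), 3))
     # 0 -> ~4.73 e/ADU, 135 -> 1.0 (unity), 195 -> 0.5, 255 -> 0.25

     # quantization example from above: 2e with a 1.3 conversion factor
     electrons = 2
     adu = round(electrons * 1.3)     # ADC stores integers -> 3 ADU
     print(adu / 1.3 - electrons)     # ~0.31e error introduced by rounding
     ```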
  18. Good point. Depending on how high resolution the reference spectrum is, one can low-pass filter it to make it "softer", and I can choose the filter cutoff frequency in such a way that it still contains more information than both the raw and processed captured spectra (a sketch of this step is given below).

     Not really getting this - maybe because I'm talking about low resolution, R<1000, where no individual lines are resolved to their actual shape. I'm interested in seeing if I can overcome the seeing-induced limit on spectrum resolution. The grating would be theoretically capable of ~R1200, but seeing would limit that to ~R350. Can I recover information and reach a spectrum resolution of, let's say, ~R600 using processing? And by recovering resolution I mean: close spectral lines that appear as one "dent" at R350 would appear as two separate "dents" at R600, with "identifiable" central wavelengths and relative strengths.
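     The low-pass step might look something like this (a sketch using a Gaussian kernel from scipy; the file name and sigma value are placeholders):

     ```python
     import numpy as np
     from scipy.ndimage import gaussian_filter1d

     # high-resolution reference spectrum, resampled to our wavelength grid
     reference = np.loadtxt("reference_spectrum.txt")   # hypothetical file

     # sigma (in samples) sets the effective cutoff frequency; pick it so the
     # softened reference still resolves more than the captured spectrum
     softened = gaussian_filter1d(reference, sigma=3.0)
     ```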
  19. Software should be able to pick up stars regardless of the gain - it is SNR that matters, not whether the star is visible on the unstretched image. You also don't need to go that high in gain; unity should be ok - there is not much difference in read noise between unity and high gain. But if you want to exploit that last bit of lower read noise, use a gain that gives a power-of-two e/ADU factor to avoid quantization noise. That would be 135 + n*60, so 135, 195, 255, ...
  20. I just love to reinvent the wheel - kind of my specialty. I was not really after "improving the look" of the curve, but rather interested in having a go at actually improving the resolution of the spectrum. I think it would be interesting to see if one can do that. The math supports it, and techniques like RL deconvolution have properties such as preserving flux, so we are not talking about purely cosmetic alterations here. Personally, I think wavelets used for multi-frequency analysis might be a better choice than RL deconvolution, but I have no idea about the mathematical properties of such a transform (other than that it can boost attenuated high frequency components).

     I have a simple idea of how to test it. The good thing about a full aperture grating is that the dominant aberration will be the seeing PSF (well, actually the Airy PSF convolved with a seeing + tracking error Gaussian), which for long exposures can be well approximated with a Gaussian shape - and the Gaussian and its Fourier transform are well understood, so we know which frequencies need to be boosted. The test would go like this: record a spectrum and calibrate it using the standard calibration method without any alterations, and compare it to a high resolution reference spectrum of that star using some metric - like RMS error or similar. Then process that spectrum in a certain way and compare it again to the high resolution reference spectrum using the same metric, to see if we lowered the error and/or introduced some kind of unwanted artifacts. A sketch of this is given below.

     I also have a couple more ideas to test out. Like stacking spectrum images vs stacking extracted spectral data (in 1D) - I suspect the latter will help with artifacts introduced when the spectrum is not sampled aligned to the sensor pixel matrix (but at a certain angle); some dithering between exposures would help here. Then there is the matter of center line vs offset spectrum extraction: in Gaussian PSF blurred data, the center line is the most blurred, while an offset line will have lower SNR due to less signal getting there.
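     For the processing step, here is a minimal sketch with Richardson-Lucy written out by hand for a 1D spectrum (a Gaussian PSF stands in for the seeing blur; the spectrum and reference arrays are placeholders):

     ```python
     import numpy as np

     def gaussian_psf(fwhm_px, size=51):
         # 1D Gaussian PSF normalized to unit sum
         sigma = fwhm_px / 2.355
         x = np.arange(size) - size // 2
         psf = np.exp(-0.5 * (x / sigma) ** 2)
         return psf / psf.sum()

     def richardson_lucy_1d(observed, psf, iterations=30):
         # textbook Richardson-Lucy iteration; flux preserving by construction
         estimate = np.full_like(observed, observed.mean(), dtype=np.float64)
         psf_mirror = psf[::-1]
         for _ in range(iterations):
             blurred = np.convolve(estimate, psf, mode="same")
             ratio = observed / np.maximum(blurred, 1e-12)
             estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
         return estimate

     def rms_error(a, b):
         return np.sqrt(np.mean((a - b) ** 2))

     # spectrum / reference are 1D arrays on the same wavelength grid:
     # print(rms_error(spectrum, reference))
     # print(rms_error(richardson_lucy_1d(spectrum, gaussian_psf(4.0)), reference))
     ```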
  21. @Merlin66 Since the above clearly shows that the spectrum will be seeing limited, have you ever tried some sort of frequency restoration method to "sharpen" a seeing limited spectrum? Deconvolution, or maybe simple wavelet processing? I don't see a reason why it would not work, if we assume a Gaussian PSF for long exposure subs and stack multiple frames to get good SNR in the final result.
  22. Oh, a new version of that spreadsheet - thanks for that. I was using V2.1 to do calculations for different configurations with the SA200; this one has an objective grating section as well.

     According to the spreadsheet, with 6 lines/mm and 200mm of aperture I'll be seeing limited in resolution. A total of 1200 lines would give me a theoretical resolution of R1200, but star size at 2" FWHM limits me to R384 if I use native F/8. Dispersion would be 3.9 A/pixel, which is quite enough to record 15.6A resolution (even binned to boost SNR). I checked what difference a reducer makes - not sure it would make any, since with an objective grating the star size will be smaller, but so will the sampling resolution. Indeed, not much difference, so there is no point in using a focal reducer except for boosting the SNR of the spectrum, at a small cost in spectrum resolution (R343 with the reducer). A simplified version of these calculations is sketched below.

     I have found a paper that describes exactly the approach I came up with - printing with a laser printer on overhead projector clear film. The author also concluded that 6 lines per mm is achievable with 600dpi laser printers. Here is the reference: http://aapt.scitation.org/doi/abs/10.1119/1.2768688 and also from the abstract: "A standard laser printer can print black lines (separated by a white line) at 60 black lines/cm (about 150 lines/in), which is a small enough spacing to produce a crude diffraction grating [see Fig. 1(a)] that is sufficient for the physics inquiry activities described in this paper" (not sure what the paper is about; I just read the abstract and found confirmation that a printed grating will work).

     So it looks like this could indeed be a viable solution for an ultra cheap way into spectroscopy. I just ran the numbers for a smaller scope, something like an 80mm F/6, and the results are still good - still seeing limited, R227 at 2" seeing, again using the same 6 lines per mm grating. Probably the only drawback of this method is efficiency: the grating will not be blazed, so light will be spread equally on both sides of the source.
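     The gist of the seeing and sampling limits can be sketched in a few lines (a simplified model, not TransSpec's exact formulas, so the numbers come out slightly more optimistic than the spreadsheet's):

     ```python
     def grating_resolution(aperture_mm, f_ratio, pixel_um, seeing_arcsec,
                            lines_per_mm, wavelength_A=6563):
         # R is the smallest of: theoretical limit (total line count),
         # seeing limit (star FWHM projected on the spectrum), and
         # Nyquist sampling limit (2 pixels per resolution element)
         r_lines = lines_per_mm * aperture_mm
         scale = 206.265 * pixel_um / (aperture_mm * f_ratio)   # arcsec/pixel
         disp = (scale / 206265) / (lines_per_mm * 1e-6) * 10   # angstrom/pixel
         fwhm_px = seeing_arcsec / scale
         r_seeing = wavelength_A / (fwhm_px * disp)
         r_sampling = wavelength_A / (2 * disp)
         return min(r_lines, r_seeing, r_sampling)

     # 200mm F/8 + 3.8um pixels, 2" seeing, 6 lines/mm: ~R400 (spreadsheet: R384)
     print(grating_resolution(200, 8, 3.8, 2.0, 6))
     # 80mm F/6, same grating: sampling limited, ~R250 (spreadsheet: R227)
     print(grating_resolution(80, 6, 3.8, 2.0, 6))
     ```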
  23. Never ending cloud cover is to blame ... So here I was, again thinking about different ways to utilize the Star Analyser and the possibility of adding beam collimation and a slit to the mix, when it struck me - why not a full aperture grating? The SA can be used in front of a lens - that way it operates in a collimated beam, but the aperture is really small, 1.25" or thereabouts. Could one easily build a diffraction grating for, let's say, a 200mm aperture, and what resolution could one hope to achieve?

     Fiddling around with the SA resolution calculation spreadsheet, I came up with roughly 17A resolution - mostly due to coma. The setup would be SA200 + 50mm spacing with the large ASI1600 sensor on an F/8 8" RC. This translates into ~R300 around the H alpha line. So with the SA200, values around R100-R300 are possible. What can we achieve with a full aperture DIY cheap solution then?

     The idea is to create a full aperture grating with a really low number of lines per mm, and here is what I came up with: overhead projector transparent sheets with the grating laser printed on them. Can't do cheaper than that. Let's do some math (also sketched in code below). Laser printers are capable of 600dpi, so we need to figure out what number of lines per mm can be printed with that. It translates into 0.042333... mm per printed point. We can't assume that a line a single point wide will print cleanly, but we can do lines, let's say, 4 points wide. That gets us 6 lines per mm - does not sound like much. But 200mm at 6 lines per mm gives us 1200 lines - about the same resolution as the SA200 in a 6mm converging beam, and that is higher resolution than the SA200 will actually achieve, due to coma and seeing. So resolution wise it could work.

     But what about diffraction angles and spectrum size on the sensor? The simple grating formula gives an angle of ~0.0054 radians for 900nm (the maximum I would record), which translates into ~1114 arcsec. At a resolution of 0.5 arcsec/pixel, the spectrum will be spread over about half of the sensor width (the sensor is 4600px wide), but it will fit nicely. So if I'm not missing anything crucial, this indeed might work for my setup?
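     The printing and diffraction arithmetic, spelled out (a sketch of the numbers quoted above; the post rounds to exactly 6 lines per mm, hence the slightly different angle):

     ```python
     import math

     dpi = 600
     dot_mm = 25.4 / dpi                 # ~0.0423 mm per printed dot
     lines_per_mm = 1 / (4 * dot_mm)     # 4-dot line period -> ~5.9 l/mm
     print(lines_per_mm)

     # first-order angle for 900nm and span on the sensor at 0.5"/pixel
     d_nm = 1e6 / lines_per_mm           # line spacing in nm
     theta_rad = math.asin(900 / d_nm)   # ~0.0053 rad (~0.0054 at 6 l/mm)
     arcsec = theta_rad * 206265         # ~1100 arcsec
     print(arcsec / 0.5)                 # ~2200 px on a 4600px-wide sensor
     ```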
  24. It can be done with flat darks only - no need for bias files. To be sure, do a simple experiment. Take a set of flat darks (basically darks at short exposure - the same exposure you use for your flats). Then power off everything and disconnect the camera. Power on again, use the same settings, and take another set of flat darks. Stack each group using simple average stacking to get two "masters". Subtract the second master from the first and examine the result: it should average 0 and contain only random noise, with no pattern present. If that is so, you can simply use the following calibration:

     master dark = avg(darks)
     master flat dark = avg(flat darks)
     master flat = avg(flats - master flat dark)
     calibrated light = (light - master dark) / master flat

     Note that lights and darks need to be taken at the same temperature and settings, and flats and flat darks at their own temperature, exposure and settings (whichever suit you to get a good flat field).

     You can, on the other hand, check whether the bias is behaving properly as follows. Take two sets of bias subs and do the same as above for the flat darks (two stacks, subtract and examine). If you get an average of 0 and no pattern, that is good. To be sure the bias works ok, you need to do the following as well: take one set of darks at a certain exposure (let's say 10s) and one set of darks at double that exposure (so 20s - same temperature, same settings). Prepare the first master as avg(darks from set1) - bias, and the second master as avg(darks from set2) - bias. Then create, using pixel math, the image master2 - master1*2 and examine it. It should also average 0 with no patterns visible. If you get that result, the bias functions properly and you can use it (although for the standard calibration above it is not necessary). A scripted version of both the calibration and the bias test is sketched below.
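     Here is the same calibration and bias test as a short script (a sketch assuming FITS subs, with astropy for loading; the file layout is hypothetical):

     ```python
     import numpy as np
     from astropy.io import fits
     from glob import glob

     def avg(pattern):
         # straight average stack of everything matching the pattern
         return np.mean([fits.getdata(f).astype(np.float64)
                         for f in sorted(glob(pattern))], axis=0)

     # calibration without bias files
     master_dark = avg("darks/*.fits")
     master_flat = avg("flats/*.fits") - avg("flat_darks/*.fits")
     master_flat /= np.mean(master_flat)            # normalized flat

     light = fits.getdata("light_0001.fits").astype(np.float64)
     calibrated = (light - master_dark) / master_flat

     # bias linearity test: darks at 10s and 20s, bias subtracted from both;
     # master2 - 2 * master1 should average ~0 with no pattern
     bias = avg("bias/*.fits")
     m1 = avg("darks_10s/*.fits") - bias
     m2 = avg("darks_20s/*.fits") - bias
     print((m2 - 2 * m1).mean())
     ```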
  25. I would not go with Gain 0 - too much quantization noise. Use Gain 139 preferably, or Gain 79 if you want more dynamic range. Scale your exposure length so you don't get too much saturation. Even if you saturate bright stars, there is a way around it: just shoot some short exposures to recover signal in the brightest areas.

     I don't think dark current shot noise is really a problem. Look at the dark current graph for the ASI1600: at -20C the dark current is 3.72e over a 10 minute exposure, so the associated shot noise is 1.92e - on the order of the read noise. That is low dark current noise for a 10 minute sub. From the graph we can see that the doubling temperature is around 7C for temperatures above 0C, and even a bit larger for lower temperatures. So even if the amp glow region is +7C warmer (and I would be surprised if there were such a temperature differential across the surface of the chip without causing mayhem), the dark current in the amp glow area will still be less than x2 the dark current elsewhere, and the associated noise less than x1.41 - again for a 10 minute sub, not something to worry too much about. For the amp glow area to have x4 higher dark current - and thus x2 more noise than elsewhere - that part of the sensor would need to be almost 15C hotter than the rest (the arithmetic is sketched below)!

     If the peak temperature in the amp glow area depends not on the surrounding temperature but on the set point cooling only, then calibration should work each time - but yes, one would have to prepare a master dark for each exposure length.
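     The dark current arithmetic, spelled out (numbers taken from the ZWO graph referenced above):

     ```python
     import math

     dark_e = 3.72                     # e/pixel at -20C over a 10 minute sub
     print(math.sqrt(dark_e))          # ~1.93e shot noise, order of read noise

     # dark current doubles roughly every ~7C; shot noise goes as sqrt:
     for delta_c in (0, 7, 14):
         noise = math.sqrt(dark_e * 2 ** (delta_c / 7))
         print(delta_c, round(noise, 2))   # +7C -> x1.41 noise, +14C -> x2
     ```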