Everything posted by vlaiv

  1. It should be fairly easy to do. Have your subs calibrated and saved in a single folder in 32-bit FITS format. Download ImageJ and put it somewhere on your file system (it does not install; you can run it from that folder). I can either provide you with the source code or the compiled plugin. The latter is simpler, as you just need to copy it into the plugins folder inside the ImageJ folder (it is already there). You need to restart ImageJ, if it was open, for it to recognize the new plugin. After that it is only a matter of:
     1. File / Import / Image Sequence. Here you select the first image in your folder and it should pick up all the others (there are options to "filter" which subs you want to load). This will open a "stack" in ImageJ.
     2. Open Plugins / Sift Bin V2 and select the following settings. It should create another stack that contains x4 as many subs, each with half the height and width.
     3. Save As / Image Sequence will let you save those subs. Choose FITS format, give it a name and a number of digits (they will be labeled, for example, name001, name002, ...).
     That is it. Plugin is attached: Sift_Bin_V2.class
  2. I can send you the source of an ImageJ plugin that will do it, if you want? Not sure if you are familiar with that software package - it is free and written in Java, so it works on various operating systems. At 1.16"/px, bin x2 will produce a resolution of 2.32"/px. That is just a slightly coarser sampling rate than the ~1.95"/px needed for 3.14" FWHM stars - so not much loss, but there will be a small increase in FWHM due to this. The rest is down to pixel blur. What resampling method are you using when registering your subs? I think there is an option in PI to select Lanczos resampling (it may even be the default) - you should use that one for sub registration after binning.
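The numbers above follow from the common rule of thumb that optimal sampling is roughly FWHM / 1.6 - an assumption on my part, not necessarily the exact criterion used in the post - but a quick check reproduces the quoted figures:

```python
# Quick check of the sampling-rate numbers above, assuming the common
# "optimal sampling ~ FWHM / 1.6" rule of thumb (an assumption here).

native = 1.16          # arcsec/px, native sampling rate
binned = native * 2    # bin x2 doubles the pixel scale -> 2.32"/px
fwhm = 3.14            # arcsec, measured star FWHM

optimal = fwhm / 1.6   # ~1.96 arcsec/px, close to the ~1.95"/px quoted

print(f"bin x2 scale : {binned:.2f} arcsec/px")
print(f"optimal scale: {optimal:.2f} arcsec/px")
```

Since 2.32"/px is only slightly coarser than ~1.96"/px, the loss from binning x2 is small, as the post says.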
  3. What was the original sampling rate? Btw, can you try a "split" bin to see if you retain some of the sharpness and get the same SNR improvement? I'm not sure there is an option in PI to do a split bin, but I've seen a script somewhere that does it. It is not actually designed for that, but for splitting a bayer matrix into 4 color subs - R, 2xG and B. If you run it on a mono sub, you end up with 4 subs at x2 coarser sampling rate but the same pixel size - you increase the number of stacked subs by a factor of x4, and that leads to an overall SNR improvement by a factor of x2, same as binning.
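The "split" bin described above can be sketched in a few lines of numpy (my own illustration, not the PI script). Each mono sub is split into four subs at x2 coarser sampling, and averaging those four gives exactly the same result as a regular 2x2 average bin - hence the same x2 SNR gain:

```python
import numpy as np

def split_bin(sub):
    """Split one mono sub into 4 subs at x2 coarser sampling rate,
    keeping the original pixel size (no summing or averaging)."""
    return [sub[0::2, 0::2], sub[0::2, 1::2],
            sub[1::2, 0::2], sub[1::2, 1::2]]

rng = np.random.default_rng(0)
sub = rng.normal(100, 10, size=(512, 512))  # synthetic mono sub

splits = split_bin(sub)                     # 4 subs, each 256x256

# Averaging the 4 split subs equals a regular 2x2 average bin, so the
# SNR gain is identical - but each split sub keeps single-pixel
# sharpness until registration and stacking.
binned = sub.reshape(256, 2, 256, 2).mean(axis=(1, 3))
assert np.allclose(np.mean(splits, axis=0), binned)
```

The advantage over straight binning is that the four split subs are registered and stacked as separate images, so the 2x2 pixel blur of regular binning is avoided.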
  4. I think the AZ-GTI is more versatile, as it can:
     1. Operate in alt-az mode for observing
     2. Guide in both axes
     It really depends on the focal lengths you plan to use for AP. I don't know which one is more precise in tracking and guiding, though. If you want a really wide field - maybe the az-eq avant + tracking motor is enough.
  5. I don't think the above refers to the usefulness of on-chip binning with CMOS in terms of SNR gain. It refers to the fact that doing it on chip provides only one benefit - smaller files. There are several benefits to doing it in software - see my post above. Both on-chip and software binning provide the same SNR gain and are equal if "standard" binning is used.
  6. Indeed, there is no benefit in doing it on chip, except for file size / download time. Doing it in software leaves many more possibilities for data manipulation - like hot pixel exclusion, or optimized binning based on SNR (examining the master dark for statistics will tell you how much noise each pixel contains - there are more noisy and less noisy pixels, and you can bin adaptively). You can also do a "split" bin instead of a regular one (a better option for preserving sharpness) and, of course, fractional binning.
  7. That cable does look like something that will do the job - it's made of just the right materials to give it flexibility.
  8. For that you want Cat-5e cable, UTP variety, stranded. Cat-6 is usually shielded and more tightly wound, which makes it stiffer. With a really flexible cable, you might only reach 100 Mbps speeds, even if your network gear is gigabit. This is because all the "tech" used to enable high speeds makes the cable stiffer - more tightly wound pairs, shielding between twisted pairs and full shielding (which prevent interference / crosstalk and such, thus enabling higher speeds).
  9. I was just about to ask that, but yes, good point: with an angled beam, a flat taken without the SA will correct only zero order dust shadows - maybe handy for background subtraction, although the background should be uniform, so it's just an offset and not a pattern.
  10. I can offer alternative explanations:
     1. Over time one tends to gather more data, so both SNR and processing skills increase. The microlensing artifact is in fact light signal, so it will be more obvious if you have better SNR data or have improved your processing to push the data further. I don't really think it worsens on its own - that would mean the composition of the coatings is changing, or that the coatings are getting thinner or something.
     2. If they are in fact microlens artifacts, then resolution should have no impact, or at least the opposite effect to what you are describing. A star will be tighter at lower resolution, so its light will be more concentrated, and that might lead to stronger reflection / diffraction issues, hence a stronger microlens artifact.
     There is another explanation that can account for the difference between two scopes - the speed of the light beam. I think there is a strong dependence on both the wavelength of light and the speed of the light beam for a particular setup. One setup with, for example, an F/5 beam could have issues in Ha and none in OIII, while another setup with an F/7 beam could be quite the opposite - Ha without issues and OIII with issues.
  11. I'm somewhat worried by the pixel-to-pixel differences of my sensor (ASI1600), as I've noticed a particular pattern - a checkerboard pattern is present for some reason. QE does not vary greatly (1-2%) and with regular imaging flat calibration works well, but like you said - here it can be a bit of a problem. I'll try different approaches to see how much difference I get in the spectrum. Here is a "zoomed in" section of my regular flat for this sensor (note that it is in fact a mono sensor, and what you are seeing is not a bayer pattern): The pixel-to-pixel variation is real and not noise - I used x256 flat and flat dark subs to create it; each sub has an SNR of about 50, and the stack has an SNR of about 850-900. The variations, on the other hand, are a few percent (about an order of magnitude larger than any residual noise). Here is a "profile" of the flat (the flat has been scaled to relative QE): I think there are at least a couple of "wavelengths" present here - one very fine, at the pixel level, and one maybe 4-5 pixels in wavelength. In any case, I think it will result in a strange pattern in the spectrum even if I oversample (and I will, by a factor of at least x2-x3, in order to get good dispersion and minimize seeing effects; I can bin the data later to recover some SNR). I will certainly take flats, and I'll make sure to place the zero order star at roughly the same position for each measurement (SGP has a crosshair in preview, so I can use that). After that I can try both with and without flat calibration to see which one works better.
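The stack SNR quoted above follows from the usual square-root-of-N rule for averaging independent subs. With the round numbers from the post:

```python
import math

# SNR of a stack grows as sqrt(N) when averaging N independent subs.
# Using the round figures from the post (x256 subs, SNR ~50 per sub).
n_subs = 256
snr_per_sub = 50

stack_snr = snr_per_sub * math.sqrt(n_subs)
print(stack_snr)  # 800 - in the same ballpark as the ~850-900 quoted
```

The quoted 850-900 is slightly higher, consistent with the per-sub SNR being "about 50 or so" rather than exactly 50.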
  12. While we are on this topic, I hope no one minds me asking another question related to spectrum calibration. How do I do flat calibration? The same as with any imaging - just place a flat panel on the scope with the same gear arrangement (focus included) and take subs? And divide the spectrum image, after dark calibration, as normal? I'm asking because there is the issue of vignetting - the spectrum can "overlap" parts of the "zero order" vignetting, and flats in this case might produce wrong results? Or maybe it does not matter at all, and we flat calibrate only to eliminate pixel-to-pixel QE variations and dust on the sensor / any filters after the SA? I assume it might not matter, because any vignetting will be incorporated in the system response, but that means the recording needs to be done with the zero order image at the exact same spot each time (for both the reference star and the one we are measuring)? It also means the spectrum orientation should remain the same with regard to the sensor (but not necessarily to the star field, as we can rotate the whole assembly to get the spectrum clear of background artifacts) - that is probably better, as we want the spectrum horizontal anyway, I guess. Has anyone tried the background removal procedure with a "clear exposure"? I understand it can be tricky, as the SA needs to be removed from the optical train, which can lead to a slight focus change, and one needs to do a linear fit on the zero order stars to match intensity. I guess one can monitor FWHM / HFR to get roughly the same defocus of the zero order image.
  13. Yes, mono is the better option. You will still need to do camera response calibration, but it will be smoother, without sudden dips.
  14. I did not say that - I mentioned that it is in fact camera response (the transition from green to red), and that there is another one like it, but less pronounced - the transition from blue to green.
  15. Not sure why you are saying it is misleading? You are in fact correct that S/N per surface area is not changed, but the point of binning is to trade "pixel surface area" for S/N improvement - one gets a coarser sampling rate by doing this. I did point out that I rescaled the unbinned shot to make creating the composite easier and to make the real improvement in S/N easier to see. I also made it clear how I did it - by in fact "throwing away" 3/4 of the data. In doing so I downsampled without affecting the "per pixel" SNR of the original image; I did not introduce pixel-to-pixel correlation, nor did I affect the sharpness of the image, apart from any removal of high frequency signal due to using coarser sampling (which in fact does not happen in this image, since the image is oversampled as is).
  16. Since you are using an OSC camera, you will have two major "dips" in camera response - the R-G transition and the G-B transition. The one you marked is the R-G transition. Here is a "generic" response curve for that sensor: So the first dip, B-G, is at about 480nm, and the second is at about 580nm. In your spectrum you can also see the first dip, but it is not as pronounced: In any case, the way you create the instrument response (both sensor and optics, if the coatings don't have a uniform frequency response, and often they don't) is to take your spectrum and divide it by the reference spectrum of that star. This means that you should record the spectrum of a known star - one that you have a reference spectrum of. You will also need to "smooth" the result, or "smooth" the reference spectrum if it is more detailed than yours; otherwise, features of the reference spectrum not recorded in your spectrum will become part of the instrument response (but they are not). Here is a video tutorial for RSpec that describes the process (RSpec is used to make the video, but you can figure out the important steps and reproduce them in the software of your choice): https://www.rspec-astro.com/videos/InstrumentResponse/InstrumentResponse.mp4
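The divide-and-smooth step can be sketched in numpy (my own illustration using synthetic 1D spectra; the function name and shapes are made up, not RSpec's):

```python
import numpy as np

def instrument_response(observed, reference, kernel=25):
    """Instrument response = observed / reference spectrum, smoothed so
    that reference features unresolved in the observation don't leak
    into the response."""
    raw = observed / reference
    box = np.ones(kernel) / kernel          # simple boxcar smoothing
    return np.convolve(raw, box, mode="same")

# Synthetic spectra just to exercise the function
wl = np.linspace(400, 700, 600)                    # wavelength, nm
reference = 1.0 + 0.3 * np.sin(wl / 20)            # "known" star spectrum
response_true = np.exp(-((wl - 550) / 120) ** 2)   # bell-shaped response
observed = reference * response_true               # what the camera "sees"

resp = instrument_response(observed, reference)
# Away from the edges, the recovered response tracks the true one
assert np.allclose(resp[50:-50], response_true[50:-50], atol=0.02)
```

In practice the observed spectrum would come from a recorded known star and the reference from a spectral library; the smoothing kernel is a free parameter you tune to your actual resolution.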
  17. Actually you can, if you are not careful. The OAG prism has straight edges, and if you push the prism too far into the light path it casts a shadow on the sensor - that part can be corrected with flats - but it will also cause a single diffraction spike perpendicular to the OAG edge causing it (not the X shape you get in reflectors with spider supports). Here is an example of it: Or maybe this one shows it better: or this one. I was not careful enough, so the OAG prism shadowed a bit of the sensor, and only stars in that part of the frame were affected (and only bright ones). Luckily this is easily sorted out by moving the prism out of the light path that hits the main sensor.
  18. Yes, after calibration and before stacking. I don't use PI, so can't tell if there is option for batch, but I don't see why it would not work.
  19. No. Don't think in terms of increasing signal and increasing noise. Think in terms of the ratio of the two. Depending on the type of stacking / binning you use, the signal can increase or stay the same. If you are adding things, the signal will increase. If you are averaging, it will stay the same. Regardless of which method you choose, the SNR improvement is the same. Read noise is not the only source of noise, and if you properly expose your subs it should have the smallest impact (or rather, your sub duration should be such that read noise is no longer the dominant noise source). Therefore binning improves SNR regardless of whether effective read noise is increased in the case of software binning. In fact, you should not look at it that way: with software binning the read noise "remains the same", while hardware binning slightly reduces it. Both, however, improve signal to noise ratio. Depending on what software you use to process your data, software binning (true binning) can have a different "name". For example, in PixInsight software binning is called "Integer Resample". You should use Integer Resample with the average method for binning your data. Here is another "synthetic" example: This montage is composed of 4 panels - top right is the original panel, gaussian noise with standard deviation 1. The other panels are produced by:
     1. Binning 2x2 - average method
     2. Simple bicubic interpolation
     3. Cubic O-MOMS interpolation
     Here are the results of measuring the noise in each sub: First is the reference sub, with a StdDev of 1 (noise with value 1). Next is binning 2x2 - it results in a very predictable x2 improvement (noise is halved, to 0.499...). Simple bicubic interpolation reduces noise even further, to 0.4, or by a factor of about x2.5, but in doing so it creates correlation between pixels and slightly blurs the image. Cubic O-MOMS is last; it has the smallest SNR improvement, a factor of x1.25, but should be the "sharpest" method.
In this example I did not examine the impact on sharpness of the image - just on the noise. However, you can clearly see a couple of important points:
1. Binning produces an exact reduction in noise by a factor of two (if the signal remains the same - average method)
2. A quality resampling method like Cubic O-MOMS gives less SNR improvement (in this case a factor of x1.25)
3. Every downsampling method improves SNR to some extent, some even more than binning - at the expense of resolution (not seen in this example, but that is what happens).
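The first measurement above is easy to reproduce: average-bin pure gaussian noise 2x2 and the standard deviation halves. A synthetic check mirroring the montage numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.standard_normal((1024, 1024))   # gaussian noise, StdDev ~ 1

# 2x2 bin with the average method (what PixInsight's Integer Resample
# does with "average" selected): average each 2x2 block into one pixel.
binned = noise.reshape(512, 2, 512, 2).mean(axis=(1, 3))

print(f"original StdDev: {noise.std():.3f}")   # ~1.000
print(f"binned StdDev  : {binned.std():.3f}")  # ~0.500, i.e. x2 SNR gain
```

Averaging 4 independent samples divides the noise standard deviation by sqrt(4) = 2, which is exactly the 1 -> 0.499 measurement reported in the post.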
  20. Ok, here is about as simple an explanation of binning as it can get, and the related CMOS vs CCD thing.
     - Binning increases SNR in the same way stacking does (or rather very similarly; there are small differences not important to this discussion). When you stack 4 images, you are adding / averaging pixel values between those images. Binning is adding / averaging 4 pixel values in a 2x2 matrix - in principle the same thing. So binning always increases the SNR of recorded data predictably: bin 2x2 and you increase SNR by a factor of 2 (same as stacking 4 subs). Bin 3x3 and you increase SNR by a factor of 3 (same as stacking 9 subs, as you are adding / averaging 9 pixels).
     - The difference between hardware binning (CCD) and software binning (CMOS) is what recorded data is being binned. With CMOS you are binning fully read-out data, while with CCD you are binning electrons before they are read out. There is a subtle difference between the two: CMOS binning happens after each value has had read noise added to it, while CCD binning is done prior to analog-to-digital conversion, and read noise is added to the binned value. The consequence is that a hardware binned pixel (a group of 2x2 pixels) has the regular level of read noise - the same as a single pixel - while a software binned pixel has twice the read noise of a single pixel. In other words, if a CCD camera has 5e read noise, when you bin 2x2 and get a sub with fewer pixels (and higher SNR), it will still have 5e read noise. On the other hand, if a CMOS sensor has, say, 1.7e read noise (like the ASI1600), after you bin it, it will behave as a camera with larger pixels but 3.4e read noise per pixel. That is the only difference between software and hardware binning (if we bin in the "regular" way; software binning can have a certain advantage related to pixel blur, but that is an "advanced" topic).
     - As for resampling: binning is a sort of resampling, but it is not the only way to resample an image.
There are a couple of things that resampling, or rather downsampling, does to your data: it creates fewer pixels covering the same FOV - hence it reduces the sampling rate; it changes the SNR of your data; and it has some effect on pixel blur and possibly introduces correlation between pixels (the last two have a small impact on the sharpness of the image). Different resampling methods give different results in all of these, except that they produce the same sampling rate. Binning is predictable - it creates a larger pixel, so the SNR increase is known, and it adds pixel blur. It does not add correlation between pixel values. Other forms of resampling can add less pixel blur, so the image will be slightly sharper, but the SNR increase is smaller (sometimes larger, at the expense of noise statistics, because of correlation) and there will be correlation between pixels (that has to do with the noise statistics of the image). I regularly bin my data in software when I record with my 8" RC (1624mm FL) and ASI1600. The native sampling rate is around 0.5"/px, and that is of course way too fine. Sometimes I bin 2x2 and sometimes 3x3, depending on the conditions of the session. Here is an example of a binned vs unbinned image that clearly shows the improvement in SNR: This is a single 60s uncalibrated sub of M51 taken with the above gear. I created a copy of that sub; the first copy I software binned 2x2, while the other I "downsampled" in a particular way - one that really does nothing to the image except change the sampling rate: no correlation between pixels, no pixel blur and no SNR change (I simply took every other pixel in X and Y and formed an image out of those - no addition, no interpolation, just x2 lower sampling rate to match the scale of the binned image). Then I pasted part of the second copy over the first and did a basic linear stretch to show the result in a way that can easily be compared. One simply cannot fail to notice the improvement in SNR (it is a x2 improvement).
Btw, the above image is scaled down 50%, but here is the same image at 1:1 zoom: (notice that scaling down for display, which also uses downsampling, improves SNR a bit, so the scaled down version above does look smoother than this one at 1:1). To really compare results, though, we would need to measure the resulting noise of the different approaches to see how each behaves. I can do that for you as well if you want: make a composite image of 4 different approaches (bin x2, the above splitting of pixels, simple resample, and an advanced resample technique) and do some noise measurements, as well as star FWHM, so you can see the impact on resolution.
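The read-noise difference in the explanation above reduces to simple arithmetic: software binning combines four already-read-out values, and independent noise adds in quadrature. Using the 5e / 1.7e figures from the post:

```python
import math

ccd_read_noise = 5.0    # e-, hardware bin: one read per binned pixel
cmos_read_noise = 1.7   # e-, per pixel (e.g. ASI1600)

# Software bin 2x2 sums 4 independent reads, so their read noise adds
# in quadrature: sqrt(4 reads) x per-pixel read noise.
soft_binned = math.sqrt(4) * cmos_read_noise   # = 2 x 1.7 = 3.4 e-

print(f"hardware-binned read noise: {ccd_read_noise} e-")
print(f"software-binned read noise: {soft_binned:.1f} e-")
```

So the software-binned CMOS "pixel" carries 3.4e of read noise versus the hardware-binned CCD's unchanged 5e - and, as the post notes, with properly exposed subs this difference is swamped by other noise sources either way.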
  21. It is interesting - not sure what the light intensity output is, but it does get rather expensive when the size goes up.
  22. Don't know - I'm getting error on that ebay page ...
  23. I think the simplest DIY flat box consists of the following:
     1. Acrylic light diffuser panel - do a google search, these should be fairly cheap and you can get them cut to size
     2. LED strip
     3. Some sort of box to put it in
     4. Optionally, a PWM dimmer for the LED strip
  24. I believe that was indeed the case. I took that quite a long time ago, and only realized some time after that that one should focus on the spectrum (if I recall correctly), so there is a good chance I was not aware of it when I took it.
  25. No specific project in mind. Actually, there is - more of a challenge than a project. Most things to date were just theoretical, and the recent discussion fueled my interest in actually going out and capturing some spectra. I've not used my SA200 much so far. One failed capture of a spectrum (at the time I had no idea how to use it properly) - very poorly focused; I'll attach a profile of the capture without any processing, just to show that almost no features are visible. I did observe spectra of stars on a couple of occasions. I want to try star classification for the purpose of determining approximate distances to stars (photometry and stellar class), but for the time being I'll settle for practicing and capturing some decent spectra. Given the time it takes to set everything up, I want to make the most of the session, instead of capturing just a single spectrum (and a reference for calibration) - that means a list of techniques to try out, and a list of stars. Next will obviously be to see what sort of resolution I can realistically reach using different approaches - I might even have a go with a different scope: an F/10 achromat stopped down a bit. It will have less light grasp, and focusing will be even trickier due to the different focus positions across the spectrum, but multiple focus positions should take care of that as well. It does offer very good theoretical resolution - over R600 (if we exclude the effects of field curvature by means of stepped focusing). All of this is really the planning stage for some time under the stars (which has been scarce lately, and not only due to poor weather; most of the time I can't be bothered to set everything up for some reason, so hopefully this fun activity will spark interest again). Here is the "failed" spectrum. I don't even remember the gear I used to capture it, but I do know it was a color camera.
I did a max stack of the channels to produce a mono image, but the transitions between colors can be clearly seen in the outline: Can't really say I'm seeing any features in it.