Everything posted by vlaiv

  1. If you can see it in the eyepiece - and it looks that way from your images - it must be at the field lens near the field stop, as it is in focus when you look at it. You can probably remove the barrel from the eyepiece (most of my eyepieces allow the barrel to be unscrewed) so you can easily access the field lens and examine it. Maybe all you need to do is wipe it clean - but be careful when wiping optical surfaces: don't use any force. Maybe use a blower bulb first to remove anything that is loose, then use a cotton q-tip with alcohol or other lens cleaning liquid and try to very gently remove that residue (maybe it is just lint, or maybe it is dried residue on the field lens).
  2. In the case of RASA8 and ASI2600 - it does not. The telescope itself has a spot diagram that matches better with ~6.7µm pixel size than with 3.76µm. With the RASA8 you can't get better resolution due to an optical compromise made by the designers - they wanted a fast, wide field system corrected over a 30mm diameter circle, and they ended up with a spot diagram that is quite large for a telescope of 8" aperture. RASA8 is not diffraction limited optics. The RMS spot size upper limit over the field of the telescope is about 4.5µm. That is RMS. The relationship between RMS (sigma) and FWHM is x2.355, so the FWHM of RASA8 without any seeing influence or tracking errors is ~10.6µm. The other two options that I listed will certainly be capable of better resolution. 90mm refractors will easily provide 2"/px in regular conditions and the 130mm one will go to 1.5"/px. RASA8, from the above, is capable of about 3.4"/px at best - but again, that is OK for the telescope's intended use. It is a wide field, low resolution instrument that is optically fast.
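The spot-size arithmetic above can be checked in a few lines (a minimal sketch using the figures quoted in this post, not official Celestron specs; variable names are mine):

```python
rms_spot_um = 4.5        # quoted RMS spot size upper limit over the RASA8 field
focal_length_mm = 400    # RASA8: 203 mm aperture at F/2

fwhm_um = rms_spot_um * 2.355                      # sigma -> FWHM
# linear size in the focal plane to angle on sky: 206.265 "/mm-of-FL per µm
fwhm_arcsec = fwhm_um * 206.265 / focal_length_mm
sampling = fwhm_arcsec / 1.6                       # FWHM / 1.6 sampling rule

print(round(fwhm_um, 1))      # ~10.6 µm
print(round(sampling, 2))     # ~3.42 "/px - the "about 3.4"/px" figure above
```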
  3. I have to say that RASA8 is simply the most cost effective solution to a particular problem. Despite the fact that it is very fast optically at F/2 - it is not the only solution at that speed (in terms of time to target SNR), nor the fastest one for that matter - but it is by far the cheapest. In this case the "problem" can be stated as: "Give me the fastest imaging rig working at ~3.4"/px with a FOV of 3.36° x 2.25°". RASA8 is not the "fastest" solution, and maybe not optically the best, but paired with ASI2600 it is the cheapest. The other two solutions that I've found:

Solution No. 1: Quad setup consisting of: 4 x https://www.teleskop-express.de/shop/product_info.php/info/p12268_TS-Optics-94EDPH---7-Element-Flatfield-Apo-with-94-mm-Aperture-f-4-4.html + 4 x ASI2600. It might even be somewhat faster than RASA8. If we account for two mirrors at 97% reflectivity and 46% CO - the clear aperture of RASA8 is equivalent to 174.5mm of clear aperture (or 4 x 87.3mm). A 7 element APO will have 14 glass to air interfaces, and if we give each 99.5% transmission (very decent coatings) that is about 93.2% transmission of light in total. That reduces the effective aperture from 94mm to 90.7mm of clear aperture. I'd call that an effective tie.

Solution No. 2: This one will be slightly faster (and will cost many times more). This time a dual setup consisting of: 2 x https://www.teleskop-express.de/shop/product_info.php/info/p11282_APM-LZOS-APO-Refractor-130-780-mm-with-3-7--APM-Focuser.html + 2 x Riccardi x0.75 reducer + 2 x ASI6200MC. The ASI6200MC has a x1.5 larger diagonal than ASI2600, so it needs to be at 600mm of FL to cover the same FOV: 780mm x 0.75 = 585mm. Pixels are the same size, so we can bin x3 instead of x2 and get the same pixel scale.

But this time we have 2 x 130mm, which is roughly equivalent to 2 x 126mm of clear aperture (this time 6 elements, or 12 glass/air interfaces), or 178.4mm combined. So we have 174.5mm vs 181.4mm vs 178.4mm of respective aperture, all working at ~3.4"/px - and people think RASA is fast (think of a quad setup of those LZOS apos instead of a dual one - now that is speed!). Fun, right?
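The equivalent-aperture bookkeeping above can be sketched as follows (`equivalent_aperture` is my own helper with this post's assumptions baked in; small rounding differences from the quoted figures are expected):

```python
import math

def equivalent_aperture(d_mm, transmission, co_fraction=0.0):
    # Diameter of an unobstructed, lossless aperture passing the same light:
    # scale the collecting area by (1 - CO^2) and by the total transmission.
    return d_mm * math.sqrt((1 - co_fraction**2) * transmission)

# RASA8: 203 mm, 46% central obstruction, two 97% reflective surfaces
rasa = equivalent_aperture(203, 0.97**2, co_fraction=0.46)
# 94 mm 7-element APO: 14 air/glass surfaces at 99.5% transmission each
apo = equivalent_aperture(94, 0.995**14)
# a quad gathers 4x the light of one scope -> 2x the equivalent diameter
quad = 2 * apo

print(round(rasa, 1), round(apo, 1), round(quad, 1))
```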
  4. Ok, so to simplify things - when doing DSO imaging you should be at around FWHM / 1.6 for your sampling - meaning if you measure FWHM of 3.2" then you should be at 2"/px, as 3.2 / 1.6 = 2. This is only an approximation, but a very good one. It is not the same as planetary imaging, where you adjust your F/ratio based on the frequency that you work with (say about 520nm for color images, 656nm for Ha, or for example 850nm for IR - depends on what filter you are using) and the pixel size of your camera, and there is an exact "solution" based on Nyquist without any approximations.
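The rule of thumb above is a one-liner (a sketch; `target_sampling` is my own name):

```python
def target_sampling(fwhm_arcsec):
    # DSO sampling rule of thumb from above: pixel scale = measured FWHM / 1.6
    return fwhm_arcsec / 1.6

result = target_sampling(3.2)   # measured 3.2" FWHM -> 2 "/px
print(result)
```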
  5. It has to do with the "diagonal" in polar coordinates. Look at these two triangles: In the first we have a small angle (left vertex) and the hypotenuse is almost as long as the bottom edge. In the right triangle we have a large angle, and this time the hypotenuse is much longer than the bottom edge. RA always revolves at the same rate - but it traces a larger or smaller arc on the celestial sphere. Try to picture the small FOV of a camera and the circle that is traced at DEC 0° - you'll probably end up with masses of FOVs stacked next to each other like this: (I drew an arrow because I got tired of drawing little rectangles). Now look what happens at the north celestial pole: the "whole" circle can fit in a single FOV. Both circles are traced out by the RA axis of the mount in 24h - but the "distance" traveled by each, in terms of number of FOVs, is different. At DEC 90° only one FOV is traced out in 24h, but at DEC 0° many FOVs are traced out. This means that the "speed" in pixels is different, as each FOV contains the same number of pixels. The error of the mount is the difference of two speeds - sidereal and the speed the mount actually tracks at. If speed in pixels depends on DEC, so does the difference of speeds. Although the mount error in arc seconds remains the same, it is much less expressed in pixels. For this reason the RA error graph "calms down" when you are high in DEC - but your stars also get tighter in RA for the same reason, as stars are projected onto pixels. That is also the reason why we should calibrate our guider at DEC 0° - that way we have the highest precision of calibration, as we have the best ratio of error to pixel.
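The DEC effect above boils down to a cos(DEC) factor on the RA rate. A minimal sketch (my own function name, using the usual 15.04 "/s sidereal rate):

```python
import math

SIDEREAL_RATE = 15.04  # arcsec of RA motion per second of time, at the equator

def ra_speed_px(dec_deg, pixel_scale_arcsec):
    # On-sky RA motion in pixels per second at a given declination.
    # The traced circle shrinks by cos(DEC), so the same RA error in
    # arcseconds covers fewer pixels the closer you are to the pole.
    return SIDEREAL_RATE * math.cos(math.radians(dec_deg)) / pixel_scale_arcsec

print(round(ra_speed_px(0, 2.0), 2))    # full speed at DEC 0
print(round(ra_speed_px(60, 2.0), 2))   # half speed at DEC 60
print(round(ra_speed_px(90, 2.0), 2))   # ~zero at the pole
```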
  6. The Nyquist sampling theorem is very precise in what it says - it does not say that anything between two values is optimal. It says that one should sample a band limited signal at twice its highest frequency component. In order to apply the Nyquist sampling theorem we need certain conditions - namely a band limited signal. In the general case, where telescopes are concerned, this condition is strictly met only for planetary imaging. The aperture of the telescope and its effect on the diffraction of light create an image in the focal plane that is band limited. Here is this represented visually (or rather in graphs): The left image is an Airy pattern cross section and the right image is the MTF. The left is the effect of the aperture on a single point of light and the right is the representation of that filter in the frequency domain. In the right image we can see that the graph hits zero at one point (the rightmost point) - that is our cut off frequency, and it shows that we have a band limited signal - the image produced by the telescope aperture does not contain frequencies higher than that, and hence the Nyquist sampling theorem can be fully applied. If we are talking about deep sky imaging, we no longer have the above condition and we have to make a number of assumptions (which are very reasonable assumptions). I'll first show you the equivalent graphical representation and then talk about the approximations and numbers involved. Here I don't even need to create a composite image - I found one online, because it is a well known relationship. In long exposure imaging the point spread function of the telescope looks like a Gaussian curve - and that is our first approximation. If we consider the central limit theorem, it is a very sound approximation, and in fact you can fit a Gaussian curve to a star profile and it fits nicely most of the time (that is how we get FWHM, eccentricity and so on when we measure what our stars look like; it is also used to find the peak for alignment of images and so on ...).
By the way - the central limit theorem says that a sum of many random variables tends to a normal / Gaussian distribution - and that is what a long exposure produces: an integration / sum of all seeing disturbances and mount tracking errors. Back to Nyquist - a Gaussian star profile means that its Fourier transform is also Gaussian in shape, and that poses a problem for Nyquist: we don't have a cut off frequency. A Gaussian goes on forever and never reaches zero. It gets smaller and smaller but never reaches zero - not even at "infinity" (it only tends to zero as x tends to infinity). The first approximation was the Gaussian shape of the PSF in our images. The second approximation is to say that the Gaussian curve falls low enough after a while that it does not make any impact on the final image. We can say that after some frequency, all higher frequencies are simply too small to make a significant contribution to the image. This is where we approximate things for the second time, and we have a choice for this limit. I've chosen this number rather arbitrarily - but not quite so. I've chosen it so that you can still sharpen your data - but only "sensible" data - that with SNR > 5. If you choose the limit that way and do the math, connecting it to the FWHM of our Gaussian profile, you get the simple relationship that the sampling rate needs to be the FWHM of the star profile / 1.6. It keeps all frequencies that are still at least 10% of the original amplitude (or was it 1% - I can't remember now, but I can crunch the numbers again to see which it was at the time I derived this). I then did simulations to confirm that my estimations and approximations make sense - and they do. That is how I derived the factor of 1.6 with respect to FWHM. If you wish, we can do the math step by step so you can see the actual derivation.
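For the curious, the derivation sketched above can be reproduced numerically. This is a hedged reconstruction assuming the 10% cutoff; the original derivation may have used a slightly different limit:

```python
import math

fwhm = 1.0                                        # work in units of FWHM
sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))   # FWHM = 2.355 * sigma

# The Fourier transform of exp(-x^2 / (2 sigma^2)) is proportional to
# exp(-2 pi^2 sigma^2 f^2); solve exp(-2 pi^2 sigma^2 fc^2) = 0.1 for the
# effective cutoff frequency fc:
fc = math.sqrt(math.log(10) / 2) / (math.pi * sigma)

sampling = 1 / (2 * fc)       # Nyquist: sample at twice the cutoff frequency
factor = fwhm / sampling
print(round(factor, 2))       # comes out near 1.6 - the "FWHM / 1.6" rule
```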
  7. Yes, I understand - it is quite normal to follow a tutorial for some time at the start; later you'll develop a workflow that you prefer. I personally base my processing on data and math rather than a prescribed set of actions, and I don't want to confuse you at this stage with that sort of thing. I can give you some useful advice though. Try to perform proper calibration of your data - it yields the best results. Take actual flat frames rather than relying on synthetic flats (that won't help in this case since you already captured the data - but for future reference). Take darks as well. The general workflow that I would recommend as a beginner workflow would be:

1. Wipe the background. Unfortunately I can't recommend any particular tool for this purpose - I use a plugin for ImageJ that I wrote myself. I think the background should be wiped at the linear stage.
2. Do any color corrections you need next.
3. Color compose and do a simple three step levels stretch. The first step is to move the right slider left until you are about to saturate the brightest parts of the image. In this case you would look at the core of M31 and stop while it is still point like, without blowing it. Apply levels. The next round is to move the middle slider to the left until you show the faint bits properly. This will expose the background quite a bit - but don't worry about that for now. Apply again. The last step is to move the left slider to the foot of the histogram without clipping it. This will return the background to normal.
4. Do selective noise suppression.

Here is a post I made some time ago that demonstrates some of the steps in processing:
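The three-pass levels stretch in step 3 can be sketched with a generic levels operation (this is not any particular program's tool, and the slider values below are placeholders):

```python
import numpy as np

def levels(img, black, white, gamma=1.0):
    # One levels pass: clip to [black, white], rescale to [0, 1],
    # then apply the middle-slider gamma.
    out = np.clip((img - black) / (white - black), 0.0, 1.0)
    return out ** (1.0 / gamma)

# hypothetical linear stack normalised to [0, 1]
img = np.random.default_rng(0).random((100, 100)) * 0.2

step1 = levels(img, 0.0, 0.25)         # right slider down to the bright data
step2 = levels(step1, 0.0, 1.0, 2.2)   # middle slider left: lift faint bits
step3 = levels(step2, 0.05, 1.0)       # left slider up to the histogram foot
```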
  8. If your core is not blown in the stack, then there is no need to do layers / masking. A simple stretch can preserve the core and reveal faint structure. Yes, the background is now showing a nice histogram in each channel - only one tiny issue - it is too red. The red channel has its histogram peak at 20 while blue and green peak at 17. Did you use some sort of synthetic flat fielding to correct for vignetting? It looks like the red channel has some sort of "dip" around the galaxy.
  9. Color filters won't work on DSOs, so don't waste your money on those. The only filter that stands a chance of making a significant difference is a UHC type filter (OIII as well - but that is more expensive and a more "specialist" filter, usually requiring larger aperture). However, it is only useful on emission nebula type targets - like Ha regions and planetary nebulae. Galaxies and clusters won't benefit from it.
  10. Slightly blown core - maybe it was overexposed in the 2 minute image? If not, try to tame it a bit. The background is too dark and clipping. Try to avoid clipping the histogram to the left when processing the image. Otherwise a very good start.
  11. What mount are you using? The sampling rate is 1.1"/px - both from 714mm of FL and when "platesolved" from the image. That is not a very high sampling rate, however your stars in the image are huge for some reason. Maybe there was an issue with focusing, or perhaps seeing was particularly bad on the night. In any case, I'd recommend that you bin your images x2 in software (2.2"/px is a much better resolution for 100mm of aperture) - and this one maybe even x3.
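Software binning as recommended above is a simple block average. A sketch (`software_bin` is my own name):

```python
import numpy as np

def software_bin(img, factor=2):
    # Average factor x factor blocks: x2 binning averages 4 pixels, which
    # roughly halves the random noise and doubles the pixel scale ("/px).
    h, w = img.shape
    h, w = h - h % factor, w - w % factor   # crop to a multiple of factor
    return (img[:h, :w]
            .reshape(h // factor, factor, w // factor, factor)
            .mean(axis=(1, 3)))

img = np.arange(16, dtype=float).reshape(4, 4)
binned = software_bin(img)
print(binned)   # 2x2 array of block averages
```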
  12. Calibrating with proper darks will make a difference. Also, from the file title it looks like you had your gain set at 0? Set it at least to unity gain when doing NB images - that will significantly reduce read noise (from 3.4e down to 1.7e - halving it). You also seem to be using a long focal length scope? You are oversampling by quite a bit - binning your data will again improve SNR. And yes, a single sub will be noisy; stacking them improves SNR again.
  13. I think it would be far easier (and possibly cheaper) to get an F/4 newtonian of the same aperture than to try to use a barlow on the RASA. You can pair an F/4 or F/5 newtonian with a reducing coma corrector to get a ~F/3 scope (like the PowerNewtonians and Boren-Simon astrographs that are at F/2.8 or F/3.2).
  14. In any case, maybe you should calibrate your data - there are a lot of hot pixels, so at least take darks. This is what I was able to pull out (I'm not very versed at processing wide field shots): All those red dots are hot pixels. The blue halos around stars are chromatic aberration of the lens used. There was a very strong gradient in the image from light pollution that I needed to remove, so that was done in ImageJ along with an x3 bin to improve SNR (but the image is now smaller - only ~1500x1000 px because of that). After that I composed the channels back in Gimp and did a very mild curves adjustment and a bit of denoising. Maybe I could get a better image if you post linear 32bit data, but don't expect a great improvement. I think this is good as is.
  15. Did you already stretch this image or is the data still linear? It looks as if it has been stretched.
  16. I'm also sure that is the case, however they can present an uneven "profile thickness" to incoming light if they are rotated a bit:
  17. You did not attach the actual image but rather pp3 file (whatever that is). Luckily google knows what pp3 is
  18. Well, if you trade resolution for SNR and really push the data, you can get it to stretch further - but I don't think I'd prefer to see data stretched beyond what it is really capable of showing:
  19. I loaded the linear fits file that you provided into ImageJ and ran my own plugin that works similarly to Gradient XTerminator and similar background extraction tools. I binned the image x2 to gain some SNR (it's a bit oversampled so no detail was lost). Then I loaded the image into Gimp and did a very basic stretch. I did not color manage it at all, so the colors are questionable. In the end I did very basic selective noise suppression.
  20. I think it is stretched well. Maybe the handling of the background could be slightly better. There is a gradient in the background of the image - it can be removed. That produces a bit more contrast on the galaxy - like this:
  21. This can happen for two reasons: 1. one spider vane being longer than the other (here different vanes means different directions, so the "vertical" one would be considered both sides of the secondary, for example). 2. one spider vane being thicker than the other - it does not need to be mechanically thicker; it can be optically thicker, i.e. slanted with respect to the incoming light.
  22. These are just debayering modes, I presume. Binning 2x2 is actually super pixel mode - it creates one RGB pixel out of each RGGB quad by using R and B as is and averaging the two G pixels. This reduces the resolution (pixel count) of the image by x2 in each direction. Bilinear is just a simple average of pixels of the corresponding color to produce the missing values. It takes R from one RGGB quad and R from the adjacent RGGB quad and averages those two to fill in between the two R values. The other components are handled similarly. The gradients approaches (whether 2, 4 or VNG - variable number of gradients) analyze the image for gradients, obviously, and interpolate missing values based on that. It is a slightly higher order method of interpolation than simple bilinear. In any case, options 2, 3 and 4 give "full" resolution in terms of pixels by interpolating (a fancy word for educated guessing) the missing colors, while the first one is closer to the actual sampling rate of a color sensor. Looking at the images, I'd say that 3.76µm pixel size is too small for the RASA8 and that you'd do better by binning your image x2 at the end if you use any of the 2, 3 or 4 debayering techniques. I think the spot diagram for RASA8 goes in line with this, as it is quoted as being 4.6µm RMS or less over the imaging circle (equivalent to 10.8µm FWHM, with sampling being best at 6.77µm pixel size).
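Super pixel debayering as described above can be sketched in a few lines (an illustration of the idea, not any capture program's actual code; `superpixel_debayer` is my own name):

```python
import numpy as np

def superpixel_debayer(raw):
    # RGGB mosaic -> one RGB pixel per quad: R and B pass through,
    # the two G samples are averaged; output is half size in each direction.
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

raw = np.array([[10, 20, 10, 20],
                [30, 40, 30, 40],
                [10, 20, 10, 20],
                [30, 40, 30, 40]], dtype=float)
rgb = superpixel_debayer(raw)
print(rgb.shape)   # half the mosaic size in each direction, 3 channels
```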
  23. There won't be as much difference in views between the 130pds and 150pds as you might think. The 130pds will be almost as good for planets as the 150pds. A couple of weeks ago I observed Jupiter and Saturn with 100mm scopes (an F/10 refractor and a 102mm F/13 Maksutov) and I really liked what I saw. There was plenty of detail to be seen. The 130pds has 30% more resolving power than these two scopes, and under the right conditions it will show you plenty on the planets. On the other hand, the EQ 35 will carry the 150pds. The main problem will be the wind. If you have any way to shield it from wind, then it just might work, especially if you are guiding. So to answer - I don't think it will be such a frustration, but neither is the difference between the two so great that you must go for the 150pds for visual (not really sure I've helped you with this).
  24. Just a quick question, if you don't mind. The ASI2600 is a 6284x4176 pixel camera, and yet this image is a two panel composite that is only ~1300x1000 px in size. Any particular reason you've chosen to present it reduced in size, and how, and at what stage, did you perform the reduction?