Everything posted by vlaiv

  1. At any given time there are about 10,000 aircraft in the sky above us. Satellites are smaller and further away - their density per given area of sky is much smaller, as far as current launches go. We notice StarLink satellites because they fly in groups for the moment (not sure if they will fly like that indefinitely - I guess not, once each reaches its designated orbital position). That has a much greater psychological impact than the airplanes we are used to. By the way, I have had subs ruined by a passing airplane much more regularly than by satellites - they have flashing lights and leave a distinct signature in subs.
  2. @badhex That test I mentioned is really just "eyeballing it". There is a much better test, done with the actual eyepiece in place. You just need a torch/flashlight that gives a concentrated/narrow beam of light (it does not have to be very narrow). You put the eyepiece in the scope and focus at infinity. Then point the scope at a white wall and shine the torch at the eyepiece (along the optical axis of the eyepiece) from some distance away - a meter or so. Just make sure everything is perpendicular / in line (scope perpendicular to the wall and beam in line with the eyepiece). Look at the wall and measure the resulting circle diameter - as per the diagram. Also be careful - if you place the torch too close to the eyepiece you'll get a larger image on the wall, as you start introducing light at angles. The point is to hold the torch far enough away that any angles are rather small (and they get further reduced by the magnification of the scope/eyepiece combo). Of course, you'll probably need to dim / turn off the lights in the room to be able to see the circle on the wall - it will be very faint.
  3. Nope, never seen that one. The closest type of cartoon that I watched would be this: Same production - only aired just after Deputy Dawg. Interestingly enough, the wiki says the following: "Originally, according to two articles from 1959, the character of Deputy Dog was supposed to be part of a series called Possible Possum, starring the titular character (who later became Musky Muskrat, according to Ralph Bakshi), and was meant for the Captain Kangaroo Show, and as a replacement for Tom Terrific. Larz Bourne came up with the concept of the show and drew the first boards. It wasn't until mid-way in production that Deputy Dog became the star." However - I watched those cartoons dubbed into our language, so the names are changed (but they still rhyme).
  4. Indeed - I have the old version and I remember it having problems with higher bit counts. The website says it can handle 16/32 bit FITS though: But it also says that it can only handle monochrome FITS: So I guess some fiddling - stitching each channel separately, sort of thing - would work on higher bit depth images. An alternative that I did not mention is ImageJ and its various stitching plugins: https://imagej.net/Image_Stitching but that is sort of advanced use (people don't seem to like ImageJ for some reason).
  5. ICE has been shut down and you can no longer download it from the Microsoft website - you get "We're sorry, this download is no longer available." when you try to download it. The Microsoft ICE website is still up and running though: https://www.microsoft.com/en-us/research/product/computational-photography-applications/image-composite-editor/ but when you click on download - you get that message. It is still available through the Wayback Machine: https://web.archive.org/web/20190223050207/https://download.microsoft.com/download/7/3/9/73918E0B-C146-40FA-B18C-EADF03FEC4BA/ICE-2.0.3-for-64-bit-Windows.msi As an alternative, you can try this: http://jaggedplanet.com/imerge.asp
  6. That's been an interesting two minutes. I read the title of the thread and thought to myself - I have no idea what "Musked" means - so I went to Google and got an answer: "To perfume with musk." and a further explanation: "A greasy secretion with a powerful odour, produced in a glandular sac of the male musk deer and used in the manufacture of perfumes." So you can imagine what went through my head, trying to picture what could have possibly happened that deserves such a title. In the end, I'm sure the feeling is the same - being "musked" by musk or by Musk.
  7. A simple way to assess vignetting would be to attach an extension tube to the back of the SCT/MCT (this is just to give you a reference point when placing your eye - to simulate a field stop of some diameter) - and then place your eye at the edge of this extension tube and look to see how much of the secondary mirror you actually see. For example, like in this image: In the image above, it is clear that we can no longer see the whole primary mirror reflection - part of it is obscured by the primary baffle tube. Humans have a hard time distinguishing less than about a 10% (7% actually) drop in light intensity. A rough measurement of how much vignetting there is would be to compare where the intersection happens - at half of the primary (baffle tube clips the image at the secondary shadow) - 50% vignetting; 1/4 of the way to the edge - ~25% vignetting. If you can't see the whole image of the primary when you put the focal reducer in place, that means the scope is effectively stopped down. (For relating a light drop to what the eye can notice, see the small sketch below.)
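To relate a light drop to the "can the eye even notice it" threshold mentioned above, a fractional light loss can be converted to a magnitude drop. A tiny illustrative Python helper (not part of the test itself, just the standard magnitude formula):

```python
import math

def light_loss_to_magnitudes(loss_fraction: float) -> float:
    """Convert fractional light loss (e.g. 0.10 for a 10% drop) into magnitudes."""
    return -2.5 * math.log10(1.0 - loss_fraction)

for loss in (0.07, 0.10, 0.25, 0.50):
    print(f"{loss:.0%} light loss -> {light_loss_to_magnitudes(loss):.2f} mag drop")
# a 10% loss is only ~0.11 mag - right at the limit of what the eye can notice
```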
  8. Here is an interesting page that I found - a comparison of 2" eyepieces and focal reducers on the C5, showing how much vignetting there really is: http://www.waloszek.de/astro_ce_c5_2z_e.php
  9. Before investing in a reducer, I would check what the size of the illuminated field on the C5 is. SCTs and MCTs have a small fully illuminated field (or even a partially illuminated field). Say your C5 has a usable field of 25mm. That sort of field can easily fit in a 1.25" eyepiece - take a 32mm Plossl, it has a 27mm field stop. It has a larger field stop than the illuminated field that the scope can provide (in this example). A focal reducer simply won't help there. The only thing you are going to get is the same FOV with a shorter FL eyepiece. Given that the scope is F/10, there is no need for a short FL eyepiece - a 32mm Plossl will give you a 3.2mm exit pupil (see the quick calculation below). Here is a bit more in-depth discussion on CN: https://www.cloudynights.com/topic/360096-celestron-c5-and-63-focal-reducer/ Some measurements even suggest that using a focal reducer on scopes such as the C5 and C6 leads to stopping down of the aperture (the C6 measured 133mm instead of 150mm) due to the primary mirror baffle tube.
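Just to show where the 3.2mm exit pupil and field figures come from, here is a minimal Python sketch. I'm assuming the usual nominal C5 numbers (127mm aperture, 1250mm focal length) and the 32mm Plossl with a 27mm field stop from above:

```python
import math

# Assumed C5 parameters (nominal values)
aperture_mm = 127.0
focal_length_mm = 1250.0
focal_ratio = focal_length_mm / aperture_mm           # ~F/9.8, nominally F/10

# 32mm Plossl from the example above
eyepiece_fl_mm = 32.0
field_stop_mm = 27.0

exit_pupil_mm = eyepiece_fl_mm / focal_ratio               # exit pupil = EP focal length / focal ratio
tfov_deg = math.degrees(field_stop_mm / focal_length_mm)   # true field from the field stop

print(f"Exit pupil: {exit_pupil_mm:.1f} mm")          # ~3.2 mm
print(f"True field of view: {tfov_deg:.2f} deg")      # ~1.24 deg
```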
  10. I already did that, but here, to be more precise, I'll give you a bit more detailed analysis. The first session frame shows just a bit of this effect. The top left corner shows astigmatism: with the tangential component (the one pointing toward the optical axis) dominating over the sagittal component (perpendicular to the first one - i.e. along a circle centered on the optical axis). The top right corner is similar: with almost no sagittal component present. The bottom left corner is again similar: except this time the sagittal component is dominant. The bottom right corner is almost spot on (it also suffers some astigmatism - but I think that is probably a feature of the reducer/flattener at the edge of this sensor). Three corners astigmatic - two of them with tangential dominating, one with sagittal - and the fourth corner almost perfect. That sort of implies tilt, as tilt means that spacing will not be the same in each corner, and "missed" spacing usually leads to astigmatism. The second night's image shows a similar pattern, but this time there is no winning corner - and the corners change astigmatism orientation (here is the image: top left, top right, bottom right, bottom left): sagittal, sagittal with a strange center - a bit to the side, tangential, tangential with a slightly strange center. Now - it could be that there is no flop in the system, only tilt, and the difference between corners is due to the two different nights: if you disassembled your scope and put it back together the next day and something changed in the meantime - like how you screwed things together - for example if some thread has a double lead and you started threading with the accessory rotated 180° or something. If you left everything together and just put it on the scope the next day, then there might be flop. It is best if you examine the first and last sub of the evening to see if there is flop in the system. If the first and last frame have the same/similar astigmatism in the corners, then there is no flop - but if it changes between the first and last frame (do take into account the meridian flip, as it will rotate the image by 180° - you need to look at the same camera corners, not the same image corners), then it is likely that something is loose. By the way, here is a handy tip for orienting the camera the same way each session - orient it so that RA/DEC aligns with the X and Y axes of the sensor (either "portrait" or "landscape", whichever works better for the target). In order to do so, start an exposure on a bright star and slew the mount in RA at low speed (x1-x2 sidereal). The star should leave a trail in the image - rotate the sensor until that trail is either horizontal or vertical.
  11. As far as color goes - don't really count on other images to provide you with a clue of what the accurate color is. You can do a little experiment. Take a celestial object - like a galaxy - and do a Google image search. You'll find many different color renditions of the same target. If you do a similar experiment with a known object - like a can of beer that you are familiar with and have seen in person - 99% of the images will show the same color. Those images were taken by random people, the vast majority of whom don't do photography seriously. Yet they manage to agree on color. Astrophotographers, for some reason, can't.
  12. Hi and welcome to SGL. The answer to the first question is probably - tilt. When you have a difference between corners like that, it means that the sensor is not perfectly perpendicular to the optical axis. If star shapes change between the first and last frame of the session, it means that you have flop that is causing the tilt. It is best if you have threaded connections between components. A clamping connection can sometimes be less than rigid, and as the scope tracks across the sky, gravity acts on the camera and pulls it in different directions - that causes it to slightly tilt in different directions. - The solution to flop is a solid threaded connection and removing flop/play from the focuser if there is any. - The solution to tilt (if removing flop does not help) is a tilt plate. The answer to the second question is - yes, stars are yellow - the majority of them are. In fact, the issue of color in astronomical images is not a settled question and there has been a lot of debate about it. I personally think that there is a way to properly capture and render star color - others think that it is impossible (for some reason) to do so. In fact, these should be star colors when viewed on a computer screen: There are many more orange/red stars than blue stars (yellow/orange make up 97% of all stars) - but they are less bright, so when we image other galaxies we can easily spot the blue color of hot young stars, as they are much brighter. In any case, in order to properly render star color, one must take care to calibrate and handle the color information properly.
  13. In that case - go for it. I have the ASI178mcc (color cooled version) and it is quite a nice camera to work with. Here is a lunar mosaic that I did with that camera (102mm Sky Watcher Maksutov): http://serve.trimacka.net/astro/Forum/2020-07-30/moon.png (right click / open in new tab)
  14. Hi and welcome to SGL. If you properly use both cameras, you won't see a great improvement. The ASI178mm is an excellent camera - but the ASI120 (and clones) are fairly potent cameras as well. I would rather concentrate on getting the most out of your current camera - and only if you are not satisfied with how it works for some reason, then change it. Sharpness of the image has very little to do with the camera used. It has more to do with the way you capture and process your images - and that is not likely to change when you change camera. One thing to be careful about is the diffraction limited field of a Newtonian scope. Your scope is an F/5 Newtonian. It is quite a fast scope. It has a very small diffraction limited field - around 2-3mm in diameter. If you use a x2 barlow (and at 685nm the optimum sampling rate for a 3.75µm pixel size is F/11 - so it's better to use a x2 barlow), that is only around 5mm in diameter. If you get a camera with a diagonal larger than that, coma will start to affect the corners. Btw - at 2.4µm and 685nm, the optimum sampling rate requires an even faster scope - F/7 - so it's almost not worth using a barlow at all (see the rule-of-thumb sketch below). Maybe the best course of action would be to get the 150PL Newtonian - the F/8 version. That will be a much better scope for lunar imaging.
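The "optimum sampling" F-ratios quoted above come from the usual rule of thumb that critical sampling needs a focal ratio of roughly twice the pixel size divided by the wavelength. A quick sketch with the numbers from this post:

```python
def optimal_f_ratio(pixel_size_um: float, wavelength_nm: float) -> float:
    """Rule of thumb: F-ratio ~= 2 * pixel size / wavelength (same units)."""
    return 2.0 * pixel_size_um / (wavelength_nm / 1000.0)

for pixel in (3.75, 2.4):
    print(f"{pixel} um pixels at 685 nm -> about F/{optimal_f_ratio(pixel, 685.0):.1f}")
# 3.75 um -> ~F/11 (hence the x2 barlow on an F/5 scope)
# 2.4  um -> ~F/7  (barlow barely needed at all)
```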
  15. In fact, I didn't miss that - I saw that I had downloaded the .fits, but then forgot to do it. Again, sorry about that. Here is the animated gif (I did not put in the number of hours - but it is visible when the right side changes):
  16. Hi, sorry, I totally missed the part where you asked me to do it. Will do it and post results.
  17. vlaiv

    M13 RGB

    Well, in principle - yes. It won't be as good as it otherwise would be - when you measure it - but anyone will be hard pressed to see the difference visually.
  18. vlaiv

    M13 RGB

    You can email the developer - but don't worry about that too much - most of the differences I'm talking about are academic rather than obvious. Here, I'll show you by example. I created two images of pure noise, with a noise value of 1 unit (whatever unit we choose). Now I'm going to shift each by half a pixel, stack and bin - and measure the result. Then I'm going to bin each, shift them by a quarter of a pixel and stack - and measure the result, so we can compare them. First is stack then bin, second is bin then stack. Both use the same bilinear shift to "align" the frames. The second one yields a slightly better result in terms of noise reduction - but that might be just a difference caused by the bilinear interpolation, as that in itself reduces noise and smooths the image. (A script reproducing this kind of comparison is sketched below.)
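If anyone wants to try this kind of comparison themselves, here is a minimal numpy/scipy sketch of the same idea (pure noise frames, bilinear shifts, 2x2 average binning). It is only an illustration of the procedure described above, not the exact script I used, and the numbers will vary from run to run:

```python
import numpy as np
from scipy.ndimage import shift  # order=1 -> bilinear interpolation

rng = np.random.default_rng(0)
size = 512

def bin2x2(img):
    """Software bin: average 2x2 pixel groups."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# two frames of pure noise, sigma = 1 unit
a = rng.normal(0.0, 1.0, (size, size))
b = rng.normal(0.0, 1.0, (size, size))

# case 1: align with a half-pixel bilinear shift, stack, then bin
stack_then_bin = bin2x2((a + shift(b, (0.5, 0.5), order=1, mode='nearest')) / 2)

# case 2: bin each frame first, align with a quarter-pixel shift, then stack
bin_then_stack = (bin2x2(a) + shift(bin2x2(b), (0.25, 0.25), order=1, mode='nearest')) / 2

# crop edges to avoid the shift's boundary effects, then compare residual noise
print("stack then bin:", stack_then_bin[8:-8, 8:-8].std())
print("bin then stack:", bin_then_stack[8:-8, 8:-8].std())
```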
  19. vlaiv

    M13 RGB

    You don't need to fully understand the process in order to be able to utilize it. Don't bother with resampling at all. It is useful in some cases - like when you want a certain number of dots per inch for printing purposes or whatever. Bin your data to recover SNR if you are over sampled. - You can bin in hardware with CCD cameras or in firmware with CMOS cameras, and you can bin in software. In most cases the difference is very small, so the best guideline is to bin in software for CMOS and bin in hardware for CCD to get faster download and smaller files - and you can additionally bin CCD files in software if there is need for it. - Bin individual subs after calibration, or bin the resulting image after stacking. Bin individual subs if you use an advanced resampling algorithm for alignment during stacking (like Lanczos or Catmull-Rom spline or whatever) - bin the stack result if you use bilinear interpolation (like DSS does). - Don't try to enlarge the binned image by up sampling (unless you need it for printing or whatever) - there is little point in doing so. - The best method to determine a good sampling rate is to measure FWHM and then divide it by 1.6. Bin so that you get close to this value (see the small helper sketched after this post). I personally think that it is better to under sample a bit rather than over sample a bit if you can't be spot on - but that is a personal view.
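For the last point, here is a small helper that suggests a bin factor from a measured FWHM - a rough sketch, assuming you have the FWHM in arcseconds and your current sampling rate in arcseconds per pixel (the example numbers are made up):

```python
def suggested_bin_factor(fwhm_arcsec: float, sampling_arcsec_px: float) -> int:
    """Bin so the effective sampling rate ends up close to FWHM / 1.6."""
    target_arcsec_px = fwhm_arcsec / 1.6
    return max(1, round(target_arcsec_px / sampling_arcsec_px))

# e.g. 3.2" FWHM stars sampled at 0.66 "/px -> target 2.0 "/px -> bin x3
print(suggested_bin_factor(3.2, 0.66))
```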
  20. vlaiv

    M13 RGB

    It is not that the stars are blocky - it is viewing them at 400% that makes them look blocky - because viewing at 400% must resample the image for display, and the above software uses nearest neighbor to do that. When you up sampled the image yourself, you chose some other type of resampling method that makes the stars look smoother and not pixelated. It is only nearest neighbor that makes stars look pixelated. Look at this: I use IrfanView to view images - the above is just some image of stars. When I zoom to 550% in IrfanView to see the stars up close, I get this: The star is round, but now I'm going to switch off "good" resampling in IrfanView: Suddenly, that same star now looks pixelated: Pixelation is not a feature of the image - it is a feature of the zoom method used to view the image at a certain zoom. Look at this: That is one of your resampled stars that you think is smooth (and it is, at only 200% zoom) - but if I zoom in further (again using nearest neighbor interpolation for resampling), it also becomes pixelated. No, it means that binning works as expected if the noise statistics in the image follow a certain distribution. If you mess with the noise in the image in some way, binning won't work as expected. For binning to work, you need randomness and independent values. It is a bit like this - when you stack images, you get an SNR improvement by a factor of the square root of the number of stacked images. This means: stack 4 images and you get an SNR improvement of 2. Now let's suppose that you have only 2 images to stack. The expected SNR improvement will be sqrt(2) = 1.41..., but what if we trick the system? Say we take the first image and copy it 2 additional times. Now we have a total of 4 images, so that should give us an SNR improvement of 2 - after all, we are stacking 4 images now, right? No - it does not work that way - three of those images are what we call linearly dependent vectors - they are the same thing multiplied by a constant and offset by a constant - in this case the constants being 1 and 0 (multiplied by one and offset by zero). For stacking to work as you expect, noise must behave like linearly independent vectors - meaning that the noise in each image is completely random in relation to any other image in the stack (see the small demonstration below). The same thing happens with binning - the noise in the pixels must be totally random with respect to the noise in all other pixels (in that group). When you calibrate your image, you keep operations at the "pixel level" - you don't mix pixel values. You subtract the dark pixel for pixel, you divide by the flat pixel by pixel. You don't mix pixel values. This keeps the noise independent and binning works. Once you start aligning your subs, you start introducing cross-pixel correlation. The best way to stack images with respect to SNR is to use integer offsets and no rotations - this is how stacking was first developed - it was called the "shift and add" technique. As soon as we start using sub-pixel shifts to align subs for stacking, we are using some sort of interpolation and we introduce correlation between pixels. Better interpolation algorithms introduce less of this pixel to pixel correlation. Some time ago I made a post where I addressed this effect and how the choice of interpolation algorithm used to align images impacts noise grain. In it you can find a comparison of a few interpolation algorithms and how they act as a low pass filter. The less they act as a filter, the more they preserve the original noise, and the less effect there is on stacking.
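Here is a tiny numpy demonstration of the linearly dependent vs independent point - stacking four genuinely different noise frames versus stacking one frame copied four times (just a sketch; the numbers will vary slightly per run):

```python
import numpy as np

rng = np.random.default_rng(1)
frames = [rng.normal(0.0, 1.0, (256, 256)) for _ in range(4)]

independent = np.mean(frames, axis=0)          # 4 independent noise frames
duplicated = np.mean([frames[0]] * 4, axis=0)  # 1 frame copied 4 times - linearly dependent

print("single frame noise:    ", frames[0].std())    # ~1.0
print("4 independent, stacked:", independent.std())  # ~0.5, i.e. improved by sqrt(4)
print("4 copies, stacked:     ", duplicated.std())   # ~1.0, no improvement at all
```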
  21. vlaiv

    M13 RGB

    @geoflewis The procedure that you did - bilinear up sampling x2 and then binning x2 (thus down sampling back to the original resolution) - has the same effect as simple blurring. Let me explain - the first part, up sampling, adds no detail; the second part, binning, does not work as you would expect, as you don't have genuine data but data produced by interpolation. Only genuine data that obeys a certain noise distribution will benefit from binning. Otherwise it would be a free lunch - up sample, bin/down sample, rinse/repeat - free SNR improvement - but it does not work like that. The only thing you did is introduce pixel to pixel correlation. Here, let's look at a 1D scenario to understand what is happening. Let's just look at 2 pixels - A and B. When we up sample x2, we "insert" another sample between A and B. We will have A, X, B. Depending on the interpolation used, X is calculated differently. You used bilinear (or in 1D, simple linear) interpolation, and since X is midway between A and B, it is a simple average of the two (otherwise it would be c*A + (1-c)*B where c is the normalized position between 0 and 1 - 0 being all the way at A and 1 being all the way at B. If we put it at the midpoint we will have 0.5*A + (1 - 0.5)*B = 0.5*A + 0.5*B = (A+B)/2). That means we have A, (A+B)/2, B. Now we bin that data (with sum, for example) - we bin the first two up sampled pixels, so our new pixel will be A + (A+B)/2 = 1.5*A + 0.5*B, the next will be some expression of B and C, and so on... You simply mixed values of A and B in the first pixel, mixed values of B and C in the second pixel, and so on... This is pixel to pixel correlation and that: 1) reduces noise, as noise is random and pixel values now stop being fully random - the first pixel contains elements of both the first and the second pixel and is no longer truly random, as it depends on the "external" value of the second pixel to some degree; 2) introduces blur. Blur is loss of sharpness. If A is very high in value and B is very low in value, and you mix the two, you get a value in between - a "smoother" value - contrast has been reduced. (See the small numeric sketch below.) How so? Can you show this effect? Stars in the image can't become blocky. In the worst case scenario, a star can be reduced to a single sample - a single point in the image - what we think of as a single pixel. That point has no dimensions - it is a sample and thus can be neither round nor square. It is only when we enlarge the image by up sampling, to make it easier for us to see things, that we see that single sample as some geometric shape. The actual shape depends on the resampling method used. 1. Nearest neighbor will produce a square (this is a single sample enlarged x20 using nearest neighbor). 2. Bilinear will produce a diamond. 3. Bicubic and others will produce something diamond shaped with a bit of ringing (cubic convolution image). 4. Lanczos will produce ringing - similar to an Airy pattern (this is actually a quintic B-spline, but Lanczos would be similar). It is actually not strange for high profile resampling methods to produce ringing that resembles the Airy pattern - that is what happens when you remove high frequency components - be it in software or with an actual physical device like a telescope - both have limiting resolution.
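The correlation/blur effect is easy to check numerically. A rough sketch, assuming bilinear up sampling x2 followed by 2x2 average binning back to the original size, applied to pure noise:

```python
import numpy as np
from scipy.ndimage import zoom  # order=1 -> bilinear interpolation

rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, (256, 256))   # pure noise, sigma = 1

def bin2x2(a):
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

up = zoom(img, 2, order=1)   # up sample x2 - adds no new information
back = bin2x2(up)            # bin x2 - back to the original resolution

print("original noise std:       ", img.std())    # ~1.0
print("after up sample + bin std:", back.std())    # lower - but only because neighboring
# pixels are now correlated (i.e. the image is blurred), not because of a genuine SNR gain
```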
  22. vlaiv

    M13 RGB

    Well, my emphasis on binning is to hit the optimal sampling rate and recover the SNR that you would otherwise lose by over sampling. That means leave the imaging time as is - just don't over sample and you'll get better SNR - for free.
  23. vlaiv

    M13 RGB

    Again - that is correct - but very unlikely to happen. If you have stars that saturate single pixels, there might be a couple of hundred of them in an image - mostly in star cores. The majority of those will be much "stronger" than the FWC of a single pixel - and would saturate even a large pixel. There might be only 10% of those pixels - so maybe 10 pixels per whole image - that will be different between a large pixel and binned small pixels. In other words - with an image containing millions of pixels, binning produces a different result than having larger pixels for only a dozen or so of them - that is the definition of very unlikely - a dozen per millions.
  24. vlaiv

    M13 RGB

    If one is careful to match exposure length to the read noise of the camera versus other conditions, then read noise is really a non-issue. Read noise becomes important only if it is significantly large compared to other noise sources. If it is small compared to other noise sources, it makes very little difference. Here is an example. Say you have a camera that has 1.7e of read noise. You image in light pollution that gives you something like 0.5e of signal per second per pixel. You image for 100s per exposure. In a 100s exposure you'll get 50e of signal from light pollution. That produces ~7.0711e of noise just from light pollution. Noises add like the square root of the sum of squares. This means that "total" noise (here meaning just read noise and LP noise, as we simplified things to demonstrate) will be: sqrt(1.7^2 + 7.0711^2) = sqrt(2.89 + 50) = sqrt(52.89) = ~7.2726 That is an increase in noise of 2.85% - you won't be able to tell a difference in noise of up to 10% by eye, so read noise makes very little difference in a regular exposure. Now let's see what happens with a binned exposure: with a binned exposure we have 200e of LP signal instead of 50e (we added 4 pixels together so the LP signal is 4 times as strong) and the read noise is now 3.4e - let's do the same math. LP noise without read noise is sqrt(200) = ~14.1421e. Read noise is 3.4e. These two noises added: sqrt(3.4^2 + 14.1421^2) = sqrt(211.56) = ~14.5451 The increase is now (14.5451 - 14.1421) / 14.1421 = 0.0285 = 2.85% The same! Although read noise increases by a factor of 2, so does the LP noise (since the LP signal increases by a factor of 4) - the percentage increase remains the same - minimal (the same arithmetic is sketched in code below). If you determine exposure length based on read noise for a single pixel, that exposure length will be valid for the binned version as well, regardless of the fact that read noise increases. You just need to be careful not to think: Oh, I'm binning - I have more sensitive pixels - I don't need as much exposure time as before. That is wrong for CMOS sensors - keep the exposure length the same.
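The same arithmetic in a few lines of Python, so you can plug in your own camera's read noise and sky flux (1.7e read noise, 0.5e/s/px of light pollution and 100s exposures are just the example figures from above):

```python
import math

def noise_increase_percent(read_noise_e: float, sky_signal_e: float) -> float:
    """Percent increase of total noise over pure light-pollution noise."""
    sky_noise = math.sqrt(sky_signal_e)
    total_noise = math.sqrt(read_noise_e**2 + sky_signal_e)
    return 100.0 * (total_noise - sky_noise) / sky_noise

read_noise = 1.7      # e- per pixel
sky_rate = 0.5        # e-/s/px from light pollution
exposure = 100.0      # seconds

single = noise_increase_percent(read_noise, sky_rate * exposure)           # 1.7e RN, 50e sky
binned = noise_increase_percent(2 * read_noise, 4 * sky_rate * exposure)   # 3.4e RN, 200e sky (2x2 sum)

print(f"single pixel: +{single:.2f}% noise due to read noise")  # ~2.85%
print(f"2x2 binned:   +{binned:.2f}% noise due to read noise")  # ~2.85% - the same
```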
  25. vlaiv

    M13 RGB

    That is correct - and that is an edge case. Say you have pixels that have 25,000e FWC and a binned version that has "100,000e" combined FWC. There are (minority) cases where a large pixel with a true 100,000e FWC would not saturate while the binned version will. These cases are rare indeed. The worst case is that a binned pixel saturates at 25,000e - if one of the 4 pixels is at 25,000e and the other 3 are precisely 0. But you can see how rare that case is if you work with over sampled data - there is no way that one pixel would be that high while the surrounding pixels have a value of 0. That only happens with highly under sampled systems - where a whole star fits in just one pixel and even its "wings" don't make it to other pixels. However, you can easily see that if the signal in the original pixels is less than 25,000e, you get a much larger effective FWC. Say you have about 20,000e in each of the individual pixels - you can easily bin to get 80,000e in the binned pixel. That is the same value as in the larger pixel and a much larger value than an individual pixel can capture. This is a much more common scenario with over sampled data - where we advocate the use of binning. Here there is a much higher chance that if any one pixel is saturated, and thus the binned pixel is saturated, the same would also happen with a larger pixel. But again - saturation of pixels is an edge case and is handled differently - by using shorter exposures to replace saturated values.