Everything posted by vlaiv

  1. I don't know of any tutorial on how to do it, but I can describe sensible steps to perform the operation. In principle, I would be able to do all steps but one - for that one I'm not sure I could do it without first searching the internet for some ideas or even a small tutorial on that particular part. The problematic step is the star layer mask - you need to be able to isolate the stars in one image, and that is basically it. DSS, for example, has the option to create a star mask - that could be very helpful (I have not tried it, but I know it's there). Other astro software probably has something similar. Anyway, you take your LRGB/RGB image and create a nice looking star field. There are a couple of ways to do it; I prefer the RGB ratio method since you don't have to worry about color being lost when doing the stretch. Next you process your NB image as you normally would, and in the end you take the RGB image, apply the star mask and layer it on top of the NB image with 100% opacity where the stars are (background should be transparent) - a minimal sketch of that last compositing step is below. Just to point out something - your NB and RGB images need to be aligned / registered to the same reference frame (that is sort of obvious, but just in case ...).
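     To make the last step concrete, here is a minimal compositing sketch in Python. It assumes the NB and RGB images are already registered and stretched and saved as 16-bit files; the file names and the numpy/imageio choice are just for illustration - any tool that can do per-pixel blending will do the same job.
     ```python
     import numpy as np
     import imageio.v3 as iio

     # Load already-registered, already-stretched 16-bit images (hypothetical file names)
     nb   = iio.imread("nb_processed.tif").astype(np.float64) / 65535.0   # H x W x 3
     rgb  = iio.imread("rgb_starfield.tif").astype(np.float64) / 65535.0  # H x W x 3
     mask = iio.imread("star_mask.tif").astype(np.float64) / 65535.0      # H x W, ~1 on stars

     mask3 = mask[..., np.newaxis]               # broadcast the mask over the 3 channels
     result = mask3 * rgb + (1.0 - mask3) * nb   # RGB stars at 100% opacity, NB everywhere else

     iio.imwrite("nb_with_rgb_stars.tif", (result * 65535).astype(np.uint16))
     ```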
  2. Btw, here is a rendition from Hubble. I'm posting it just so you get an idea of what sort of color balance would be good: you have good color in the core - mostly yellow stars. The spiral arms are a bit less blue and more "whitish". The dust lanes should be red/brown.
  3. I'm not sure what you are asking. In the first post, the right-hand image looks very, very good. Maybe a tad too blue, but I'm OK with that as it shows the young stars in the spiral arms.
  4. I believe the shadows are as follows: in the second image the focuser shadow is not visible because the focuser has been racked out of the light path. Out of all of these, the only one I would be worried about is the secondary clip. Btw, welcome to SGL!
  5. I have a slightly different method of locating M31 and M33 than the one given above (maybe it will be useful to someone): my approach is to find Cassiopeia and then find the string of 4 stars underneath it (all bright and visible even in quite a bit of LP). Going from left to right, identify the 3rd star and then start climbing by first finding a single star and then a pair of stars above it (all marked in the image). M31 is just slightly "above" the two stars. M33 is about the same distance away as the pair of stars, but in the other direction ("down").
  6. I'll give you my view on this - it might be useful to you. At 400mm, in the case of the ED66, the ASI1600 is going to give you 1.96"/px. In my view this is almost the optimal sampling rate for a 66mm scope. You can expect stars of more than 3" FWHM with such a scope (for example 3.2" FWHM in 1.5" seeing with 1" RMS guide error). We can say that FWHM / 1.6 is the optimal (or close to optimal) sampling rate, so 3.2" FWHM means a 2"/px sampling rate. Therefore the ED66 and ASI1600 are a very good match for wide field shots. You can even add a flattener with a reduction factor and you won't be undersampling much, if at all (in just slightly poorer seeing you will be at the optimum again). With my 80mm at F/4.75 (~380mm FL, F/6 native with x0.79 reduction) I happily use the ASI1600 and I don't feel the images are undersampled at all - the results look very good. With the 130PDS at 590mm you will be at 1.33"/px. This will actually be closer to oversampling in most circumstances than to undersampling. You need ~2.13" FWHM or less in order not to oversample. In the same conditions as above - 1.5" seeing and 1" RMS guide error - at this aperture size you are likely to get around 2.9" FWHM stars. To use 1.33"/px to its full potential, you would need either 0.5" RMS guide error in 1.5" seeing, or perhaps 0.8" seeing with 0.8" guide RMS. With 1" RMS guide error it is virtually impossible to reach 2.13" FWHM at 130mm aperture, regardless of the seeing. With the 150PL and 1200mm focal length you will be at 0.65"/px. If you bin x2 you will be at 1.3"/px - just look at the above to see whether that is undersampling. I would argue that with worse guiding on such a long scope you will still be oversampling by quite a bit. With this scope I would rather go for x3 binning than x2. You might say - what would be the purpose, as you already have a setup that does 2"/px (the ED66) - but the two approaches will be rather different. The ED66 is going to do wide field stuff, while the 150PL is going to be really fast in comparison to the ED66 (when binned x3) but with quite a narrow field - so suitable for small targets. This is a good example of how a slower scope can be "faster" than a fast scope. If you look at these two setups the other way - aperture at resolution - you can see how much faster the 150PL will be. You have 150mm vs 66mm, both sampling at 2"/px. That is more than twice the aperture by diameter (more than x4 the light gathering area), so in terms of SNR the 150PL will be at least twice as fast. (The sampling rates above come straight from pixel size and focal length - a small sketch of the arithmetic is below.)
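     For anyone who wants to check these sampling rates, here is a small sketch of the arithmetic. The 3.8 um pixel size is the ASI1600's; the way seeing, guiding and diffraction are combined into a total FWHM (in quadrature) is only my rough approximation of the reasoning above, not an exact formula.
     ```python
     import math

     def sampling_rate(pixel_um, focal_mm):
         """Image scale in arcsec/pixel: 206.265 * pixel size [um] / focal length [mm]."""
         return 206.265 * pixel_um / focal_mm

     def expected_fwhm(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=550):
         """Crude total star FWHM: seeing, guiding (RMS * 2.355) and diffraction in quadrature."""
         airy_fwhm = 1.02 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3) * 206265.0
         return math.sqrt(seeing_fwhm ** 2 + (2.355 * guide_rms) ** 2 + airy_fwhm ** 2)

     scopes = [("ED66", 400, 66), ("130PDS", 590, 130), ("150PL", 1200, 150)]
     for name, focal, aperture in scopes:
         scale = sampling_rate(3.8, focal)
         fwhm = expected_fwhm(1.5, 1.0, aperture)   # 1.5" seeing, 1" RMS guiding
         print(f"{name}: {scale:.2f}\"/px, expected FWHM ~{fwhm:.1f}\", "
               f"optimal sampling ~{fwhm / 1.6:.2f}\"/px")
     ```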
  7. What scope(s) do you plan on using it on? Or rather, why do you feel it will undersample?
  8. I think that, depending on your latitude, two cuts and welds should be enough to modify the pier. At least that is what I was thinking, as I'm at 45 degrees (give or take a few minutes) - I need one cut at 45 degrees and one at 90 - that should be an easy thing to do, I reckon.
  9. Would it depend on the difference between stellar magnitude and sky surface brightness? (Something tells me that it should, but I have not given it much thought.)
  10. This is the one I'm not comfortable with. Both due to the nature of light (that it comes in photons) and due to the fact that there is no continuous measuring device that will give the exact x/y position of a photon hit (nor could there be - uncertainty principle). For me it is much easier to consider a finite pixel size (as one more variable), and it also leads to correct results in terms of calculated / obtained SNR in a given time.
  11. Not sure if there is a simple way to say it and still be 100% correct. You can say that it alters the F/ratio, as it creates a light beam that has the properties of the altered F/ratio - the system behaves as if it had a shorter focal length with the same aperture. The light beam converges as if it came from a shorter focal length and the same size aperture - in that sense it "changes the F/ratio" - whatever you put after the focal reducer "experiences" a different F/ratio. On the other hand, saying that the F/ratio of the scope has changed is not correct either - the scope still has a certain focal length and a certain aperture. Similarly, you don't change the size of the sensing element - it is still the same pixel size in physical terms. We might say that the "mapping" between angular size and physical size of the pixel has changed. A similar thing happens with binning - the physical pixel size stays the same, but the "logical", or perhaps better termed "effective", pixel size is increased. In a similar way a reducer alters the "effective" focal length and "effective" F/ratio of the system (a small sketch of these "effective" quantities is below). The best thing to do when you have this requirement of capturing a single star and doing measurements is to make sure the star is covered by a single pixel. The star profile in the focal plane plays an important role here, and in principle the best results are when you have a single read noise "dose" per star intensity readout - that would mean placing it on a single pixel. The second important thing is of course aperture - you want as many photons as possible coming from the star in a given time to be captured.
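     Here is a tiny sketch of those "effective" quantities, just to show how the bookkeeping works. The numbers in the example call (80mm F/6 scope, x0.79 reducer, 3.8 um pixels) are illustrative values only.
     ```python
     def effective_system(aperture_mm, focal_mm, reducer=1.0, pixel_um=3.8, binning=1):
         eff_focal = focal_mm * reducer            # reducer shortens the effective focal length
         eff_fratio = eff_focal / aperture_mm      # the aperture itself is unchanged
         eff_pixel = pixel_um * binning            # binning enlarges the "effective" pixel
         scale = 206.265 * eff_pixel / eff_focal   # arcseconds per (effective) pixel
         return eff_focal, eff_fratio, scale

     print(effective_system(80, 480, reducer=0.79, pixel_um=3.8, binning=1))
     # -> (379.2, ~4.74, ~2.07): same scope and sensor, different "effective" system
     ```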
  12. Complete nonsense .... have a look at this thread - the same text is appearing all over the internet as far as I can tell ... The comments in the thread should shed some light on the matter ...
  13. I like the hint about the choice of units. The train is one bridge length away from the start of the bridge, and it is moving at 18 km/h (x3 the speed of the man). As for the F/ratio myth - whether it is in fact a myth or the truth depends on the formulation of that F/ratio myth. If we examine the statement: "Having two scopes of different F/ratio, there is a relationship between the corresponding times for attaining a given SNR which depends solely on the F/ratio of said scopes", then I have to say - it is a myth.
  14. Oh, but it is a very different thing. In principle they are the same thing, but normal / daytime photography is operating "subsonic" while AP is operating "supersonic" - two different regimes of the same thing (air drag). With daytime / normal photography you have the following "restrictions":
      - Target shot noise is by far the most dominant type of noise, or there is plenty of signal.
      - Dynamic range of the target is often quite small.
      - You are dealing with relatively short exposures.
      - You are capturing the image in a single exposure (there are of course exceptions - like depth of field stacking, ultra fast exposures and such, but let's not consider those "normal" photography - again a special regime).
      With AP things are quite different:
      - Target shot noise is often behind other noise sources (if it's not, then one is very lucky to have pristine skies, almost no read and thermal noise and a relatively bright target) - there is minimal signal per exposure (often less than one photon per pixel per exposure).
      - Dynamic range is a couple of orders of magnitude larger than in normal photography.
      - You are dealing with a larger number of long exposures.
      So you are quite right to say that in principle they are the same thing - sensor / lens / light - but due to the different regimes of operation, they are practically very different.
  15. There is no need to define F/ratio in any terms other than how it is defined - as the ratio of focal length to aperture. If you put a physical constraint on the "photon counter" - in terms of its absolute size always being the same and finite - then yes, the F/ratio of two optical systems will indeed define their respective speed in terms of the average number of detected photons per unit time. I'm just saying that a special case can't be used as the basis for understanding the whole class of phenomena. This comes from daytime photography and the use of lenses. It works in that domain because of two important facts - target shot noise is the dominant source of noise (plenty of light bouncing off the object being photographed) and, when working with a single camera and exchanging lenses, you are working with fixed size pixels. Daytime photography does not know the concept of binning to change pixel size. When I first started thinking about AP I quickly realized that F/ratio and its use is very limited in determining the performance of an imaging system. I fiddled around with the math and concluded that "aperture at resolution" is a much more natural way of thinking about speed of acquisition in AP. I tried to formulate a "one number says it all" approach, but while doable - it is too complicated for quick mental comparison (square roots and such). After all of that, I concluded that aperture at resolution is the better approach because it lends itself to a more or less natural way of choosing a scope / camera combination. Instead of thinking of F/ratio - speed of the scope - one should go the other way around and follow steps similar to these:
      1. I've got a mount that is capable of X precision in tracking / guiding.
      2. My average or good seeing (depending on what you base your decision on) is Y.
      3. From 1 and 2, I can conclude that my average FWHM will be Z and therefore I need to sample at resolution S to capture W% of the data available (in terms of resolution).
      4. I have a choice of cameras .... (and then you think of QE, sensor size, technology, pixel size....) and a choice of scopes (regardless of F/ratio) - which combination (including binning) will offer me the most aperture at target resolution S and still fit the other criteria (mount capacity, budget, preferred optical design, ....)?
  16. No, binning provides you with exactly the signal improvement that the number of pixels would suggest. Add 4 pixels together and you will have x4 signal strength (for uniform signal over those four pixels). It is the SNR that doubles in that case - same as when stacking 4 images - SNR improves by a factor of SQRT(number of stacked samples). The sensitive area of the pixel is "included" in the QE of the sensor, so in principle you don't have to worry about that - the total area of 2x2 in relation to the total sensitive area will be the same as with a single pixel, so binning does not change QE compared to a single pixel (it neither increases nor decreases it). All of this is of course for extended sources - surface brightness. With point sources like stars the same applies, except that you need to factor in the PSF of the optical system (there too, a larger aperture has an edge as it will provide a tighter PSF for the same conditions versus a smaller aperture). We don't need to compare different CCDs. Here is another example: you have two 6" scopes. One is F/8 and one is F/5. Use the same sensor on both of them. The only difference is that you bin x2 on the F/8 vs no binning on the F/5 - which one will be faster? The F/8 one will be faster. In this case we have the same aperture but different resolution (pixel gathering area). F/8 binned x2 will be the same speed as F/4, if both scopes have the same aperture. (A small sketch of that comparison is below.)
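     A relative-speed sketch of that comparison, where "speed" is taken as photons per (binned) pixel per unit time, proportional to aperture_area * (effective_pixel / focal_length)^2. The 3.8 um pixel size is just an assumed value - only the ratios matter.
     ```python
     def pixel_photon_rate(aperture_mm, focal_mm, pixel_um, binning=1):
         aperture_area = aperture_mm ** 2               # proportional to light gathered
         pixel_angle = (pixel_um * binning) / focal_mm  # proportional to sky patch per pixel (1D)
         return aperture_area * pixel_angle ** 2        # relative units only

     f5         = pixel_photon_rate(150, 750,  3.8, binning=1)   # 6" F/5, unbinned
     f8_binned  = pixel_photon_rate(150, 1200, 3.8, binning=2)   # 6" F/8, binned x2
     f4_virtual = pixel_photon_rate(150, 600,  3.8, binning=1)   # 6" F/4, for comparison

     print(f8_binned / f5)          # ~1.56: binned F/8 collects more per pixel than F/5
     print(f8_binned / f4_virtual)  # ~1.00: binned F/8 behaves like F/4 at the same aperture
     ```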
  17. In that PDF that I linked, there are a couple of projects that are worth looking up to see what is involved. One of them is meteor detection. Another interesting one is the Itty-Bitty radio Telescope (IBT) - that one is just a satellite dish and a receiver (satellite detector). Maybe that second one could be an interesting starting point, as you can "upgrade" it bit by bit. It is just a dish connected to a satellite detector - one that "beeps" when you have a signal. So you point the dish at something and the detector beeps - you have a detection of a radio source! Most often you start by finding the Sun this way. Possible upgrade paths would be: mounting the dish on an EQ mount so you can use the handset or a computer to point it at the wanted location. Another one would be connecting the satellite detector to a computer via some sort of sampling device - like a simple external (USB) audio card that can sample its output - so instead of listening to the "beep" of the detector you record it via the computer (a minimal recording sketch is below). Look here for resources: https://opensourceradiotelescopes.org/itty-bitty-radio-telescope/
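     As a rough idea of the "record it via computer" upgrade, here is a minimal sketch that reads the detector's audio output through a sound card and prints a signal level once per second. It assumes the detector's audio is fed into the card's line-in and uses the third-party sounddevice package; the sample rate, chunk length and the simple RMS "strength" measure are all just illustrative choices.
     ```python
     import numpy as np
     import sounddevice as sd

     RATE = 44100           # audio samples per second
     CHUNK_SECONDS = 1.0    # report one level per second

     def record_strength(duration_s=60):
         """Record from the default input and print an RMS level for each chunk."""
         for i in range(int(duration_s / CHUNK_SECONDS)):
             chunk = sd.rec(int(RATE * CHUNK_SECONDS), samplerate=RATE, channels=1)
             sd.wait()                                   # block until the chunk is captured
             rms = float(np.sqrt(np.mean(chunk ** 2)))   # crude proxy for received power
             print(f"{i:4d}  RMS = {rms:.5f}")

     if __name__ == "__main__":
         record_strength()
     ```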
  18. Do you just plan to get a basic understanding of the matter, or are you interested in doing some radio astronomy? Radio astronomy is one of those fields of astronomy where you can't simply have a "hands on" approach like with visual observing. You need at least some level of technical background to be able to understand what you are doing. The results of radio observations are often just measurements of sorts - graphs and charts - rather than anything you can instantly see or take a photo of. You can produce images, but it is rather complicated, and as an amateur, with any sort of gear that an amateur can house and operate, it will be a very low resolution type of image, so you can in principle only do "wide field" images - like those of the Milky Way.
  19. One does not need to change the camera - you can bin your pixels to get a larger collecting surface / lower sampling rate. You don't have to be doubtful about it - do the math and you will see. Here is a very simple example that is enough to show this. Let's say that we have two scopes - one with 8" and one with 6" aperture. We match them with sensors / pixel sizes so that both give 1"/px. This means that one pixel will gather all the photons from a 1"x1" patch of sky that were collected by the respective aperture and focused on that pixel (all photons falling on the aperture as parallel rays are focused onto that pixel, so all photons from the 1"x1" region that the aperture collects end up on that given pixel and are accumulated as signal). The next step is a fairly easy one - 8" will collect more photons than 6" -> a pixel on the camera attached to the 8" scope will gather more photons and have a stronger signal than a pixel on the camera attached to the 6" scope. Better SNR in the same time = same SNR in shorter time. This is why I always say - don't think in terms of F/ratio or speed of the scope, but rather think in terms of "aperture at resolution". More aperture at the same resolution will be faster. That is why larger scopes are better for DSO imaging - or rather that is the primary reason - if you can pair them up with a sensor that gives you the wanted / reasonable resolution (possible binning included in the consideration).
  20. How simple does it need to be? A quick search gave this, and it looks like a fairly basic level: http://www.radio-astronomy.org/pdf/sara-beginner-booklet.pdf I'm no expert in Radio Astronomy (far from it), but it is an interesting topic, and I've done some research (and thinking about it) myself, so I might be able to answer some basic questions.
  21. Simply put - this is wrong. Many people forget pixel size - or assume an unchanging pixel size. F/ratio does not determine the "speed" of the setup - meaning it does not determine how long you will need to expose. Imagine an 8" F/10 scope and a 6" F/5 scope for comparison. The first one is used with a ~11um pixel camera. The second one is used with a ~4um pixel camera. I added the ~ sign (meaning "about") because I wanted to emphasize that in both cases you have around a 1.15"/px sampling rate. Now you have 8" of light gathering vs 6" of light gathering - both mapping 1.15" of sky to the size of a pixel. The 8" will win on speed, even though F/10 is a "slower" scope than F/5, because it is the larger aperture (the sketch below runs the numbers). I agree that you will have a wider potential field with the shorter focal length, within the limits of the optical design (fully illuminated and corrected circle).
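     Running the numbers for that comparison - sampling rate is 206.265 * pixel / focal_length, and relative "speed" is taken here as photons per pixel per unit time, proportional to aperture_area * (pixel / focal_length)^2. This is just my shorthand for the argument above, not a full SNR model.
     ```python
     def setup(aperture_in, fratio, pixel_um):
         aperture_mm = aperture_in * 25.4
         focal_mm = aperture_mm * fratio
         scale = 206.265 * pixel_um / focal_mm                  # arcsec per pixel
         speed = aperture_mm ** 2 * (pixel_um / focal_mm) ** 2  # relative units
         return scale, speed

     scale_8, speed_8 = setup(8, 10, 11)   # 8" F/10 with ~11 um pixels
     scale_6, speed_6 = setup(6, 5, 4)     # 6" F/5 with ~4 um pixels

     print(f'8" F/10: {scale_8:.2f}"/px   6" F/5: {scale_6:.2f}"/px')  # both around 1.1"/px
     print(f'speed ratio 8" / 6": {speed_8 / speed_6:.2f}')            # the 8" comes out faster
     ```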
  22. Ah sorry, this is even better:
  23. Sorry to say - that article is full of BS (pardon my French). I'm not saying that to deter you from imaging M31 - I'm just saying it for the greater good of readers. Due to the motions of the two galaxies and the motion of our solar system inside our galaxy (orbiting the galactic core), the actual "distance" is changing every second (if we take the distance to be the distance between Earth and a single point in Andromeda - like the galaxy core or the center of mass or whatever). The rate of change is such that you won't notice anything even on vast time scales, because the total distance is huge and any small change will be a very small fraction of the total distance. M31 is about 2.5-3 degrees across, so it is about 5-6 times the full moon. I do encourage people to read the article for a laugh, as some seriously funny language constructs were used, like this:
  24. There could be a number of reasons for your observations - I'm not saying this is certainly the case, but I'll list them anyway:
      - Large stars: most people don't realize that when using OSC cameras they are in fact sampling at a two times lower rate than the pixel size would suggest. After that, interpolation of sorts is used to "fill in the gaps", unless super pixel debayering is used or, more exotic, the sampling matrices are split into separate subs (a sketch of super pixel debayering is below). This means that you are artificially "increasing resolution" (trying to present the image at twice the sampling rate by effectively rescaling the subs). That, coupled with the fact that OSC sensors have smaller pixels, which in itself can lead to oversampling, means that stars will look "bigger" when viewed 1:1. Not a CMOS fault - but rather due to the way the sensors are used.
      - I would also throw in the fact that CMOS sensors are cheaper than CCD, and because of this more people who can't afford precision mounts opt for such cameras - but that might be a moot point given that you have experience with hosted gear, and I suspect the mounts used in such setups won't be average "consumer" level.
      On the matter of color - there is a big difference in how mono + LRGB and OSC handle color. Just look at the filter response curves. This however does not mean that one of them is "true" color. Both can and should be color calibrated and a suitable transform found to represent true star color, as neither will do that "out of the box". What I suspect is happening is that you are used to the way mono + LRGB renders color (without calibration) and it has become the "de facto" standard for how images should look. If you take the OSC color rendition, it will look wrong (in comparison to what you are used to). It is in fact the case that neither renders proper color without color calibration. If you do color calibration on both, you should get the same colors (or very close - depending on the camera/filter gamut compared to the display device gamut).
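     For reference, here is a minimal sketch of super pixel debayering, assuming an RGGB Bayer pattern stored as a plain 2D numpy array. Each 2x2 Bayer cell becomes one RGB pixel (the two greens averaged), so the output has half the width and height - i.e. the true, lower sampling rate instead of an interpolated one.
     ```python
     import numpy as np

     def super_pixel_debayer(raw):
         """raw: 2D array with an RGGB pattern; returns an (H/2, W/2, 3) float array."""
         r  = raw[0::2, 0::2].astype(np.float64)
         g1 = raw[0::2, 1::2].astype(np.float64)
         g2 = raw[1::2, 0::2].astype(np.float64)
         b  = raw[1::2, 1::2].astype(np.float64)
         return np.dstack([r, (g1 + g2) / 2.0, b])

     # e.g. a 4656 x 3520 OSC sub becomes a 2328 x 1760 RGB image at half the nominal sampling rate
     ```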
  25. Could be due to some of the issues I mentioned - the image looks like it was denoised a bit? At 1.18"/px it is probably a bit oversampled as well. In general that is not a resolution that is oversampling "by default" - up to 1"/px can be "pulled off", but you need aperture for that. With only 65mm of aperture, star FWHM is going to be at least 3" - 3.5". That is more in 2"/px territory. Do you happen to have stats on the FWHM in your subs?