Everything posted by vlaiv

  1. I have an idea. Looking at the image can be misleading. We don't know what sort of color processing has been done on it - was color management involved, was the image boosted in saturation, and so on. Simple intensity measurements in the R, G and B channels of the recorded raw data, together with the camera specification and filter type, would be a good starting point to see what color the object actually has, and to try to determine something about the spectrum of the light it gives off. That would give us some idea of what the object might be, or at least a clue about the nature of the light it is emitting.
  2. Until you get the EQ5 - which you can motorize / GoTo-enable in DIY fashion - look at this project: https://github.com/TCWORLD/AstroEQ Here is another tip for reducing star trails from RA periodic error - shoot targets that are at high DEC. RA error is most pronounced when you are tracking targets at the celestial equator, and it is minimized when you track a target near the pole (in fact, when tracking exactly at the pole the FOV does not move at all - it only rotates). Choosing targets at high declination will lessen the effect of trailing, as the sketch below illustrates. Try imaging M81/M82 for example and you may even be able to do 1 minute subs without trailing. If not, at least you will be able to keep most of the 30 second subs.
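A rough illustration of why high-DEC targets trail less: the on-sky drift caused by an RA tracking error scales with cos(declination). This is a minimal sketch only - the 20" periodic error value is an assumed example, not a measured figure:

```python
import numpy as np

ra_error_arcsec = 20.0  # assumed peak-to-peak periodic error, example value only

for dec_deg in (0, 30, 60, 70, 85):
    # The same RA error produces a smaller trail on the sky at higher declination.
    drift = ra_error_arcsec * np.cos(np.radians(dec_deg))
    print(f"DEC {dec_deg:2d} deg: on-sky trail ~ {drift:4.1f} arcsec")
```

At the declination of M81/M82 (roughly +69 degrees) the trail is only about a third of what it would be on the celestial equator.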
  3. Of course I don't mind. I've wanted to do something similar myself for quite some time now, but I never seem to find the spare time. I wanted to make a website / blog kind of thing and do my best to explain and demonstrate things related to astronomy (mostly imaging / processing / theory). Hopefully I'll get it started sometime soon.
  4. Could it be that the green thing is an artifact of some sort? I mean, the object is real, but the color ... Here is an image that I found: Something similar happened to me when I imaged M101 - I also had a bluish green blob: But in some other images it is not green.
  5. Could you show an example? Are all subs affected, or have you seen some with dust shadows and some without? (Maybe check - that would be a sign of the dust moving.)
  6. Ok, I see no problem here whatsoever. There is a slight issue in that red is quite attenuated compared to the other two colors - which means that your flat source produces really cool light - bluish white - possibly an LED flat panel? But that can be fixed in processing with proper color balance techniques. Here are the results of the examination - first the flat split into components and stretched: Everything looks fine with this flat apart from the weak red channel, but as we said, that is due to the light source. Notice that there are no dust particles in the flat - so if we find some in the light, that will be a problem. Here is the light without any processing: I don't see any dust shadows in it, but I do see vignetting, and that vignetting really reminds me of the flat above - stronger towards the right side of the image. Let's see what the calibrated sub looks like: From what I can see here, the vignetting is gone and there is just a background gradient in this sub, going from left to right. Once we remove that linear gradient: That looks rather good?
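For reference, a minimal sketch of the calibration step implied above (standard dark/flat reduction, with numpy arrays standing in for the FITS frames; this is not necessarily the exact routine the stacking software uses):

```python
import numpy as np

def calibrate(light, master_dark, master_flat, master_flat_dark):
    flat = master_flat - master_flat_dark   # remove the flat's own dark signal
    flat = flat / np.mean(flat)             # normalize the flat to a mean of 1
    return (light - master_dark) / flat     # dark-subtract, then divide out vignetting/dust
```

The leftover left-to-right gradient is then a separate step - for example fitting a plane to the background and subtracting it.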
  7. That looks fine. Can you post a single FITS of the flat for examination, and maybe one of each - light sub, dark sub, flat dark sub - so we can see if there is an issue with calibration?
  8. I'm not Olly, but I can give it a go. Place a bright star in the center of the frame (hopefully the center of the frame is aligned with the focuser's central axis, so rotating the camera will rotate the FOV around this same point). Using only the RA controls, move the star off center some distance (more is better for accuracy, but make sure it stays in the FOV). Do frame and focus subs repeatedly so you can see what is going on. If the star remains on the horizontal line through the center of the FOV - the X axis - you are done; if not, rotate the camera until the star gets to this horizontal line. I'll do some diagrams to explain it better. Step 1 - put the star in the center: Step 2 - move the mount in RA until either of two things happens: 2.a - the star is not on the X axis: or 2.b - the star is on the X axis: If 2.b happens - star on the horizontal line - you are done, the RA axis is aligned with the X axis of the image. If 2.a happens, rotate the camera, checking the star position, until it comes to the horizontal line like this: You can always check whether your camera is oriented RA/horizontal by putting a bright star in the center of the frame and then moving only in RA or only in DEC. If you move in RA, the star should stay on the horizontal line - moving only in X. If you move in DEC, the star should stay on the vertical line - moving only in Y. Hope this helps.
  9. Do you by any chance know how it is calculated? That might shed some light on whether it is useful and under what circumstances. For example, it could be calculated like this: take all pixels that are dark enough (sigma-reject all bright pixels) and calculate the standard deviation of those pixels - that is a good approximation of the background noise if there is enough empty background (it will not work well if much of the image is covered by nebulosity). Then take all the other pixels, calculate their average value and subtract the average of the first group (the LP level) - that gets you an "average" signal. You could then declare that SNR is average signal / background noise, or something similar.
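Just to make that guess concrete, here is a minimal sketch of such a heuristic (my own speculation about how a single-number "image SNR" tool might work, assuming a 2D linear image held as a numpy array):

```python
import numpy as np

def rough_image_snr(img, sigma=3.0):
    med = np.median(img)
    mad_std = 1.4826 * np.median(np.abs(img - med))  # robust estimate of spread
    threshold = med + sigma * mad_std
    background = img[img < threshold]                # "dark enough" pixels
    bright = img[img >= threshold]                   # everything brighter
    noise = np.std(background)                       # background noise estimate
    signal = np.mean(bright) - np.mean(background)   # average signal above the LP level
    return signal / noise
```

As noted above, this breaks down as soon as nebulosity covers most of the frame, since the "background" pixels are then not background at all.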
  10. Provided that FWHM is reported in pixels - yes. In this situation I would actually bin x2 and then check FWHM and maybe scale a bit further, but I don't think you would need any additional scaling as the difference would be minimal. Do the binning while the data is still linear, prior to processing. If you have 3.5" FWHM, then you should aim at a sampling rate of about 2.19"/px.
  11. I'm very skeptical of any tool that takes an image and reports a single SNR value. Such a thing does not make sense. Every pixel has its own signal and its own noise, and every signal in the image has an SNR value associated with it.
In principle there is no way to find the exact SNR value of any pixel in the image. There is a way to get a good SNR approximation if there is not much change in conditions while the image is shot - meaning SNR remains roughly the same over the course of the evening. In that case you can take the average of the values for each pixel as the signal - that is what regular stacking does - and then take the standard deviation of the stack for each pixel and divide by the square root of the number of subs to get the final noise of the stack (or just the standard deviation, without dividing by the square root of the number of subs, to get the noise per sub). This works if the subs are the same - but they almost never are. As the target moves across the sky during the course of the evening, it passes through parts of the sky with different levels of LP and sits at different air mass, meaning it is attenuated by a different amount - more or less bright. The first changes the total amount of noise, the second changes both the signal and the noise - so each sub will have a different level of noise once we equalize the signal. There are methods to equalize the signal and measure the noise - but not for a single sub, rather per intensity range and for a sub as part of an ensemble of subs.
Back to the original question, and why there is no single SNR for an image. Imagine you have two galaxies in the image - one bright with signal 100 and one faint with signal 10, and background noise with value 2. SNR of the first galaxy will be 100/2 = 50, SNR of the second galaxy will be 10/2 = 5, and SNR of the background sky will be 0/2 = 0. Which one is the SNR of the image? 50, 5 or 0? Now imagine you have background LP, so there is an unwanted signal component over the whole image - let it be 2 as well. The signal in the bright galaxy is now 102, the noise is 2, so SNR is now 51; similarly the faint galaxy's SNR is now 6 and the background's SNR is 1. But we did not change the signals of the galaxies, nor did we change the level of noise (although background LP would change the noise level - I'm just making a point here) - suddenly we have different SNR values just because there is some offset in the image values.
A true SNR value can only be associated with each pixel, and it can be approximately calculated once you:
- treat each sub as part of an ensemble of subs
- equalize the signal levels in each sub
- account for the different level of noise in different subs (this is the really hard part)
- remove the background LP signal (this part is also quite hard).
Weighted stacking by a single SNR value is not really the best approach to handle things.
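To make the "average as signal, scatter as noise" idea above concrete, here is a minimal per-pixel sketch, assuming an already-aligned stack of comparable subs held as a 3D numpy array (n_subs x height x width):

```python
import numpy as np

def per_pixel_snr(stack):
    n = stack.shape[0]
    signal = np.mean(stack, axis=0)               # what plain average stacking produces
    noise_per_sub = np.std(stack, axis=0)         # per-pixel noise of a single sub
    noise_of_stack = noise_per_sub / np.sqrt(n)   # noise of the average of n subs
    return signal / noise_of_stack                # a 2D SNR map, not a single number
```

The result is a map, which is exactly the point of the post: reducing it to one number means throwing away the distinction between the bright galaxy, the faint galaxy and the empty background.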
  12. I'm not sure who you are referring to, but if that response was directed towards me, I was saying the following: if you want a mount that you will use for both visual in alt-az mode and imaging in EQ mode, both are equally important to you, you want something more lightweight and you have a limited budget - then it is a good choice. If you don't have a limited budget and you don't mind the weight, the AZEQ6 is a better mount for both imaging and observing - well, primarily for imaging. If you are limited by your budget and imaging is very important to you, but visual not so much, then look at the HEQ5, and understand that the stock version is not going to perform to the best of its ability, but once you tune and mod it, it will perform better than the AZEQ5 for imaging. From what I gathered, I think you will be happy with the AZEQ5 as you want to both observe and do some casual imaging - this is why I linked that thread where an AZEQ5 was tuned a bit. I also asked in that thread what sort of guiding results could be expected from the AZEQ5 - and yes, I was right about that - about 1" RMS or a bit less. This is in the range of a stock HEQ5, so guide performance for casual imaging will be quite good (although that also means you will need to tweak your mount, as most SkyWatcher owners do at some point, to bring the best out of it).
  13. @FMA - there is an interesting thread on AZEQ5 tuning and touch-up that might interest you:
  14. What sort of guide performance do you get after tuning (in arc seconds RMS), and what guide setup are you using?
  15. Indeed - when I say there are different methods of resizing, I mean interpolation. Binning is a sort of interpolation algorithm, but it works only with integer downsampling factors. In fact, 2x2 binning is mathematically equivalent to a 50% size reduction plus a (0.5px, 0.5px) translation using bilinear interpolation (the average of 4 adjacent pixels is the same as linearly interpolating at the corner point joining all 4 pixels, if pixels are considered to be squares). I mentioned one advantage of binning - no correlation is introduced. I don't know how much you know about interpolation, but different interpolation algorithms use more than just the surrounding pixels to calculate a value when resampling - bilinear uses the 4 nearest pixels, but bicubic uses 16 pixels around the wanted point (a 4x4 matrix, as you need 4 values to fix a cubic function, 3 values to fix a quadratic and only 2 to fix a linear function). There are of course other interpolation methods, and each of them will have different properties with respect to detail in the image and impact on noise.
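For reference, a minimal software 2x2 bin looks like this (a sketch with numpy; real tools may sum the four pixels instead of averaging them, which only changes the overall scale):

```python
import numpy as np

def bin2x2(img):
    h, w = img.shape
    h, w = h - h % 2, w - w % 2                        # crop to even dimensions
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))                    # each output pixel = mean of a 2x2 block
```

Since each output pixel is the average of 4 independent input pixels, uncorrelated noise drops by sqrt(4) = 2 - the predictable improvement mentioned above.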
  16. I don't have much info on that mount. Most of the info I have on mounts comes from reading the specs and, more importantly, first-hand experience, and I don't remember reading any first-hand experience with that mount from people doing AP. That does not mean it is not a good mount. I know people owning its older brother, the AZEQ6, who are perfectly happy with their mounts. AZEQ6 performance is similar to that of the EQ6 - maybe a bit better than the old EQ6 and in line with the EQ6-R, because both the AZEQ6 and EQ6-R have a belt drive. The AZEQ5 also has a belt drive, so that is a plus. I read once that someone found the lead that connects the RA and DEC axes to be awkward. This one: The fact that this mount has both AZ and EQ operation does make it preferable in a rather specific situation - where you want to use it as both an imaging and a visual mount (I'll address this point later with some additional info).
As for imaging performance, I can only guess what it will be like based on the numbers. On this page there is information on the internals of this mount: http://eq-mod.sourceforge.net/prerequisites.html One thing I'm slightly concerned about is the 0.25" stepper resolution - it is almost half as fine as an HEQ5 class mount and only a bit finer than an EQ5 class mount. With stepper motors this limits how well the mount can be guided. I think that realistically you can expect about 1" RMS guiding with it, maybe down to 0.8" RMS on a good night - which in turn means the maximum resolution you can work at with such a mount is about 1.5"/px - 2.0"/px. That is fine for a smaller scope - up to roughly 700mm focal length with sensibly sized pixels (or some binning to get you to that resolution).
It looks like this mount has PPEC, and that is good. My HEQ5 does not have this feature and I use VS-PEC in EQMOD - which means I can't really use my mount for both AP and visual, because I would like to use it for visual with only the hand controller, without dragging a laptop outside. But that messes up my periodic error correction, since it is not permanent, and I need to park to the exact same position each time I finish using the mount, otherwise PEC goes out of sync (no encoders on the mount). The AZEQ5 has encoders and PPEC, so it is well suited to both roles (not at the exact same time of course, but you know what I mean - you don't need to do anything special if you image one night and observe in AZ configuration the next).
If you want better precision for imaging then go with the HEQ5, EQ6-R or AZEQ6 - all of which are heavier mounts but offer better precision for AP. Mind you, a stock HEQ5 / EQ6 will vary greatly from sample to sample in performance, and only once you tune and mod it will it deliver its best. I stripped my HEQ5, changed all the bearings, did the belt mod, replaced the saddle plate and changed the tripod, and now it guides at 0.5" RMS. So if you are going to keep things stock, I'm not sure the price premium (of the EQ6 class) and the weight are worth it if you have your heart set on the AZEQ5. Hope this helps.
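The "up to ~700mm with sensibly sized pixels" figure follows from the standard plate-scale formula. A quick sketch (the 3.75 micron pixel size is just an assumed, common value):

```python
def sampling_rate(pixel_size_um, focal_length_mm):
    # Standard plate-scale formula: arc seconds per pixel.
    return 206.265 * pixel_size_um / focal_length_mm

print(sampling_rate(3.75, 700))   # ~1.1 "/px unbinned, ~2.2 "/px binned x2
```

With ~1" RMS guiding you would then work binned, or pick a camera/scope combination that lands in the 1.5"/px - 2.0"/px range directly.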
  17. This is not an easy topic, and in fact many will point out (me included) that the mount is the most important piece of kit when it comes to imaging. In the budget category - and all the mounts you've listed are budget category mounts - we can roughly say there are two important things you need to pay attention to:
- weight capacity
- accuracy of tracking
Btw, I said budget mounts, but that does not mean all of these are poor mounts - a lot can be accomplished with them. I have an HEQ5, for example, and it serves me very well within its limits.
Weight capacity - you need to keep at least 50% headroom with respect to all the imaging gear placed on the mount (this is just a recommendation - the more stuff you put on the mount, the greater the chance something will not work as it should: the mount will not track properly, wind will be more of an issue, and so on). My HEQ5 is rated at 15-18kg (depending on the source) and I've put as much as 15kg on it for imaging and it worked, but I did not feel comfortable with that much weight on it. Nowadays I limit the weight on the mount to about 10-11kg (maybe 12kg, but no more), so keeping about 50% headroom is very sound advice.
Accuracy of tracking - most of these budget mounts will have issues unguided. Most suffer from periodic error and benefit greatly from guiding. The longer the focal length (or, to be more specific, the higher the sampling rate you use), the shorter your exposures will need to be to avoid star trailing. The exception to the above are mounts with encoders, and in this price range, as far as I know, only iOptron offers encoders. Encoders are really expensive and it is much cheaper to get a guiding kit. Guiding also solves some issues that encoders can't (at least not without very sophisticated software and building of a sky model - something done in permanent setups, as it is time consuming to do each session).
EQ3 / EQ35 are really very basic mounts that can hold a camera + lens and very small scopes. You will be very limited in exposure length with those mounts and will benefit greatly from guiding. EQ5 is a step up from those two in terms of weight capacity and performance, but the same holds - smaller / lighter scopes (up to say 6-7kg of total weight; btw, when I say total weight that does not include counterweights, it means the scope, camera and any other gear attached to it) and, if possible, guide it. I would personally avoid the AVX mount as I've read that it suffers from some issues that make it a less than desirable imaging platform. Take this with a grain of salt as I've never even seen one in person, let alone used one. iOptron mounts are said to be good, and I've read many excellent reports on their performance. I would personally probably go for an iOptron with encoders (EC model) if I didn't plan to guide. Since you are in Texas, you'll probably get better prices on iOptron than we get here in Europe, so that is a plus. If you can, get the iOptron CEM40EC; it is over your budget at about $3000, but it is an HEQ5 class mount with 18kg payload, it has encoders, and it is lighter. The CEM25EC is an EQ5 class mount - so keep the weight on it to 8-9kg (it has a 13kg load capacity) - and again it has encoders, which you want if you don't want to guide.
That is about mounts; now about scopes and cameras. The ASI120MC is a very good planetary camera and a very good guide camera. You can use it for EAA/EEVA with the 130SLT, but that is about it. You can try imaging with it - I've done it and even managed some decent images - but that sensor is very small. If you are serious about imaging, you will need better imaging gear. Look into the Skywatcher ED80 + flattener / reducer, or the Skywatcher 130PDS newtonian with a coma corrector, plus a Canon DSLR. Something like a used Canon 450 or similar will be a very good option. Imaging is rather serious business if you want to do it right and there are many aspects to it, and the best thing you can do is a lot of research before you commit to particular gear. The book "Making Every Photon Count" is said to be very good for anyone planning to get into astro imaging. Of course, SGL is a place where you can read a lot about all the topics that interest you and ask questions, and hopefully get decent answers.
  18. I think you have all the gear you need to do this - it just takes a bit of fiddling with the data you capture in order to turn it into something useful. You have an ASI120MC, and I presume you have the small lens that comes with it - the all-sky lens? If not, it is really not that expensive to get one. With it you can create images like this: That gives you coverage of the whole sky from a certain vantage point. People sometimes use such images to create LP maps for their location - like this: You could mount such a system on your car and gather similar images from different locations, and then it is down to interpreting that data - you could do some sort of 3D visualization, as for each location you have the total LP coming from each direction - multiple such points would help build a model of LP in a certain volume of sky - and then you could try to simulate the ground lighting that produces such sky glow. This is advanced stuff, as you would need to model scattering in the atmosphere and the types of illumination from the ground, but even a simple set of images like the one above, with a map and the directions of the main LP sources, would be useful - the coordinates where those directions intersect could point to a particular LP source (a bit like triangulation of radio sources). For example, in the above image it is evident that the strongest LP comes from about 350° (which is not surprising, as that is the direction of my home town from the location where the image was taken, and it glows brightly) - in your images, each such bearing would be one line of the intersection on the map.
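A toy illustration of that triangulation idea (my own sketch, not from the post): given two observing sites on a flat local map and the compass bearing of the strongest glow seen from each, find where the two bearing lines cross. Site positions and bearings here are made-up example values.

```python
import numpy as np

def intersect_bearings(p1, bearing1_deg, p2, bearing2_deg):
    # Bearings measured clockwise from north, so the unit direction on an
    # east(x)/north(y) grid is (sin b, cos b).
    d1 = np.array([np.sin(np.radians(bearing1_deg)), np.cos(np.radians(bearing1_deg))])
    d2 = np.array([np.sin(np.radians(bearing2_deg)), np.cos(np.radians(bearing2_deg))])
    # Solve p1 + t*d1 = p2 + s*d2 for t and s.
    t, s = np.linalg.solve(np.column_stack((d1, -d2)), np.array(p2, float) - np.array(p1, float))
    return np.array(p1, float) + t * d1

# Two sites 10 km apart, each reporting the bearing of the brightest glow:
print(intersect_bearings((0.0, 0.0), 10, (10.0, 0.0), 350))  # ~[5, 28] km: source north of both
```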
  19. That 1.6 is related to the Nyquist sampling theorem and is approximately the ratio between FWHM and the corresponding sampling rate. If FWHM is expressed in arc seconds, then dividing by 1.6 gives you the sampling resolution in arc seconds per pixel. If FWHM is expressed in pixels, then dividing by 1.6 gives you the "pixel size" - or the factor to reduce your image by. In the above example, if FWHM is 4px then the ideal pixel size is 2.5, so you need to reduce the image by a factor of 2.5. Binning is a form of resizing the image down and has some advantages over regular downsizing: it reduces noise better. Every downsizing of the image reduces the noise, but different resizing methods reduce noise by different amounts. Binning just adds adjacent pixels together and forms one large pixel out of a group of 2x2 or 3x3 pixels. For this reason it can only resize down by an integer factor - x2, x3, x4, etc. (it can't reduce the size by x2.3, for example). However, such downsizing has a very good effect on the noise - it also reduces the noise by x2, x3, x4, etc., and it is mathematically predictable in the way it changes the noise (no correlation between pixels, always an exact improvement), which is why it is the preferred way to do things in astronomy (on the science side of things).
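A quick sketch of the rule of thumb as it applies here (the 4px FWHM is the example from the post):

```python
measured_fwhm_px = 4.0                 # e.g. as reported by Deep Sky Stacker
resize_factor = measured_fwhm_px / 1.6
print(resize_factor)                   # 2.5 -> shrink the image about 2.5x

# One practical route: bin x2 (the nearest integer factor) while the data is
# still linear, which leaves only a small residual resize of 2.5 / 2 = 1.25.
```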
  20. Hi, you need not worry about display size - it will always be "fit the screen" by default, regardless of the resolution of the underlying image. Some people, me included, like to view an image at 1:1 / 100% zoom even if that means panning around, and that is possible if you open the image by itself in the browser - I usually right-click and choose "open in new tab". At first the image will be scaled to fit the window again, but a simple click on it will expand it to full size and you can pan around. I mention the above because of what I'm about to say next. There is a proper resolution for an astronomy image, or rather a range of proper resolutions. Sometimes, with modern cameras and larger telescopes (longer focal lengths), people make an image that is just too zoomed in when viewed at 100%. Stars are no longer small dots but rather "balls" suspended in space, and everything starts to look blurry at 100% zoom. It will still be a nice image when viewed scaled to screen size. Such images are worth downsampling to an appropriate size, as it will make viewing at 100% more enjoyable. Luckily there is a simple technique to determine the proper sampling rate for the image - measure the star FWHM (Deep Sky Stacker gives you this information for each frame) and divide it by 1.6. If your star FWHM is 4px then 4/1.6 = 2.5 - you need to resize your image to be 2.5x smaller. If you get a number that is less than one, just leave the image as is - don't enlarge it, as it will look blurry (enlarging won't bring missing detail back).
  21. Why does this bother you? That is perfectly normal for a sensor that is not cooled - you take dark frames, and after dark calibration you should not have any hot pixels remaining; if you do, dithering and sigma reject will sort them out. Btw, here is a screenshot of a piece of my dark frame (in fact a master made out of 16 subs cooled to -20C): Plenty of hot pixels there as well. It is not about how many hot pixels you have - it is about how hot they are and whether you can calibrate them out. Can you post a single raw dark sub, or better, maybe a couple, so we can try dark/dark calibration and see if these hot pixels calibrate out?
  22. Probably. I mean, as a DIY project - sure, if that sort of thing is a challenge you enjoy. What I'm trying to say is that you should not go for it based solely on the expectation that it will provide good tracking for such a large scope because it is a better design than an EQ platform. Unless you are very, very skilled at building things, odds are that you will end up with large PE, and only planetary and lucky DSO imaging will be possible anyway.
  23. Then have a look at this: It is not fork mounted and needs to be rewound for each use - you get about 30 minutes to 1 hour in one go - but it is much easier to make (and cheaper).
  24. An EEVA section exists here on SGL, but the activity is often called EAA - electronically assisted astronomy. EEVA stands for electronically enhanced visual astronomy (or similar, I'm not 100% sure). EEVA is a slightly broader term than EAA, as it includes night vision devices, whereas the original usage of the EAA term covered video cameras, and more recently CMOS cameras, with the recording viewed on a monitor / computer screen. It is very close to planetary style / lucky DSO imaging. In planetary style lucky imaging, exposures are very short - on the order of 5 to 10 ms. For lucky DSO imaging, exposures are kept short at about 1-2s, while EEVA / EAA, or "live stacking" as it is sometimes called, uses exposures longer than that but still shorter than regular DSO imaging - from a few seconds up to a dozen or so seconds (sometimes people use half-minute exposures). The point of EEVA is to watch the image of the target build up in (near) real time - so you observe for a few minutes (and stack up to 30-40 short exposures) and then move on to a different target. This requires goto and computer control to locate the next target, but in principle you can move the scope by hand.
In any case, do search for EQ platform, as that is going to be by far the easiest solution to either purchase or DIY. It will let you do most of the things mentioned here - planetary for certain, and lucky DSO imaging. Depending on the tracking accuracy of the EQ platform you might even be able to do EEVA. Another solution you might want to try is a friction drive instead of a worm. That one has both advantages and disadvantages compared to a worm.
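Since "live stacking" came up, here is a minimal sketch of the idea - a running average that updates as each new short exposure arrives (alignment/registration between frames is omitted for brevity, and the frames are assumed to be numpy arrays):

```python
import numpy as np

def live_stack(frames):
    stack = None
    for i, frame in enumerate(frames, start=1):
        frame = frame.astype(np.float64)
        # Incremental mean: new_mean = old_mean + (x - old_mean) / i
        stack = frame if stack is None else stack + (frame - stack) / i
        yield stack   # redisplay after every frame to watch the target build up
```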