Everything posted by vlaiv

  1. Reasons that come to mind: 1. holding the shape - rigidity (carbon fiber might bend under its own weight, for example). 2. ability to be polished to the needed degree (ease of figuring). 3. thermal stability (but also thermal expansion - some glass types are better at this than others, having an expansion coefficient close to zero).
  2. Not sure if that helps, but I forgot to mention - I have the same modification on my HEQ5 - Geoptic dual saddle plate (Vixen + Losmandy) and their puck as well. Like I've written - it balances with a 9kg scope + accessories ...
  3. Indeed, interesting question. I need to think a bit deeper on how alignment stars correct different issues, like mount not being level and with EQ - polar alignment error.
  4. That is kind of interesting, since your setup should not be that heavy. I've found that I need a third counterweight with a 12kg+ setup - which I would not recommend on an HEQ5 anyway. I did use it like that, but it was at its limits (8" F/6 tube from a dob + rings + camera + 60mm guide scope and guide camera). The Esprit 100 is listed at 6.3kg, the ASI1600 is a fairly lightweight camera, and the guide scope should also be light - I'm not sure how heavy your filter wheel is. I balance a ~9kg scope (RC 8" + 50mm M90 extension and upgraded focuser), ASI1600, filter drawer and OAG plus an additional 1kg weight in DEC with two 5kg counterweights without a problem - the counterweights don't even go down to the end of the shaft. Maybe you can optimize your setup a bit? In the same way that you can create imbalance by moving the counterweights up and down the bar - you can do the same on the scope side. If your guide scope is mounted on top of your imaging scope, can you change the distance between the two - can you get the guide scope closer to the imaging scope? Where is your filter wheel "pointing"? If the body of the filter wheel is rotated so that it is "up" or away from the mount - bring it down so that it "hangs" rather than being "up" (silly usage of directions, I know, but hopefully you get what I mean). Btw, as far as guiding is concerned, it is better to add more weight closer to the mount than to use an extension bar. The setup will be heavier and therefore more solid; it will indeed produce more friction on the bearings, but the moment of inertia of the arm will be smaller and the mount more responsive to guide commands (see the balance sketch after this post).
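
A rough numbers check of that last point: the counterweights only have to match the scope-side moment about the RA axis, but the moment of inertia grows with the square of the distance, so two counterweights close in beat one far out. The figures below are made up for illustration, not measurements of any particular rig:

```python
# Rough balance sketch (hypothetical numbers, not measurements of any specific setup):
# the balance condition is  scope_moment = cw_mass * cw_distance  about the RA axis,
# while the "sluggishness" mentioned above scales with the moment of inertia m * d^2.

def cw_distance(scope_moment_kg_m, cw_mass_kg):
    """Distance (m) at which a counterweight of cw_mass_kg balances the scope-side moment."""
    return scope_moment_kg_m / cw_mass_kg

scope_moment = 9.0 * 0.25                 # ~9 kg payload, centre of mass ~25 cm from RA axis
one_cw = cw_distance(scope_moment, 5.0)   # single 5 kg counterweight
two_cw = cw_distance(scope_moment, 10.0)  # two 5 kg counterweights

inertia_one = 5.0 * one_cw ** 2           # counterweight-side inertia, kg*m^2
inertia_two = 10.0 * two_cw ** 2

print(f"one 5 kg CW at {one_cw:.2f} m,  I = {inertia_one:.3f} kg*m^2")
print(f"two 5 kg CWs at {two_cw:.2f} m, I = {inertia_two:.3f} kg*m^2")
# Same balance, but the heavier/closer option roughly halves the counterweight-side
# inertia - which is the "more responsive to guide commands" point above.
```
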
  5. No idea how important 3-star vs 2-star alignment is for accuracy, nor whether there is any difference between Alt-Az and EQ in the number of stars needed for alignment, but maybe the intended usage of the scope could provide some clues? Alt-Az is not meant for imaging, hence there is no great need for alignment precision, as one won't be working with a small FOV and accurate finding / tracking of objects. Maybe the rationale is that for visual, people will use different eyepieces - wide field ones once the scope moves close to the target, and then adjust position with the hand set or higher magnification views? So it could be the case that 3-star alignment is offered on EQ mounts for moments when you need greater precision - like when working with a relatively small FOV (largish focal length and smaller sensor size) - so that you can still be on target after a slew?
  6. I'm in favor of using all the data, but for that to happen, you need some clever algorithms. I know that PI has a sub selector and that it can assign weight based on estimated noise, so maybe you should try that. On the other hand, having worked on algorithms for exactly that purpose, I can tell you that there is no single weight for an entire sub that will produce the optimum result. Combining different SNR sources requires a weight per SNR, and SNR depends on both signal and noise, so measuring noise in one part of the image will not give you a complete picture of SNR across the image, nor of how to combine different subs for the optimum result. Have a look here for an actual comparison of rejecting subs vs stacking them with different weights (but my approach is different from PI's sub selector, so you might get different results if you work with PI): I also have part of a stacking workflow developed that sort of deals with LP gradients. Let me see if I can find that thread as well. Here it is: I have an efficient residual gradient removal tool as well. If you wish, you can post your subs (or put them somewhere online like Google Drive or such) and I can put them through my workflow to see what we can come up with in terms of end result.
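
For anyone wanting to experiment, here is a minimal sketch of per-sub inverse-variance weighting in Python (this is not PI's sub selector nor the workflow mentioned above, just the basic idea): each sub gets a weight of 1/noise^2, so poor subs are down-weighted rather than thrown away. As noted above, a single weight per sub is a simplification, since real SNR varies across the frame.

```python
import numpy as np

def weighted_stack(subs):
    """Average registered subs with per-sub inverse-variance weights.

    subs: list of 2D numpy arrays, already calibrated and aligned.
    Background noise per sub is estimated with a robust sigma (MAD);
    weights of 1/noise^2 favour the cleaner subs instead of
    discarding the poorer ones outright.
    """
    subs = [np.asarray(s, dtype=np.float64) for s in subs]
    # robust noise estimate: 1.4826 * median absolute deviation about the median
    sigmas = [1.4826 * np.median(np.abs(s - np.median(s))) for s in subs]
    weights = np.array([1.0 / (sig ** 2 + 1e-12) for sig in sigmas])
    weights /= weights.sum()
    stack = np.zeros_like(subs[0])
    for w, s in zip(weights, subs):
        stack += w * s
    return stack
```
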
  7. Can you check the following: what is the bit precision of that .png file that you are downloading? Is it 8 bit or 16 bit? I can't tell if the SGL website engine is doing anything with attached images (like optimization for web display). If it is not - the ones attached above are 8 bit, which is a very, very bad thing; you want a 16 bit format for your subs. Also, what sort of gain / offset are you using when capturing? If you have an issue with attaching the .png file so that it stays the same (unaltered by the SGL engine) - take one sub of M31, put it in a zip archive and upload it. Btw, it is better if you capture your subs in FITS format. It is a format meant for that, so it will contain all the capture parameters (provided the capture application writes those along with the image, and it should).
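
A quick way to check this yourself, assuming Python with Pillow and astropy installed (the file names below are placeholders):

```python
from PIL import Image          # pip install Pillow
from astropy.io import fits    # pip install astropy

# Check the bit depth of a PNG sub ("m31_sub.png" is a placeholder name):
img = Image.open("m31_sub.png")
print(img.mode)   # 'L' or 'RGB' -> 8 bit per channel; 'I' or 'I;16' -> 16 bit greyscale

# A FITS sub carries the capture parameters in its header (if the capture app wrote them):
with fits.open("m31_sub.fits") as hdul:
    print(hdul[0].header.get("BITPIX"))   # e.g. 16 for 16-bit integer data
    print(hdul[0].header.get("EXPTIME"))  # exposure time, if recorded
    print(hdul[0].header.get("GAIN"))     # gain, if recorded
```
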
  8. It's not that it is hard to do so that you need to take shortcuts because of it. Why would you do it differently if this is the proper way to do it? It really depends on the characteristics of the sensor. Flats don't correct for dust and vignetting only, they correct imperfections in QE at the pixel level as well. For example, look at this flat (cropped and stretched to show the issue): in the bottom left corner there is a "small" dust doughnut, here cropped to 1/4 of its size (just to explain why there is a ring there, and for size comparison), but the important thing is the checkerboard pattern in the flat. That is pixel-to-pixel variation in QE due to the manufacturing process - maybe the electronics between pixels or the shape of the micro lenses - it does not matter. What matters is that there is a bit of QE difference at the pixel scale. When you downsample such a flat, unless you are very careful in the way you do it, you will introduce correlation between pixel values and you will no longer have a true representation of per-pixel QE. The difference between pixels is something like 30 ADU per 1600 ADU, so ~1.9% - not much, but I would rather avoid the additional noise that would come from messing up per-pixel QE with downsampling if I can - and in fact I can, just by following the above rule (which again is not really any harder than downsampling flats). A minimal sketch of that order of operations follows this post.
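
To make the order of operations concrete, here is a minimal sketch (the function and variable names are mine, not from any particular package, and it assumes simple average binning): the flat divides out the per-pixel QE pattern while the light and the flat still share the native pixel grid, and any binning happens only afterwards.

```python
import numpy as np

def calibrate_then_bin(light, dark, flat, flat_dark, bin_factor=2):
    """Calibrate at native resolution, then software-bin the result.

    All inputs are 2D numpy arrays at the sensor's native resolution.
    The per-pixel QE pattern in the flat only divides out correctly while
    the flat and the light share the same pixel grid, so any
    downsampling/binning happens after calibration, not before.
    """
    light = light.astype(np.float64)
    flat_corr = flat.astype(np.float64) - flat_dark
    flat_corr /= np.mean(flat_corr)              # normalise flat to ~1.0
    calibrated = (light - dark) / flat_corr      # per-pixel correction at native scale

    # software bin: simple average of bin_factor x bin_factor blocks
    h, w = calibrated.shape
    h, w = h - h % bin_factor, w - w % bin_factor
    binned = calibrated[:h, :w].reshape(
        h // bin_factor, bin_factor, w // bin_factor, bin_factor
    ).mean(axis=(1, 3))
    return binned
```
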
  9. Could you by any chance upload a FITS straight from the camera of that M31? That looks rather "ok" if it is unstretched. A quick stretch of the 8bit data from the image you attached looks quite ok: could you give more info on the kit used to image this? What is the focal length of your scope? Was this the ASI1600MM but without cooling (I mean, is the camera cooled but with cooling turned off, or is it a model without cooling)? If so, what settings did you use (gain, offset)? Just for comparison - here is a 60s sub I've taken with an 80mm scope and ASI1600 cooled to -20C, also unstretched: not much to be seen either. Even when stretched it is not much better, and certainly not as good as your 5 minute sub even at 8bit: I'm just wondering how much of what you see in the image is due to the way the image is stretched, and whether you have no issue at all with scope and camera and just need to bring the images to equal footing to compare them.
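
For a like-for-like comparison, something as simple as the following percentile + asinh screen stretch (a generic sketch, not any particular application's auto-stretch) applied to both subs is usually enough to judge them on equal footing:

```python
import numpy as np

def quick_stretch(sub, black_pct=5.0, white_pct=99.8):
    """Very simple screen stretch so two subs can be compared on equal footing.

    Clips at low/high percentiles, rescales to 0..1 and applies an asinh curve.
    Only for display - never for the data you actually stack.
    """
    sub = sub.astype(np.float64)
    lo, hi = np.percentile(sub, [black_pct, white_pct])
    scaled = np.clip((sub - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    return np.arcsinh(10.0 * scaled) / np.arcsinh(10.0)
```
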
  10. I think that you should first remove the background gradient before trying to compose the image.
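
One simple way to do that, if you want to try it yourself, is to fit and subtract a plane from the background. The sketch below is a generic first-pass approach, not the residual gradient removal tool I mentioned in another post:

```python
import numpy as np

def remove_linear_gradient(img):
    """Fit a plane a*x + b*y + c to the background and subtract it.

    A crude first pass at gradient removal: it assumes the light pollution
    gradient is roughly linear across the frame and that most pixels are
    background (bright objects are excluded from the fit by clipping).
    """
    img = img.astype(np.float64)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # use only pixels below a robust threshold so stars/nebula don't skew the fit
    med = np.median(img)
    mad = 1.4826 * np.median(np.abs(img - med))
    mask = img < med + 2.0 * mad
    A = np.column_stack([xx[mask], yy[mask], np.ones(mask.sum())])
    coeffs, *_ = np.linalg.lstsq(A, img[mask], rcond=None)
    plane = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]
    return img - plane + med   # keep the original background level
```
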
  11. Any sort of mix of only two sources will be very bi-colored. With different mixes of colors from the two sources (like percentages per channel) you are just changing the hues of the two colors that compose the image. You need some fancy way of fiddling with your data in order to produce a tri-color image from only two sources. One way to do it would be to assign the percentage per channel based on intensity, so that not all pixels contribute in equal measure - for example, if pixels are over some threshold value, switch the channel you are assigning values to, or similar. That would in effect mean, for example, that strong Ha signal is mapped to yellow (has both red and green contribution), while weak Ha signal is mapped to red (no green contribution), and OIII signal is mapped to blue. That would create a "tri-color" image from bi-channel data (a minimal sketch of this mapping follows this post).
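
Here is a minimal sketch of that kind of intensity-dependent mapping (the threshold value and the smooth ramp are arbitrary choices for illustration):

```python
import numpy as np

def ha_oiii_to_rgb(ha, oiii, threshold=0.5):
    """Map normalised Ha/OIII (0..1) to RGB with an intensity-dependent green term.

    Weak Ha stays red only; Ha above `threshold` starts to feed green as well,
    pushing the strongest Ha towards yellow, while OIII goes to blue.
    A smooth ramp is used instead of a hard switch to avoid posterisation.
    """
    ha = np.clip(ha, 0.0, 1.0)
    oiii = np.clip(oiii, 0.0, 1.0)
    # fraction of Ha fed into green: 0 below threshold, ramping up to 1 at Ha = 1
    green_mix = np.clip((ha - threshold) / (1.0 - threshold), 0.0, 1.0)
    r = ha
    g = green_mix * ha
    b = oiii
    return np.dstack([r, g, b])
```
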
  12. That adds another layer of complexity. You now need to worry about neighbors mistaking you for a burglar (with those balaclavas on your head and dark outfit ).
  13. If you were in the open (as one would be for an observing session) - you really need not worry about any effects of the odd spliff being lit up nearby. There is simply no chance it will have any sort of effect on you - either on your health or your state of mind. The concentration of "active matter" reaching you is basically 0 (the fact that you can smell it tells you something about how sensitive our nose is - it needs a very low concentration of the molecules that we identify as smell, and the active molecules (THC / CBD) are not the ones you smell). Unless you are bothered by the smell of it, or have any other reason to retire from observing.
  14. Hm, the point of HOO is in its title, isn't it? Ha - red 100%, OIII - green 100%, OIII - blue 100%
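
In code form that is just (a trivial sketch, assuming both channels are already normalised to 0..1):

```python
import numpy as np

def hoo(ha, oiii):
    """Plain HOO palette: R = Ha, G = OIII, B = OIII."""
    return np.dstack([np.clip(ha, 0, 1), np.clip(oiii, 0, 1), np.clip(oiii, 0, 1)])
```
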
  15. I don't think 20 minutes is enough for dew to start forming. It does depend on how your scope was stored, but if it was not already close to ambient temperature (practically being outside, or at least in a wooden shed / garage that provides little to no warmth compared to the outside temperature), it will take at least that long to bring the scope close to ambient, and I'm sure it will not have had time to cool below ambient (which is what's needed for dew to really start building up). I would say that ice on the sensor is ruled out by this:
  16. That is something you should always do, regardless of any binning applied (before or after) - use the exact same settings for calibration files.
  17. It would be very helpful if you could describe in which way the image was poor compared to the previous attempt. Maybe even post the images for comparison? All things being equal, I would say that the 300s image can be poorer because of cumulative guide error / seeing, so it can be blurrier than the 15s one. Although you say that your guiding was good (any chance of a guide graph screenshot, or at least RMS figures? Maybe the guide log from your session? What do you guide with?), it could be the case that 300s shows the true picture of what your guiding is like, while in 15s you don't really see the effects of guiding/seeing - too short an exposure. If the image quality suffered in the signal department - not enough detail visible in the 300s exposure vs the 15s exposure - it can be due to a number of reasons: level of stretch (how do you examine your images, is there an auto stretch applied?) - in which case the 300s sub is in fact better but you are not seeing that on your screen - or it can in fact be due to dew, which will very much kill any signal down to the lowest possible level (only very bright stars) and hence SNR. You can check whether dew is the case by looking at the subs across your session - there should be evidence of things getting worse (unless the scope managed to dew up before you even started taking subs). A small sketch of that check follows this post.
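
A crude way to run that check, assuming the subs are saved as FITS and you have Python with astropy available (the folder path is a placeholder):

```python
import glob
import numpy as np
from astropy.io import fits   # pip install astropy

# Per-sub statistics across a session ("session/*.fits" is a placeholder path).
# Dew tends to show up as a steadily shrinking bright-pixel fraction (stars and
# nebulosity getting swallowed) while the background median stays similar or rises.
for path in sorted(glob.glob("session/*.fits")):
    data = fits.getdata(path).astype(np.float64)
    background = np.median(data)
    bright_fraction = np.mean(data > background + 5 * data.std())
    print(f"{path}: background={background:.0f} ADU, bright pixels={bright_fraction:.4%}")
```
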
  18. On one hand, I think it is down to mental attitude. For some strange reason, I've never been afraid to walk alone at night anywhere in my city. A lot of people that I know have some degree of unease if not fear of going alone at night. It is not a case of "it can't happen to me", nor do I live in a peaceful city, far from it, bad things do happen at night - it is more that I don't ever think about it, it simply never occurred to me that something might happen. Even when I consciously think about it like now - it simply won't generate that sort of worry or fear next time I need to go somewhere at night alone. That of course is a subjective thing, and far away from objective dangers (but it does help to be relaxed if you decide to go out observing). I was going to propose carrying a small radio / USB player and listening to some music or radio shows quietly so you don't disturb other people. Some threats out there are best avoided if they can spot you first, much like in the wild. People trying to steal something will want to stay concealed and they will avoid you if they are aware that you are there. The last thing you want to do is to surprise them, and it is easy to do so as you are sitting there in absolute dark without moving. That is when they can act in a violent manner (more because they are surprised than because you pose any real threat to them). The problem is that the above behavior draws in another kind of danger - people with an aggressive demeanor, having had too much to drink and generally looking for trouble or someone to bully and act out their own frustrations on. These sorts of characters will be drawn by sound. Even worse, people looking to rob someone will do the same if you look vulnerable enough in their eyes, and again it does not help that you are sitting there in absolute dark, not moving and minding your own business.
  19. Huh, that one is tricky. Not sure if you will reach any sort of consensus on that question. Some people prefer a newtonian (dob mounted), and others an SCT on either an alt-az mount or an EQ mount. Each one has pros and cons, and only if you examine those pros and cons will you be able to decide which one is better suited for you (or your friend, whoever is choosing). In optical terms, well, they are of the same aperture, and in principle the newtonian will have a very slight edge if both are made to the same standard of quality (smaller central obstruction, one less optical surface that can introduce aberrations / light scatter, and SCTs have some spherical aberration when you focus away from the perfect focus position - which is possible with the focusing mechanism as it moves the primary mirror instead of the eyepiece). However, most sample-to-sample variations in both scopes make a larger optical quality difference than those listed above. I can start a list of pros and cons, and others will probably add to each from their own experience.
     Newt (dob mounted) pros:
     - price
     - wider field possible
     - faster cool down time, less issues with dew (no front corrector plate)
     - the above mentioned slight optical edge (which might not be there in actual samples, but I list it anyway because I'm in the newtonian camp)
     Newt cons (again dob):
     - harder on eyepieces (but really not that much) as it is F/6 vs F/10 of the SCT
     - harder to reach planetary magnifications (often a barlow is used)
     - bulkier / heavier
     - constant nudging may bother some people (although you can get either a goto dob, or an EQ platform for it, or alternatively mount the dob on an EQ mount or a motorized Alt-Az mount; if you put it on an EQ mount the eyepiece will end up in awkward positions most of the time, so you will have to rotate the tube)
     SCT pros (I'll try not to repeat the above ones):
     - not sure what to put here that has not been mentioned, but probably weight / compactness / portability of the OTA (although I did mention as a newt con that it is bulkier/heavier than the SCT)
     - comfortable eyepiece position (but again the same in a dob mounted newtonian, and looking near zenith is easier with a dob mounted newt)
     SCT cons:
     - might suffer from focus shift (since it focuses by moving the mirror, the mirror does not stay perfectly parallel the whole time, so the image can sometimes shift as you focus, particularly when you change focus direction, due to slight backlash in the focusing mechanism)
     Ok, I'm having trouble describing the SCT (I'm really not a fan, I've never used one, and I would personally choose a dob instead), so someone who has one and likes SCTs should step in to complete the lists.
  20. Yes indeed - it did occur to me that the dark filaments are neither associated with Sh2-249 nor Sh2-248, but standing in the foreground. I've read somewhere that such dark nebulae are usually part of a molecular cloud complex, so I can't tell for certain whether they can exist on their own (which would be the case here). On the naming - it is possible that objects only get a catalog number if they are studied by the people building the catalog, and small features don't make the list for practical reasons - too much time would be spent on them instead of on other, more interesting objects of that class.
  21. Yes, very interesting idea - close to what I thought about examining both the visible and NIR spectrum - one can easily tell if a star is in front or behind. We can use the IR part of the spectrum to determine distances of those behind, and in the process also estimate how much of the light is attenuated - that will tell us a bit about the density / depth of the dark nebula.
  22. In a recent topic in imaging, we had a brief discussion about dark nebulae, or rather the naming of these objects, and in fact ways to distinguish them from regular "empty" space (in the context of imaging, nothing fancy). That got me thinking - what would be ways to determine the distance to an object that does not emit light, but rather blocks it? So far, I've come up with some fairly basic ideas:
     1. Star count. If we assume that starlight is blocked by such objects in the visible region of the spectrum, one can take an image of such an object, calculate its angular surface and count visible stars in front of it (if there are any). Then, from available data on the average density of stars in our surroundings, we can calculate the expected number of stars for a given volume of space (angular size of the object + distance to it). By comparing the expected number of stars visible (in front of the nebula) to what we actually see, we can figure out the likely distance (see the sketch after this post). This of course depends on the transparency of such an object - some of the bright stars behind it could shine through with a certain level of attenuation. A more complex model involving expected stellar magnitude could be devised to account for transparency (based on density / size relations).
     2. Spectral analysis. I'm not sure about this one, but we could maybe determine the thickness of such an object by examining extinction in the visible spectrum vs that in the infrared (or rather the magnitudes). From the IR part of the spectrum we can deduce the stellar class and associated absolute magnitude, and then, based on the distance derived in IR, figure out the attenuation. Applying some sort of density formula we can approximate the thickness of the object. The rest is up to the morphology of such a cloud and gravity simulations to see the likely shape. If we can find the shape / dimensions - we can find the distance.
     Does anyone have any insight into this topic? Or maybe some other ideas worth discussing?
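
To make idea 1 concrete, here is a back-of-the-envelope sketch (the stellar density value and the example numbers are rough assumptions, and it ignores the survey's limiting magnitude):

```python
import math

def foreground_distance_pc(stars_seen, area_sq_deg, star_density_per_pc3=0.1):
    """Crude distance estimate from a foreground star count.

    Assumptions (all rough): the cloud is fully opaque, every foreground star
    is counted, and the local stellar number density (~0.1 stars per cubic
    parsec) holds along the whole line of sight. A sky patch of area_sq_deg
    subtends a solid angle Omega, so the cone out to distance d contains
    N = n * Omega * d^3 / 3 stars; invert that for d.
    """
    omega = area_sq_deg * (math.pi / 180.0) ** 2      # square degrees -> steradians
    return (3.0 * stars_seen / (star_density_per_pc3 * omega)) ** (1.0 / 3.0)

# e.g. 25 stars counted in front of a 0.5 square-degree dark patch (made-up numbers):
print(f"~{foreground_distance_pc(25, 0.5):.0f} pc")
```
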
  23. You are right, it is indeed a dust region. Not sure if it can be classified as a single dark/absorption nebula. I was going to give you an incentive to pursue this topic further by examining one of two possibilities: 1. either it is a "cooled" part of the supernova remnant - which in my view is highly unlikely (why would only a few parts of it be cooled enough?); 2. it is part of some nearby molecular gas cloud/complex that is closer to us. It turns out that option two is in fact true. Sh2-249 lies very close to IC443 (also known as Sh2-248). In Stellarium it is wrongly marked as a reflection nebula and shares the IC catalog number 444, which is not true, as it is an Ha region; IC 444 is indeed a reflection nebula a bit further away and not the same thing as Sh2-249. The central part of that region is filled with dark filaments. This APOD image shows the complex and neighboring IC443. If we find any data on distances to both objects, and Sh2-249 turns out to be closer to us, or they are roughly at the same distance (meaning the Jellyfish is the remnant of a star that was embedded in the molecular cloud), then it is very likely that the obscuring dark absorption part of Sh2-249 is in fact what you see in the image. The Jellyfish is roughly 5000ly away. Let's see if we can find info on Sh2-249. According to Simbad, the distance to Sh2-249 is about 6556ly +/- 489ly. Ok, not sure what to make of this.
  24. Because glowing gas is not a uniform ball. It has internal structure, and as such it has filaments that are stretched out and areas of very low density - where we don't capture enough signal, it just appears dark. There is probably faintly glowing gas that is transparent there as well, but it is (probably) not the type of gas/dust that is blocking the light from behind. Let me find you another example of a supernova remnant; that way you'll see what I mean. I could be wrong - this is not a firm statement on my part, just trying to reason about what it could actually be. This is also a SNR - Abell 85 - and we could similarly argue that the dark regions in it are in fact dust clouds in front of it. Here is a better example, look at this image (Simeis 147): I've marked something that could well be a dust cloud sitting in front of the nebula - it looks about right. The same feature in this other image does not look like a cloud in front of the nebula anymore: nor in this one: