Posts posted by vlaiv

  1. Actually, I think that the SA can be used to do that level of spectroscopy, although the setup needs to be "modified" quite a bit. The grating itself is not modified, but resolution can be improved by the following:

    1. use of slit

    2. putting grating in collimated beam.

    The SA200 for example, with 200 l/mm and a clear aperture of more than 20mm, can in theory produce a resolution of R4000 (about 1.5A in the visible). The problem, of course, is that we tend to use it in a converging beam, and the star FWHM is a limiting factor as well. Adding collimation optics (an eyepiece and lens, for example, as in the topic I linked in the above post) provides a collimated beam. The size (or rather diameter) of the beam will depend on the speed of the optics and the focal length of the eyepiece, and is the equivalent of the exit pupil in observational use - which means we can easily have something like ~6mm (maybe a bit less with slow scopes). 200 l/mm will then give us an R1200 theoretical maximum from the grating, which is enough for classification at about 5A (if I'm doing my math correctly).

    A 20um slit placed at the field stop of an eyepiece with a reduction factor of about 1/3 - 1/4 will be "pixel wide" (with a perfect lens whose focal length is about 1/3 to 1/4 that of the eyepiece used), so we end up with sampling as the limiting factor. A 20um slit is both hard to make (as far as I'm aware there is no DIY option, so it needs to be purchased) and requires a certain focal length to make the star large enough that it is about one FWHM wide (you want most of the light from the star to go through the slit, but you also want to keep the slit narrow).
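The resolution figures above can be sketched with a short calculation. This is just an illustrative sketch using the standard theoretical limit for a grating, R = lines/mm x illuminated beam diameter in mm, with the numbers from this post:

```python
# Theoretical grating resolving power: R = number of illuminated lines,
# and the resolution element is delta_lambda = lambda / R.
# Numbers below are the ones discussed in the post (SA200, visible light).

def grating_resolution(lines_per_mm, beam_diameter_mm, wavelength_angstrom=5500):
    n_lines = lines_per_mm * beam_diameter_mm   # lines the beam intersects
    r = n_lines                                 # theoretical resolving power
    delta_lambda = wavelength_angstrom / r      # resolution element in Angstroms
    return r, delta_lambda

# SA200 with the full 20 mm clear aperture illuminated:
print(grating_resolution(200, 20))   # R = 4000, ~1.4 A at 5500 A
# SA200 in a ~6 mm collimated beam (exit-pupil sized):
print(grating_resolution(200, 6))    # R = 1200, ~4.6 A at 5500 A
```

This is the best case from the grating alone; seeing, star FWHM and sampling will lower it in practice.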

    @robin_astro - does my rambling above make any sense?

     

  2. 4 hours ago, George Gearless said:

    @vlaiv, @wimvb

    All my equipment is very much stock. Expecting anything better than a 'stock result' is probably being hopeful.

    I did enter the guidescope and camera info into Stellarmate. But considering I am using an Orion miniscope (9x50) as a guide, and a ZWO 120 Mono camera, I think it is more likely that my system is unable to properly determine the error.

    I'm a complete newbie at using a guidescope. And using Stellarmate overall, for that matter. So I was just trying to figure out what kind of error margin I should expect. I simply had no frame of reference.

    While there is still a long way to go, your replies lead me to believe that I am at least on the right track.

    Thanks both of you.

     

    Stock mounts vary quite a bit in guide performance - some are smoother than others - but on average I would expect an EQ35-class mount, which is probably closer to EQ5 than EQ3 in guide performance, to have something like 1-1.5" RMS guide error.

    One could do sub-1" RMS on such a mount if the mount is smooth and conditions are particularly favorable on a given night - meaning no wind and very good seeing.

    The Orion miniscope has a 162mm focal length (if I'm not mistaken) and the ZWO 120 mono has 3.75um pixels. Together that gives 4.77"/px of guide resolution. Centroid calculations can determine star position to about 1/16 to 1/20 of a single pixel (depending on SNR) - which means ~0.25", and the RMS calculation should be precise if it is larger than about 3x that. So if the RMS figure is larger than 0.75" it can be "trusted" (it is about right); smaller than that, I simply think you won't be able to get an accurate reading.

    Mind you, that is enough precision to guide such a mount at stock values (meaning about 1"-1.5" RMS), but it does limit your imaging resolution to about 2"/px.
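The arithmetic above can be laid out explicitly. A sketch with the post's numbers (162 mm guide scope, 3.75 um pixels), using the 1/20 px end of the centroid-precision range:

```python
# Guide-system image scale and the smallest RMS figure it can reliably report.
# arcsec/px = 206.265 * pixel_size_um / focal_length_mm (small-angle approximation)

def guide_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

scale = guide_scale(3.75, 162)          # ~4.77 "/px of guide resolution
centroid_precision = scale / 20         # centroid good to ~1/20 px -> ~0.24"
trust_floor = 3 * centroid_precision    # RMS below ~0.72" can't really be trusted
print(round(scale, 2), round(centroid_precision, 2), round(trust_floor, 2))
```

With a shorter guide scope the scale (and hence the trust floor) grows, which is why very low reported RMS values on small guide scopes deserve some skepticism.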

    Another way to tell if your guiding was really that good is to examine the subs you've taken while guiding. What sort of star shapes did you get? Are the stars distorted in any way - for example not round, maybe elongated in one direction or egg shaped? If they are round - meaning the guide error was uniform in every direction (a good thing) - the next thing to look at is the FWHM of the stars in arcseconds. This value is a true measure of the resolution achieved in the image, and if that number is comparatively small (it depends on guiding, scope size and seeing that night) - then yes, you were indeed guiding really well.

    4 hours ago, spillage said:

    I always thought that your guiding should be half of your imaging resolution. Be nice if I am wrong as I always thought my quattro 8 and 1600 are pushing the capabilities of the neq6 a bit too much.

    Yes, that is quite a good general rule of thumb - keep guide RMS at half of your imaging resolution or smaller (smaller is of course better; larger is worse, but it will not "ruin" the image - it will just look a bit blurrier).

  3. 8 hours ago, George Gearless said:

    As far as I could tell, it never exceeded 0.5 of an arcsecond. Is that a reasonable level of accuracy for AP?

    That is a superb result if true. I'm a bit doubtful that it is.

    There can be a number of reasons why this figure is not correct.

    - First, check whether you entered the focal length of the guide scope and the pixel size of the guide camera correctly into the guide program. These are needed to properly calculate arcseconds per pixel.

    - It can be a case of the guide system being unable to properly measure the guide error. What guide camera and scope are you using? You need enough resolution in the guide system to properly measure the guide error. If the system lacks resolution it will report much smoother guide errors - a bit like estimating people's heights with a 1-meter stick (without cm subdivisions): everyone will be either 1m or 2m and the results will look "smooth".

    In any case, as wimvb mentioned, imaging resolution plays a part in it as well, but if you really are guiding at 0.5" RMS (or thereabouts) - that is an excellent result for an EQ35 mount (it is an excellent result even for heavier, more expensive and more precise mounts).

  4. At the time I was purchasing I knew almost nothing about any of this, and since that option was in stock, and it seemed better because of its higher dispersion - I purchased that one :D

    As for suitability - there is a nice spreadsheet floating around (you can find it in some threads here on SGL as well) that does a decent job of calculating expected resolution based on the focal length of the scope, field curvature, angle of the beam (unfortunately, using the grating in a converging beam produces aberrations), and expected seeing. Maybe I have it downloaded somewhere and can attach it for you; let me see.

    Yes, of course, I have it in my "docs" section, so here it is:

    TransSpecV3.1.xls

    You can play around with it and see what setup would give you the best results. Just note that you will be using a color sensor, so the sampling rate will be affected; also, you can stop down your scopes to make a slower beam - that can help.

    The SA200 might be better for slower scopes, as resolution depends on how many lines the beam intersects (the area of the beam at the grating). A larger sensor with large pixels helps with the SA200, but you can always bin your data. SNR is not something to be overly concerned about if you are not time limited (as for transient events or real-time display of the spectrum); guiding will help, of course.

    I think the SA200 is also better in "advanced" configurations - like making a collimated beam with an eyepiece - grating - lens - camera arrangement. There was a thread recently showing this configuration. I plan to make something similar one day, with the addition of a "half slit" sort of thing - half of the FOV clear and half blocked by a slit. The upper part will be used for plate solving / centering, and the part with the slit will provide added resolution and proper background subtraction.

    Btw, here is that thread:

    For a first go, with just the grating and nothing fancy, I plan to use it on the RC8" with the ASI1600. I did calculations with the above spreadsheet and I should be able to get something like R300 in favorable seeing conditions. Of course, I'll be happy with any sort of decent spectrum, but going a bit higher resolution with the SA is one of my goals.

  5. It should be a fairly easy and even automatable process ...

    Once you extract the spectrum of a star, you can compare it to a reference spectrum for each class - the one that gives the best match is your class.

    I think a lot of fun things can be done with just stellar class identification. One of my goals, once I embark on this journey as well (the SA200 is ready and waiting for suitable time / weather), is determining the distance to a star.

    Maybe even a double star or small cluster. With more samples you get better "precision", but the process is fairly easy - get the spectral class, get the expected absolute magnitude from that, and do a bit of photometry to get the apparent magnitude; from those two - distance.
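The steps above are the classic spectroscopic-parallax recipe, which follows from the distance modulus m - M = 5 log10(d) - 5 (d in parsecs). A minimal sketch, with a small illustrative absolute-magnitude lookup table (the class names and values here are approximate examples, not a calibrated reference):

```python
# Spectroscopic parallax: spectral class -> absolute magnitude M,
# photometry -> apparent magnitude m, then d_pc = 10 ** ((m - M + 5) / 5).

ABS_MAG = {"A0V": 0.6, "G2V": 4.8, "M0V": 8.9}   # rough illustrative values

def distance_pc(apparent_mag, spectral_class):
    M = ABS_MAG[spectral_class]
    return 10 ** ((apparent_mag - M + 5) / 5)

# A Sun-like (G2V) star measured at m = 9.8 comes out at ~100 pc:
print(round(distance_pc(9.8, "G2V")))   # -> 100
```

More samples (e.g. several members of a cluster) let you average out photometric errors, which is where the improved "precision" comes from.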

  6. Reasons that come to mind:

    1. holding its shape - rigidity (carbon fiber might bend under its own weight, for example).

    2. ability to be polished to the needed degree (ease of figuring).

    3. thermal stability (and thermal expansion - some glass types are better at this than others, with an expansion factor close to zero).

     

  7. That is kind of interesting, since your setup should not be that heavy. I've found that I need a third counterweight with a 12kg+ setup - which I would not recommend on a Heq5 anyway. I did use it like that, but it was at its limits (8" F/6 tube from a dob + rings + camera + 60mm guide scope and guide camera).

    The Esprit 100 is listed as weighing 6.3kg. The ASI1600 is a fairly lightweight camera. Not sure how heavy your filter wheel is. The guide scope should also be light.

    I balance a ~9kg scope (RC 8" + 50mm M90 extension and upgraded focuser), ASI1600, filter drawer and OAG, plus an additional 1kg of weight in DEC, with two 5kg counterweights without a problem. The counterweights don't even go down to the end of the shaft.

    Maybe you can optimize your setup a bit? In the same way that you can create imbalance by moving CWs up and down the bar, you can do the same on the scope side. If your guide scope is mounted on top of your imaging scope, can you change the distance between the two - can you get the guide scope closer to the imaging scope? Where is your filter wheel "pointing"? If the body of the filter wheel is rotated so that it is "up" or away from the mount, bring it down so that it "hangs" rather than sticking "up" (silly usage of directions, I know, but hopefully you get what I mean).

    Btw, as far as guiding is concerned, it is better to add more weight closer to the mount than to use an extension bar. The setup will be heavier and therefore more solid, and it will indeed produce more friction on the bearings, but the moment arm will be smaller and the mount more responsive to guide commands.

     

  8. No idea how important 3-star vs 2-star alignment is for accuracy, nor whether there is any difference between Alt-Az and EQ in the number of stars needed, but maybe the intended usage of the scope could provide some clues?

    Alt-Az is not meant for imaging, hence there is no great need for alignment precision, as one won't be working with a small FOV and accurate finding / tracking of objects. Maybe the rationale is that for visual use, people will use different eyepieces - wide field ones while the scope moves close to the target, then adjust position with the hand set for higher magnification views? So it could be that 3-star alignment is offered on EQ mounts for moments when you need greater precision - like when working with a relatively small FOV (largish focal length and smaller sensor) - so that you are still on target after a slew?

  9. I'm in favor of using all the data, but for that to happen, you need some clever algorithms.

    I know that PI has sub selector and that it can assign weight based on estimated noise. So maybe you should try that.

    On the other hand, having worked on algorithms for exactly that purpose, I can tell you that there is no single weight for an entire sub that will produce an optimum result. Combining different SNR sources requires a weight per SNR level, and SNR depends on both signal and noise, so measuring noise in one part of the image will not give you a complete picture of SNR across the image, nor of how to combine different subs for an optimum result.

    Have a look here for an actual comparison of rejecting subs vs stacking them with different weights (my approach is different from the PI sub selector, so you might get different results if you work with PI):

    I also have part of stacking workflow developed that sort of deals with LP gradients. Let me see if I can find that thread as well. Here it is:

    I have efficient residual gradient removal tool as well :D

    If you wish, you can post your subs (or put them somewhere online, like Google Drive or such) and I can put them through my workflow to see what we can come up with in terms of the end result.

  10. Can you check the following: what is the bit depth of that .png file you are downloading? Is it 8-bit or 16-bit?

    I can't tell if the SGL website engine is doing anything to attached images (like optimization for web display). If it is not, the ones attached above are 8-bit, which is a very, very bad thing. You want a 16-bit format for your subs.

    Also, what sort of gain / offset are you using when capturing?

    If you have an issue with attaching the .png file so that it stays the same (unaltered by the SGL engine) - take one sub of M31, put it in a zip archive and upload that.

    Btw, it is better to capture your subs in FITS format. It is the format meant for this, so it will contain all the capture parameters (provided the capture application writes those along with the image, and it should).

  11. 1 hour ago, Datalord said:

    yes, in an ideal world. My own experience tells me the downsampled flats work great.

    It's not something that is hard to do, so there is no need to take shortcuts. Why would you do it differently when the proper way is just as easy?

    It really depends on the characteristics of the sensor. Flats don't correct only for dust and vignetting; they correct imperfections in QE at the pixel level as well. For example, look at this flat (cropped and stretched to show the issue):

    image.png.62b04de63fa4f8c334d7f5f91b1070ba.png

    In the bottom left corner there is a "small" dust doughnut, here cropped to 1/4 of its size (just to explain why there is a ring there, and for size comparison), but the important thing is the checkerboard pattern in the flat. That is pixel-to-pixel variation in QE due to the manufacturing process - maybe electronics between pixels or the shape of the micro lenses - it does not matter. What matters is that there is a bit of QE difference at the pixel scale.

    When you downsample such a flat, unless you are very careful about the way you do it, you will introduce correlation between pixel values and you will no longer have a true representation of per-pixel QE levels.

    The difference between pixels is something like 30 ADU per 1600 ADU, so ~1.9% - not much, but I would rather avoid the additional noise that would come from messing up per-pixel QE with downsampling if I can - and in fact I can, just by following the above rule (which, again, is not really any harder than downsampling flats).
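The effect of downsampling on per-pixel QE can be seen in a toy example: a checkerboard QE pattern like the one in the flat above disappears completely under naive 2x2 averaging (values here are illustrative, matching the ~30 ADU per 1600 ADU figure):

```python
# A 4x4 checkerboard "flat" with alternating high/low QE pixels,
# then naive 2x2 binning: each block averages two high + two low pixels,
# so the per-pixel QE variation is erased from the result.

high, low = 1630, 1600   # ~30 ADU difference per ~1600 ADU
flat = [[high if (x + y) % 2 == 0 else low for x in range(4)] for y in range(4)]

binned = [[(flat[2*y][2*x] + flat[2*y][2*x+1] +
            flat[2*y+1][2*x] + flat[2*y+1][2*x+1]) / 4
           for x in range(2)] for y in range(2)]

print(binned)   # every value is 1615.0 - the checkerboard is gone
```

A flat binned this way can no longer correct the checkerboard in the lights, which is exactly the information the full-resolution flat carries.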

     

  12. Could you by any chance upload a fits straight from the camera of that M31?

    That looks rather "ok" if it is unstretched.

    Quick stretch of 8bit data from the image you attached looks quite ok:

    image.png.59c71c88e52bf6be6723192918439a9d.png

    Could you give more info on the kit used to image this?

    What is the focal length of your scope? This was an ASI1600MM but without cooling - I mean, is the camera cooled but cooling turned off, or is it a model without cooling? And what settings did you use (gain, offset)?

    Just for comparison - here is 60s sub I've taken with 80mm scope and ASI1600 cooled at -20C also unstretched:

    image.png.5f749b3c152c8c0ce010a73df9e0e0ae.png

    Not much to be seen either. Even when stretched it is not much better, and certainly not as good as your 5 minute sub even at 8bit:

    image.png.aca18ba27525e0028352131ad42079ee.png

    I'm just wondering how much of what you see in the image is due to the way the image is stretched, and whether you have no issue at all with the scope and camera and just need to bring the images to an equal footing to compare them.

  13. 1 minute ago, knobby said:

    Absolutely agree but dabbled as I thought that using the same data for G and B would be very Bi coloured so tried mixing.

    Any sort of mix of only two sources will be very bi-colored :D

    With different mixes of the two source colors (like percentages per channel), you are just changing the hues of the two colors that compose the image. You need some fancier way of fiddling with your data in order to produce a tri-color image from only two sources.

    One way to do it would be to assign the percentage per channel based on intensity, so that not all pixels contribute in equal measure - for example, if a pixel is over some threshold value, switch the channel you are assigning values to, or similar.

    That would in effect mean, for example, that strong Ha signal is mapped to yellow (both red and green contributions), while weak Ha signal is mapped to red (no green contribution), and OIII signal is mapped to blue. That would create a "tri-color" image from bi-channel data.
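The intensity-threshold mapping described above can be sketched per pixel. This is only an illustrative take on the idea (the threshold value and scaling are my own choices, not from any established palette):

```python
# Map two narrowband channels (Ha, OIII) to RGB per pixel, values in 0..1:
# weak Ha -> red only; strong Ha also feeds green, shifting it toward yellow;
# OIII always maps to blue.

def map_pixel(ha, oiii, threshold=0.5):
    red = ha
    green = max(ha - threshold, 0.0) / (1.0 - threshold)  # only strong Ha contributes
    blue = oiii
    return (red, green, blue)

print(map_pixel(0.2, 0.5))   # weak Ha: (0.2, 0.0, 0.5) - red + blue only
print(map_pixel(0.9, 0.1))   # strong Ha: (0.9, 0.8, 0.1) - yellowish
```

Applied over a whole image, this breaks the strict two-hue look because the hue now varies with Ha intensity rather than staying on a single red-blue axis.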

  14. 3 minutes ago, Paz said:

    The idea of making sure you don't surprise anyone  is something  I hadn't thought of.

    In winter I'm trussed up in black from head to foot - warm army boots on my feet lots of layers that make me look 3 stone bigger than I am and sometimes 2 balaclavas on top. I wonder who would scream the most in a surprise encounter, me or a thief!

    That adds another layer of complexity. You now need to worry about the neighbors mistaking you for a burglar (with those balaclavas on your head and the dark outfit :D ).

  15. 14 hours ago, Greymouser said:

    I used to have a neighbour a few doors down, who used to like gardening after dark. ( Not Ozzy Osbourne either! ) He was quite an odd fella and used to feel insecure, because of the neighbourhood, so used to keep an ice axe nearby the whole time! A bit extreme I thought, and dangerous, for several reasons, especially as it is not all that bad round here, in respect of crime, assuming you can ignore the drug use! Which is why I cut my observing session short tonight, as my neighbour lit up her nightly spliff of Skunk cannabis, I have no wish to share it with her! She has got kids too.  :rolleyes2:

    The axe toting fella is dead now though, he got cancer which finished him soon after retiring. Turns out it wasn't an axe he needed for protection, but a lifestyle change perhaps. :sad:

    If you were in the open (as one would be for an observing session), you really need not worry about any effects of the odd spliff being lit up nearby.

    There is simply no chance it will have any sort of effect on you - either on your health or your state of mind. The concentration of "active matter" reaching you is basically zero. The fact that you can smell it says something about how sensitive our noses are - it takes only a very low concentration of molecules for us to register a smell, and the active molecules (THC / CBD) are not the ones you smell.

    Unless you are bothered by the smell of it, or have some other reason to retire from observing.

  16. 9 minutes ago, sshenke said:

    just to add, i started imaging within about 20 minutes of getting the scope outdoors, not sure if this is enough time for dew to build up. have had this scope and mount only since this summer, so don't have any previous winter experiences to compare with. 

    Don't think 20 minutes is enough for dew to start forming. It does depend on how your scope was stored: if it was not already close to ambient temperature (practically stored outside, or at least in a wooden shed / garage that provides little to no warmth compared to outside), it will take at least that long to cool to ambient, and I'm sure it will not have had time to cool below ambient (which is needed for dew to really start building up).

     

    9 minutes ago, Adam J said:

    Could be ice on the sensor. Really you need to post some images. 

    I would say that ice on sensor is ruled out by this:

    1 hour ago, sshenke said:

    camera was not cooled on either of the session

     

  17. 8 minutes ago, Ken82 said:

    I’ve now taken my flats at 1x binning the same as my other lights and calibration frames. 

    That is something you should always do, regardless of any binning applied (before or after): use the exact same settings for calibration files as for lights.

  18. It would be very helpful if you could describe in what way the image was poorer compared to the previous attempt. Maybe even post images for comparison?

    If all else is equal, I would say the 300s image can be poorer because of cumulative guide error / seeing, so it can be blurrier than the 15s one. Although you say your guiding was good (any chance of a guide-graph screenshot, or at least RMS figures? Maybe the guide log from your session? What do you guide with?), it could be that 300s shows the true picture of what your guiding is like, while in 15s you don't really see the effects of guiding/seeing - the exposure is too short.

    If image quality suffered in the signal department - not enough detail visible in the 300s exposure vs the 15s exposure - it can be due to a number of reasons. One is the level of stretch (how do you examine your images - is an auto stretch applied?); in this case the 300s is in fact the better sub, but you are not seeing that on your screen. Or it can indeed be due to dew - dew will kill the signal down to the lowest possible level (only very bright stars remain) and hence the SNR.

    You can check whether dew is the cause by looking at subs across your session - there should be evidence of things getting worse (unless the scope managed to dew up before you even started taking subs).

  19. On one hand, I think it is down to mental attitude.

    For some strange reason, I've never been afraid to walk alone at night anywhere in my city. A lot of people that I know have some degree of unease if not fear of going alone at night.

    It is not a case of "it can't happen to me", nor do I live in a peaceful city - far from it, bad things do happen at night. It is more that I don't ever think about it; it simply never occurred to me that something might happen. Even when I consciously think about it, like now, it simply won't generate that sort of worry or fear the next time I need to go somewhere alone at night.

    That of course is a subjective thing, and far away from objective dangers (but it does help to be relaxed if you decide to go out observing).

    I was going to propose carrying a small radio / USB player and listening to some music or radio shows quietly, so you don't disturb other people. Some threats out there are best avoided by letting them spot you first, much like in the wild. People trying to steal something like to stay concealed, and they will avoid you if they are aware that you are there. The last thing you want to do is surprise them, and it is easy to do so as you are sitting there in absolute darkness without moving :D . That is when they can act in a violent manner (more because they are surprised than because you pose any real threat to them).

    The problem is that the above behavior draws in another kind of danger - people with an aggressive demeanor, who have had too much to drink and are generally looking for trouble, or for someone to bully so they can act out their own frustrations. These sorts of characters will be drawn by sound. Even worse, people looking to rob someone will do the same if you look vulnerable enough in their eyes, and again it does not help that you are sitting there in absolute darkness, not moving and minding your own business.

     
