Everything posted by vlaiv

  1. That does not look particularly good - that large excursion in DEC. It looks odd - it could be due to wind (large scope?), a cable tug/snag, or something else. It shows that you either have backlash in DEC or that calibration was not good enough. In fact, cal is yellow in the status bar - meaning it would be best to re-run calibration. The fact that DEC is not responding after numerous guide commands means that it could be backlash, or that backlash is a contributing factor (either the guide pulses are too small, or they are right but are clearing backlash instead of making corrections). What mount are you using? If you want a smoother looking graph - try using longer guide exposures.
  2. Have you tried Astrometry.net? It is also available for local install, and you can even build your own indices for plate solving. You can download catalogs that contain stars down to mag 19 and use them to build indices for plate solving (maybe for annotation as well - not sure about that). Have a look here: https://archive.stsci.edu/prepds/atlas-refcat2/
  3. One of the points of advertising is to create interest where there is none, and for that reason you will sometimes get stuff that you are generally not interested in. Maybe seeing an advert for it will spark an interest in you. A lot goes into the algorithms that select what should be displayed to you, and we can't simply conclude that content is served based on what is "good" for you. At some point the algorithm may even serve you content for which it previously established you have no interest at all - simply to survey how often people change their minds, based on a vast number of parameters.
  4. No. The position of the histogram peak is related to this, but not as simply as 1/3 or some other rule. People are often afraid that they won't make a good image if their histogram is too far left or too far right, but in reality their image will be just fine (with adequate processing, of course). The only difference between a few longer subs and many shorter subs lies in the amount of read noise of the camera - specifically, how the level of read noise relates to the levels of other noise sources. Once the read noise amplitude becomes small in comparison to other noises, the difference becomes too small for all practical purposes. What does that mean? Well, it depends on what camera / filters / sky conditions you have. If you have high light pollution, LP noise will quickly swamp read noise and you can use shorter exposures. Same if you have a non-cooled camera - thermal noise will overtake read noise rather fast. If you have a cooled camera and you do narrowband imaging (which cuts down LP significantly), then it pays to use really long exposures, like 20-30 minutes per sub. A rough comparison is sketched below.
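To put numbers on the "swamping" argument, here is a minimal Python sketch. All figures (read noise, sky rate, total time) are made-up illustrative assumptions, not measurements of any particular camera:

```python
import math

# Illustrative assumptions only - tweak for your own camera / sky:
read_noise = 2.0   # e- RMS per sub
sky_rate = 0.5     # e-/s/pixel of light pollution signal
total_time = 3600  # total integration time in seconds

for sub_len in (30, 120, 600):
    n_subs = total_time / sub_len
    sky_noise = math.sqrt(sky_rate * sub_len)          # shot noise of the sky
    per_sub = math.sqrt(sky_noise**2 + read_noise**2)  # noise in one sub
    stack = per_sub * math.sqrt(n_subs)                # noises add in quadrature
    print(f"{sub_len:4d}s subs: total stack noise ~ {stack:5.1f} e-")
```

Once the sky noise per sub is several times the read noise, making subs even longer barely changes the total - that is the point where longer subs stop paying off.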
  5. Things are not as simple as that. There are a couple of magnitudes reported for DSOs, and in general they don't really tell you much.

First there is total magnitude - the total integrated brightness of the DSO. It is the number marked by the arrow in Stellarium. Then there is the surface brightness number - sometimes expressed in mag/arcmin^2 and sometimes in mag/arcsec^2. Luckily there is an easy conversion between the two - just add 8.89. In the above image the surface brightness is 19.89 in magnitudes per arc second squared, or 11 magnitudes per arc minute squared.

The problem is that these numbers generally don't tell you much. If we take the 7.5 magnitude of this object, it would seem that it is fairly bright - you can see a 7.5 mag star with even a small telescope - but with DSOs you need to account for their size, as that light is spread over some surface. The larger the DSO, the dimmer it is for the same total magnitude. This is why we have surface brightness - it tries to avoid the above problem and gives us an idea of how bright an object is per unit surface. This is a rather good measure for objects that are uniform in their illumination - but sadly there is only one such object (and even it has gradients): the background sky. All other objects show some variation in brightness.

For example, let's take a look at M51. It says mag 21.45 as surface brightness. But is it really? Have a look at this diagram - a rough surface brightness diagram of M51 that I made from one of my data sets (it is a luminance brightness diagram). In it, brightness ranges from mag 17 to mag 26. If you want to capture the tidal tail of the galaxy, you need to achieve good SNR at mag 27. You see now that mag 21.45 is in fact something like the average of the brightness values across the galaxy.

Now the final question - how "deep" can you go? That really only depends on how much time you have. The more time you spend on target, the deeper you will go. In principle you want to get SNR > 5 for the interesting parts of the target to render them nicely. You can calculate the rest - take the magnitude that you are interested in and see how many photons it produces per second and per exposure (~880,000 photons per second per cm^2 at the top of the atmosphere from a mag 0 target), then account for atmospheric extinction, optical losses in the telescope, clear aperture of the telescope, quantum efficiency of the sensor, pixel size, focal length (and hence sampling resolution), level of light pollution, read and dark noise, and sub duration times number of subs - and you will get roughly what sort of SNR you can expect in what amount of time, given your setup and conditions. Most galaxies that we image go down to mag 29-30 in the faintest parts. Not sure about nebulae.

As you see, this question is rather complex, but there is a very simple way to go about it. Interested in imaging a particular target? Look for results that people got with similar setups in similar conditions, and pay attention to total imaging time. If you can replicate it, or spend more time with better gear, it's worth having a go. With time you will get a sense of how bright / faint things are and will be able to decide without looking for a reference. The first steps of such a calculation are sketched below.
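As a rough starting point, here is a short Python sketch. The object size and magnitude are made-up, M51-like example values (not authoritative), the 8.89 conversion and the ~880,000 photons/s/cm^2 mag 0 figure are the ones quoted above:

```python
import math

# Example values only - roughly M51-like, not authoritative:
total_mag = 8.4          # integrated magnitude of the target
area_arcmin2 = 11 * 7    # apparent size ~11' x 7', treated as a rectangle

# Average surface brightness: spread the total light over the area
sb_arcmin2 = total_mag + 2.5 * math.log10(area_arcmin2)  # mag/arcmin^2
sb_arcsec2 = sb_arcmin2 + 8.89                           # conversion from above
print(f"{sb_arcmin2:.2f} mag/arcmin^2 = {sb_arcsec2:.2f} mag/arcsec^2")

# Photon flux at the top of the atmosphere, from ~880,000 ph/s/cm^2 at mag 0
flux = 880_000 * 10 ** (-0.4 * total_mag)  # ph/s/cm^2, whole object
print(f"~{flux:.0f} photons/s/cm^2 before extinction and optical losses")
```

From here you would multiply by clear aperture, QE and losses, divide the light over the pixels that cover the object, and compare the resulting electron rate against the noise terms.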
  6. If you want a good low power / wide field eyepiece, there are a couple of things to keep in mind:

- With low magnification you get a larger exit pupil, and depending on your age, a larger exit pupil might be wasted. Young people have pupils that dilate to 7mm or thereabouts, but with older age that becomes less (6mm, then 5mm, ...). If the exit pupil of the eyepiece is more than this, light is wasted and you get what is effectively a setup with smaller aperture (less light grasp). Keep the exit pupil up to 5mm or so.

- Sky brightness. Again, with small magnification, light from the sky background is less spread around and the sky is brighter (just use any higher power eyepiece and you will see how the sky turns black rather quickly compared to a medium/low power eyepiece). If you live/observe in areas with light pollution, higher magnification is again preferable.

- With some long focal length eyepieces, eye relief can be too much - this can make eye positioning difficult and cause blackouts and other artifacts.

- True field of view is limited by barrel diameter. This means that an eyepiece with 45mm focal length and one with 30mm focal length can happen to show you the same amount of sky, if the 30mm eyepiece is wide field (wide enough). To establish the differences you need to use tools like https://astronomy.tools/calculators/field_of_view/

Here is a comparison between the two eyepieces in a 130mm F/7 scope (I guess you have something similar):

- The 30mm ES 82 will have a smaller exit pupil at 4.3mm - you are safe with exit pupil size regardless of age - while the Masuyama 45mm gives 6.43mm - not as good if you are older.
- The 30mm ES 82 will give x30 magnification vs x20 for the Masuyama 45mm - darker sky background and better contrast.
- The 30mm ES has 22mm of eye relief and the Masuyama 45mm has 32mm - I think you are safe with both there (maybe 32mm will be too much for someone).
- The 30mm ES 82 will sport a slightly wider true field of view - it will show the same or even a larger extent of sky, so you are not losing anything there.

The only thing I personally don't like about the 30mm ES82 is the fact that it weighs 1 kg - that is just a massive EP. I'm quite happy with my ES68 28mm for wide field needs. The small sketch below shows where these exit pupil and field figures come from.
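Here are those numbers in a few lines of Python for a 130mm F/7 scope. The field stop diameters are assumed typical values (check manufacturer specs); the rest follows from the standard formulas exit pupil = aperture / magnification and TFOV ≈ field stop / scope focal length:

```python
import math

# Eyepiece comparison in a 130mm F/7 scope (focal length 910mm).
aperture_mm = 130.0
scope_fl_mm = aperture_mm * 7  # 910mm

# Field stop diameters below are assumptions - check manufacturer specs.
eyepieces = {
    "ES 82 30mm": {"fl": 30.0, "field_stop": 41.0},
    "Masuyama 45mm": {"fl": 45.0, "field_stop": 39.0},
}

for name, ep in eyepieces.items():
    mag = scope_fl_mm / ep["fl"]
    exit_pupil = aperture_mm / mag                       # mm
    tfov_deg = math.degrees(ep["field_stop"] / scope_fl_mm)
    print(f"{name}: x{mag:.0f}, exit pupil {exit_pupil:.2f}mm, TFOV {tfov_deg:.2f} deg")
```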
  7. There is not much OIII in this target, and you have captured it. If you have 80 x 5 minutes in both OIII and Ha, that makes 6 hours and 40 minutes in each channel - plenty to make a good image. Since you attached a 1600 x 1200 image, I take it you don't mind a smaller image. You can boost your SNR by binning your data. Bin x2 will give you a x2 boost in SNR and about a 2200 x 1700 image, while bin x3 will give you a x3 boost in SNR and something like a 1500 x 1100 image. You can choose whichever is better depending on how large you want your image to be (see the sketch below). Next, you can use the Ha layer as the luminance layer and use Ha+OIII just for color. The background is a bit too light - it can be darker - and there are noise reduction artifacts - maybe ease off a bit on the noise reduction.
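If you want to see the binning effect for yourself, here is a tiny numpy sketch (the noise level and image size are arbitrary): averaging 2x2 pixel blocks leaves the signal untouched and cuts random noise in half:

```python
import numpy as np

rng = np.random.default_rng(0)
img = 10.0 + rng.normal(0.0, 5.0, size=(1200, 1600))  # flat signal + noise

# Software bin x2: average each 2x2 block of pixels
binned = img.reshape(600, 2, 800, 2).mean(axis=(1, 3))

print(img.std())     # ~5.0  (noise before)
print(binned.std())  # ~2.5  (noise after - x2 SNR boost, half the size)
```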
  8. For adapting the 130P to a photo tripod, have a look here: A ball head and a small Vixen clamp are all you need.
  9. Do you have first hand experience with both scopes? I'm finding it hard to believe that such a scope at that price will beat the ST80. The difference in price between two already very cheap scopes must account for the better focuser, the general fit & finish of the tube, and the retractable dew shield. That means both scopes use the same or similar glass types (no exotic glass at that price) and both are F/5. How much better can one scope be than the other?
  10. I've heard that it is so, and it had better be, at F/7.5 vs F/5. A slower achromat with smaller aperture will certainly give better views, but the longer the scope, the less portable it is and the sturdier the mount it requires. I would personally choose the Opticstar AR80S over the ST80 almost always - it is certainly better corrected for color, has a nicer focuser and a retractable dew shield, and in general looks like a higher quality product. It is however heavier at 3 kg and longer at half a meter (almost 60cm with the dew shield extended). Add a good diagonal, finder and eyepiece and you are already at the limit of what a photo tripod can carry in alt-az configuration with most travel heads. It will take up almost all of the budget, so there will be no room for a tripod az head to mount the scope on. On the other hand, the ST80 is rather compact at 40cm and 1.3 kg, and given that you can get it for less than £100, there is some room left for an az head for the tripod - so I think it's worth mentioning as an option.
  11. You are a bit tight on budget there, so probably the most reasonable option would be to go with a compact / tabletop dob. I would otherwise recommend something like this: https://www.teleskop-express.de/shop/product_info.php/info/p1151_TS-Optics-70-mm-F6-ED-Travel-Refractor-with-modern-2--RAP-Focuser.html coupled with something like this: https://www.teleskop-express.de/shop/product_info.php/info/p9334_TS-Optics-Tilting-Head-and-Altazimuth-Mount-for-photo-tripods.html Refractors are better travel scopes in general: more compact, they provide both low power views and high power views for occasional moon gazing and planetary, you don't need to worry about collimation, and in all likelihood they are easier to mount. Your budget is the limiting thing here, and if you want to go the refractor route, you will be looking at something like the ST80 or the above mentioned OpticStar achromat. Such scopes are good low power scopes for general observing and low power moon, but planets will be out of reach with them (only very low power) as they suffer from chromatic aberration - false color / purple fringing that is particularly bad at high power / planetary viewing (high contrast targets).
  12. Nice image. There is too much red in the background that I can't seem to find in other renditions of this target - it is probably an artifact rather than an actual feature of the image. Maybe try not to push the image that much? Btw, on the topic of the Moon, the situation is rather clear - it just acts as another source of LP, the same as imaging from a large city center. If you otherwise have good skies - like mag 21 or more - then these conditions (provided that the Moon was not directly in the direction of the target) are pretty much the same as the ones many people (including me) have on a regular basis - mag 18.5 or so. On a night without the Moon, it would probably take you something like 1h or so to reach the same quality of image. A rough sky-brightness calculation is sketched below.
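A hedged back-of-envelope for that time estimate, assuming imaging is sky-noise limited (so time to a given SNR scales with sky flux):

```python
# Sky-limited approximation: time to reach a given SNR scales with sky flux.
dark_sky = 21.0  # mag/arcsec^2, the "good skies" case
moon_sky = 18.5  # mag/arcsec^2, moonlit / city-like case

flux_ratio = 10 ** (0.4 * (dark_sky - moon_sky))
print(f"Moonlit sky ~{flux_ratio:.0f}x brighter -> "
      f"~{flux_ratio:.0f}x less time needed under dark sky for the same SNR")
```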
  13. You really don't need to worry one bit about field rotation. For all intents and purposes, within the duration of a single sub there simply is no field rotation to speak of. Any field rotation that accumulates over the recording session is dealt with in the alignment / stacking phase.
  14. See this post: Budget limited to ~ £900.
  15. Yes, sometimes a log scale shows stuff better, but sometimes it is harder to read. I don't like a log scale used for full well capacity - there is no way of actually reading anything meaningful off of it.
  16. Not sure what would be misleading in that? Sensors have something called a dark current doubling temperature, and indeed it ranges between 5C and 7C. Not sure why there would be an issue with the graph showing halving of dark current - it indeed approaches 0 asymptotically, as one would expect in that case. In any case, a good temperature is one that provides a negligible increase in total noise. That happens when dark current noise is less than read noise (about a couple of times less; if it is five times less you practically won't notice it at all). Let's say that dark current is about 0.003 e/s/px at -5C. In a 300s exposure that is going to be 0.9 e/px, and the associated noise will be ~0.95 e. Typical read noise for this camera is about 2.5 e. These two combined give ~2.674 e of noise, or only about a 7% increase in noise over read noise - not much really. The arithmetic is spelled out below.
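For anyone who wants to check the arithmetic, here it is in a few lines of Python (independent noise sources combine in quadrature):

```python
import math

dark_rate = 0.003  # e-/s/px at -5C
exposure = 300     # seconds
read_noise = 2.5   # e- RMS

dark_signal = dark_rate * exposure                # 0.9 e-/px
dark_noise = math.sqrt(dark_signal)               # ~0.95 e- (Poisson)
total = math.sqrt(dark_noise**2 + read_noise**2)  # ~2.674 e-
print(f"total {total:.3f} e- = +{100 * (total / read_noise - 1):.0f}% over read noise")
```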
  17. Yes of course, you can take LRGB / RGB type images with a mono camera and filters / filter wheel. It is however a much more expensive and involved option. Having two cameras could actually be an alternative - one mono for solar H-alpha / lunar (lunar imaging can sometimes benefit from a mono camera as there is not much color on the Moon) and one color for the planets. When doing planetary imaging you have a limited time window - like 4-5 minutes for a single session. This means that when shooting with filters / mono you will have something like 1 minute per filter, or that you will have to switch filters every minute until you cycle through them all. If you have a manual filter wheel, that means a lot of wasted time. If the filters are not parfocal, again more wasted time. This is because when you want to switch filter, you need to: stop your capture, manually turn the filter wheel, wait for scope shake to settle, adjust focus if your filters are not parfocal, then start another recording run. You need to repeat this procedure 2-3 times depending on whether you are shooting an RGB or LRGB image. All of that can be avoided if you purchase parfocal filters and an automatic filter wheel - but for that sort of money you can get a color camera instead (even a simple filter wheel and a set of basic interference filters will cost something like $200-$250, and a color camera is not going to cost much more - you can get the ASI224 for $250 from ZWO - not sure about shipping and import duty though). If you are really tight on cash, just go with a color camera instead. You will still be able to shoot Ha with it. It will only use about 1/4 of the camera's pixels (only the red channel), so you will need to barlow it a bit more than you would a mono camera to get to critical sampling rate.
  18. Why not use planetarium software to see what is well positioned for a particular evening? Depending on the way you image, you can think of a way to select your targets. For example, if doing a single target per night, choose one that crosses the meridian about halfway through the night (if you image between 10pm and 2am, select one that crosses the meridian at midnight). If you image multiple targets with a single filter on a particular night - say half an hour per target, 2-3 targets per evening - select the side of the meridian that has the least light pollution and image targets in the order they cross the meridian. If you prefer not to do a meridian flip, again select a target that will stay on one side of the meridian for the whole duration of the imaging run (and choose the meridian side based on light pollution levels). A small sketch of the underlying calculation follows below. BTW - Stellarium is very nice planetarium software. So is Cartes du Ciel.
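If you want to script this instead of eyeballing it in the planetarium, here is a minimal sketch using astropy (assumed installed); the site coordinates and date are placeholders, and local midnight is treated as UTC for simplicity. A target crosses the meridian when the local sidereal time equals its RA:

```python
import astropy.units as u
from astropy.coordinates import EarthLocation
from astropy.time import Time

# Placeholder site and date - substitute your own
site = EarthLocation(lat=52.0 * u.deg, lon=-1.5 * u.deg)
midnight = Time("2021-02-15 00:00:00")  # local midnight, treated as UTC here

# Targets with RA close to this LST cross the meridian around midnight
lst = midnight.sidereal_time("apparent", longitude=site.lon)
print("LST at midnight:", lst)
```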
  19. Hi and welcome to SGL. It depends on your priorities. If you are going to shoot H-alpha solar, go with a mono sensor; otherwise, you might find that OSC (one shot color) sensors are far easier to use. You need a low read noise, good quantum efficiency, fast download rate CMOS sensor for best results. The best color sensor for the job is currently the ASI385 (or other vendors with the same sensor - I know ZWO cameras, so I'll list ASI models, but other vendors have cameras with these chips as well), followed by the ASI224. If you plan on doing lunar and solar, then chip size matters; for planets all sensor sizes will be good enough, since planets fit on even the smallest sensors. In the mono variety, you have the following choices: ASI290 and ASI178. Choose one with a USB 3.0 connection rather than USB 2.0, since it will provide the download rates needed for high fps.
  20. For those that don't use PI, here is a "little tutorial" on how to do color combination in Gimp. I created a gradient of red and green colors, always maintaining a ratio of 10:1 (red goes from 0% to 10% and green from 0% to 1%, and then it's stretched). Here is the result of a regular stretch: Same thing stretched without losing color saturation: The second looks less bright - but that is a limitation of the display rather than of the technique. If you maintain color you can't make it as bright as you like - you can't make red have a value larger than 1, so green is limited to the value 0.1 (if we keep the 10:1 color ratio from my previous example). Here is the procedure:

- Take the RGB unstretched image and decompose it into R, G and B channels (select decomposition into layers).
- Set each layer mode to 100% lighten (which is in fact max). Flatten this image - this is the max of our three colors.
- Again decompose the RGB unstretched image, this time producing three images - each of R, G and B - instead of layers.
- Copy/paste the max that we created in the previous step on top of each of these three. Set its layer mode to divide 100% and flatten each. These are the ratio images.
- Take luminance and do a histogram stretch to your liking. Make three copies of this luminance (they will become the final red, green and blue).
- Now copy each of the ratio images and paste it into the respective copy of luminance. Set the layer mode to multiply 100% and flatten each.
- RGB combine the resulting images.

The same math in numpy form is sketched below.
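Outside of Gimp, the same math looks like this in numpy (a sketch with a random test image; here the per-pixel max doubles as the luminance, whereas in the tutorial above you would stretch your real luminance):

```python
import numpy as np

rng = np.random.default_rng(1)
rgb = rng.random((100, 100, 3)) * [0.10, 0.01, 0.02]  # linear RGB test data

mx = rgb.max(axis=2, keepdims=True)   # per-pixel max(R, G, B) - "lighten/max" step
ratios = rgb / np.maximum(mx, 1e-12)  # "divide" step: the ratio images

lum = mx[..., 0] ** 0.4               # any non-linear stretch of the luminance
out = ratios * lum[..., None]         # "multiply" step: stretched result

# Channel ratios are unchanged by the stretch:
print(rgb[0, 0] / rgb[0, 0].max())
print(out[0, 0] / out[0, 0].max())
```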
  21. I don't want this to turn into a debate on this topic; however, I do think that there are certain aspects of photography that make it a bit more of a true rendition of reality than an artist's impression of reality. Even we, as a community, sometimes debate how much processing is "allowed" in order not to cross that documentary line of photography.
  22. It is due to the non-linearity of the histogram stretch. Imagine we have a very simplified setup - a source of light that produces 10 parts red for one part green light (matched to our camera - like I said, very simplified). In one second our camera will record 10 units of red and 1 unit of green. In 100 seconds, we will record 1000 units of red and 100 units of green. Even if the light source is very far away - so that we register only 1/1000th of the light in that arrangement - we will still record the same R:G ratio of 10:1, in a one second exposure or in a 50 second exposure. The important thing to see is that the RGB ratio does not change for a particular target if you change the intensity of light (whether by the star being further away, using a smaller scope, or using a shorter exposure - whatever the reason).

Now let's examine a stretch curve that is non-linear. The above graph represents a non-linear histogram stretch - it takes the pixel intensity value from the X axis and maps it to the Y axis. I also added a pure linear transform (a bit less than 45 degrees, so it actually changes pixel values). Let's examine what happens to original pixel values and transformed pixel values. I tried to maintain the 10:1 ratio mentioned previously - so on the X axis we have two vertical lines representing two pixel values, one red and one green. Red is about 10 times more than green (x10 further from the origin). Then we have two horizontal lines for each of these vertical lines - the vertical line goes up to the diagonal and joins the dotted horizontal line (linear mapping), and it goes up to the curve and joins the regular horizontal line (non-linear stretch). If you look where the dotted red and green horizontal lines intersect the Y axis, you can see that the Y values maintain the 10:1 ratio (by triangle similarity, if my drawing is poor) - but look at what sort of ratio the Y values resulting from the non-linear stretch have (the full horizontal red and green lines' Y intersections). It is more like 3:1 now, no longer 10:1. Doing the same non-linear stretch on all channels at the same time will change the color ratios. In fact it will change them in such a way as to "bring them closer together" - closer to a 1:1 ratio. A 1:1:1 ratio of RGB is gray - so whenever you bring colors closer to that ratio and reduce the difference between them, you are losing saturation.

How do you keep the color ratio in your processing workflow, then? That is quite easy - you need luminance, and you stretch only the luminance. For LRGB you have luminance. For RGB you can either make a synthetic luminance or use G as luminance, depending on the target and the type of camera used. Most DSLR cameras have a G channel that is made to mimic the human brightness response. If you want to get luminance as humans would see it, just use the G channel of these cameras (it is also better to use G as luminance with OSC because twice as many pixels collect it as R or B, so it will have better SNR - but like I said, that depends on the target; if you shoot an OIII+Ha target you will be better off just adding R+B).

Once you have the stretched luminance, you multiply it with the scaled R, G and B to get the actual RGB values. For example, let's go with the above 10:1. First we scale: Rscaled = R/max(R,G,B), Gscaled = G/max(R,G,B), Bscaled = B/max(R,G,B). Here max(10, 1) = 10 (never mind B now, we are only using R and G), so R = 10 / 10 = 1 and G = 1 / 10 = 0.1. Now if we have luminance boosted by the stretch to 0.5, the actual pixel values will be R = 0.5, G = 0.05. If we have luminance boosted to 0.8, then R = 0.8, G = 0.08. If luminance is stretched to the saturation point = 1, then R = 1 and G = 0.1 - we still have the proper ratio. This means that even star cores will not be white but will have proper color - a common problem with a regular stretch is that star cores saturate really quickly and become white. A small numeric demonstration follows below.

Above I outlined a workflow that preserves color ratios - this is one part of getting proper color in an image. Other parts include doing a color transform on the raw color data (often called color balance) and, later, applying gamma 2.2 to the linear color data - because images, unless color managed, are expected to be in the sRGB standard. Part of the transform from raw/linear RGB values to the sRGB standard is to apply a gamma of 2.2 at the end (in fact gamma 2.2 is an approximation; there is a precise expression that is a bit more mathematically involved, but for a nice image gamma 2.2 is sufficient).
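Here is the whole argument in a few lines of Python - an arbitrary made-up stretch function applied two ways, showing the ratio collapsing in one case and surviving in the other:

```python
r, g = 0.10, 0.01                  # linear pixel values with a 10:1 ratio

def stretch(x):                    # an arbitrary non-linear stretch
    return x ** 0.25

# Stretch each channel directly: the ratio collapses toward 1:1
print(stretch(r) / stretch(g))     # ~1.8, no longer 10

# Stretch luminance only, multiply scaled channels back in: ratio preserved
r_s, g_s = r / max(r, g), g / max(r, g)  # 1.0 and 0.1
lum = stretch(max(r, g))                 # stretched luminance
print((lum * r_s) / (lum * g_s))         # exactly 10
```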
  23. Again - that is up to processing. The actual brightness of something in an image hugely depends on the level of stretch - much more so than the few percent difference possible due to scattering (and in reality it's not even that much). I see the problem as follows:

- Most people don't bother with color calibration of their gear - they just compose R, G and B into color and don't understand the properties of raw and gamma-corrected color spaces. Most even do the color composing wrong - they compose RGB while at the linear stage and then apply the same level of stretch to those colors. This all leads to lower saturation, and then they need to increase saturation to bring it back.

- People who are good at this tend to have been doing it longer and to have higher end gear - either because they are ready to invest more into it, or because over their time in this hobby they have accumulated enough valuable gear. But with time you gain experience as well, and that means you get good at making images.

The point being: you don't color calibrate, you use high end gear (like the mentioned Astrodon filters) that imparts a certain "cast" to the image if you don't do proper color calibration, and you make good images that people tend to take as reference work. This leads to setting a "skewed" standard of what a certain galaxy should look like in images. Over time, less experienced imagers "grow up" trying to match the popular belief of what a certain galaxy should look like. But in reality there is just one way a galaxy really looks - we are not dealing with colorful objects that you could photograph in daylight or under LED lighting and that can therefore change colors; we have light sources that always emit the same spectrum - they have very definite colors (take the same light and boost it enough to produce a color response in the human eye, and it will have definite colors that most observers will agree upon). There is no "artistic freedom" in that. A choice in how to render the object - there is, but we can classify it broadly into two categories: right and wrong.
  24. Why would a reflector push more light into the star halo, and how would that impact the color of the star? Why would the filter be responsible for the "deepness" of the red - it either passes or blocks wavelengths, and some signal is recorded. It is up to the processing workflow to actually assign "color meaning" to that recorded signal.
  25. Here is another update. I realized that I have a small CS lens that came with a guide camera - 2.5mm focal length. This lens is not suitable for my ASI178 camera in many ways (one of which I figured out during the test today). First, it is made for a 1/2.5" sensor size, while the ASI178 is larger - 1/1.8" format. This means that anything the lens picks up will be concentrated at the center of the sensor with "extreme vignetting" (if it can even be called that). Second - as I just found out, due to a mistake - the ASI178 cooled version requires a C-mount lens rather than a CS-mount lens, so only C-mount lenses can be used on this camera model. I was under the impression that the cooled ASI178 also uses CS, but that was my mistake: I did not bother to find the specs for the cooled model and assumed it has the same housing as the regular non-cooled model - not true. Here is the diagram for the 178 from the ZWO website (marked with a red arrow is the 12.5mm distance that corresponds to the CS flange distance). And here is the diagram for the cooled version (my unit is earlier than this one - it does not sport the removable 2" nose piece shown in this diagram, but the distances should be the same). Again, marked in red are the important dimensions: 11mm + 6.5mm = 17.5mm - the flange distance for C-mount.

This means that my camera / lens combination could not be focused to infinity and was in fact focused at a much closer distance (like macro mode with a regular lens - if you move the lens further from the camera, it reaches focus at closer distances). This in turn means that the eyepiece was not operating as it should - properly focused, producing a collimated beam. In any case, here is the result of the experiment. It of course has severe vignetting - both in the illuminated zone (due to the 32mm EP having a 27mm field stop and the scope having a 21mm rear baffle) and in the huge area of the sensor that is completely in the dark - that is because of a 1/2.5" lens used on a 1/1.8" sensor: only a 6.4mm diameter circle is going to be illuminated out of the 8.9mm sensor diagonal - the inner 72%. The reduction in focal length is much less than I expected - something like x0.8. I have no idea why this is. Maybe because the lens was focused so close instead of at infinity? Another thought is that the regular pixel-per-angle formulae probably don't give correct results for focal lengths as short as 2.5mm (see the sketch below).

I was about to order an 8mm fixed focal length lens for 1/1.8" and a 4-12mm varifocal lens for a 1/2" sensor - I simply can't find a suitable varifocal lens for a 1/1.8" sensor - in the hope that one of them would be suitable, but now I'm having second thoughts about this approach. I'm not happy with the results, but on the other hand the setup was not as intended - a CS lens was used on a C mount, and the quality of the result possibly depends hugely on this. I will also need to find some sort of "short" CS-thread / T2 adapter to make everything work - I actually needed to push the lens about 1-2mm further in order to have enough T2 thread to make everything a threaded connection.

Back to the so far preferred method - the focal reducer. As luck would have it, I had a chance to test this setup with an artificial star. The Sun was in such a position that something on the TV tower gave a reflection for a brief period of time. This allowed me to evaluate the "spot diagram" at the center and at the edge of the field. I was planning on doing two tests - a threaded connection for the focal reducer, and reversing the lens (because I reversed it once before and wondered if it would work better with the original orientation) - but as this artificial star opportunity presented itself, I took it, to see what sort of aberrations I can expect.

First, the center of the field (well, actually around 1/3 off axis): this looks rather good - it is round and not overly large. I would be happy having stars like that. Seeing was rather poor, and this is the best frame out of the 15 or so I took in fast succession. Now the edge of the field performance: we no longer have a single point but rather a streak in the direction of the center. This looks like tangential astigmatism, but as we will see, it is very much to be expected in this setup. The other corner: same thing, again in the direction of the center of the FOV. I would be rather discouraged by this if it were not for the text I read a few days ago: https://www.telescope-optics.net/miscellaneous_optics.htm (section on focal reducers), which has a spot diagram for a cemented doublet focal reducer. We are interested in the FLAT FIELD part of that spot diagram, as we are using a sensor that is flat. And a small reminder - the above edge of the field is at about 0.5-0.6 degrees. I think that spot diagram almost perfectly matches what has been recorded. For better results than this, one should have a better corrected / matched reducer (or perhaps a slower lens corrected at infinity?).

I'm about to test the second proposal - using a slower lens with larger clear aperture to see if it makes a difference. The reducer that I'm currently using is a 1.25" reducer that has about 25mm clear aperture and about 102mm focal length - making it an F/4 lens. Since the scope has a T2 connection, a reducer with up to 40mm could be used, but there is none on the market. However - we need a doublet corrected at infinity? How about a 30mm finder objective? That should have something like 120mm focal length and 30mm aperture (again an F/4 lens) - but we could stop that one down to 25mm, and at 120mm we would have an F/4.8 lens. Or maybe keep the full aperture and see the impact on vignetting. In any case, I don't believe this option is in line with the original idea - having a do-it-all scope with easily accessible parts - as it would require a bit of DIY.
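For reference, here is the standard pixel-per-angle formula that last doubt refers to, in Python, using the ASI178's 2.4µm pixels; whether it still applies at focal lengths as short as 2.5mm is exactly the open question:

```python
def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    """Standard small-angle pixel scale: 206.265 * pixel size / focal length."""
    return 206.265 * pixel_um / focal_mm

# ASI178 has 2.4um pixels; try the 2.5mm lens, the possible 8mm lens,
# and the ~102mm focal reducer used as an objective
for f_mm in (2.5, 8.0, 102.0):
    print(f"{f_mm:6.1f}mm -> {pixel_scale_arcsec(2.4, f_mm):7.1f} arcsec/px")
```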