
vlaiv — Members — 13,106 posts — Days Won: 12

Everything posted by vlaiv

  1. I have to say that this is beyond me ... I have no clue what might be causing it - it is possible that flats are to blame. You said that you used old flats? If you remove the camera from the scope, change focus, or anything like that - old flats are not going to work. You also need to do proper calibration - which includes darks, or at least bias frames. I tried fixing things, and there is simply no way to fix the above. The concentric rings look like some sort of vignetting issue - possibly due to the use of old flats and incorrect calibration. Interestingly enough, the green channel does not have this issue - only red (I did not check the blue one).
  2. Ah, I need to correct myself first - that is the second silly thing I've written on SGL today; it is indeed time for a holiday .... The 80mm F/5 will provide a somewhat wider field than the 70mm F/7.1 - simply because it has a 400mm focal length, not the 500mm I wrote in my first post - 80 x 5 = 400, not 500 as I wrote for some unknown reason. Everything else in my first post is fine apart from that. As for filtering, there are a number of filters you can use, some cheap and some more expensive. In the cheap range, a simple yellow Wratten #8 (light yellow) will do a good job of reducing the blue/purple halo that is the most noticeable part of CA. It does impart a slight yellow cast to the image. On the expensive side, there are various "semi apo" and "fringe killer" filters. I have the Baader Contrast Booster filter and it removes much of the purple halo on my 4" F/10 scope. There is another thing you can do to reduce the chromatic aberration of your achromat refractor - you can actually "slow it down". If you have, for example, an 80mm F/5 scope, you can easily turn it into a 50mm F/8 scope that will have a color index of about 4 in the table above and show minimal CA. You do this with something called an aperture mask - a simple mask you can make yourself out of cardboard or a piece of plastic, with a central hole of the wanted diameter (smaller than the diameter of the lens). This will make the image a bit dimmer and reduce the maximum usable magnification (again, roughly x2 the aperture, so x100 for a 50mm aperture mask), but it will clean up CA nicely. Have a look here for an example: https://10minuteastronomy.wordpress.com/2017/02/11/why-and-how-to-make-a-sub-aperture-mask-for-a-refractor/ On the other hand, if you purchase something like the ED80 above, you won't need to bother with any filters or aperture masks. The ED80 will show far less CA (you will be hard pressed to see it even at the highest magnifications) than you can achieve with either filters or aperture masks.
Do take into account that it will be quite a bit more expensive route than the mentioned ST80 or 70mm Mercury, even with the diagonal and EPs included (although it is offered at a very low price for such a scope), as you will also have to purchase an astronomical mount for it.
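The aperture-mask arithmetic above can be sketched in a few lines of Python (a minimal illustration using the figures from the post, not anything from the original thread):

```python
def f_ratio(focal_length_mm: float, aperture_mm: float) -> float:
    """F-ratio is focal length divided by clear aperture."""
    return focal_length_mm / aperture_mm

# An 80mm F/5 scope has a 400mm focal length.
print(f_ratio(400, 80))   # full aperture -> 5.0 (F/5)
# Masking it down to a 50mm opening keeps the 400mm focal length
# but slows the scope to F/8, greatly reducing visible CA.
print(f_ratio(400, 50))   # masked -> 8.0 (F/8)
# Max useful magnification is roughly 2x the aperture in mm:
print(2 * 50)             # -> 100, i.e. x100 with the mask on
```

The focal length never changes - only the working aperture does, which is why the mask "slows" the scope.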
  3. Hi, and welcome to SGL. The actual FOV depends on a couple of factors and is in principle related to F/ratio. Maybe the easiest way to think about it is via magnification and the apparent FOV of the eyepiece. Eyepieces have an apparent FOV (AFOV), and the eyepieces supplied with your scope will most likely give you about 52 degrees of AFOV. Eyepieces also have a focal length, which determines the magnification they give with any particular scope. To obtain magnification - divide the focal length of the scope by the focal length of the eyepiece. To get true field of view - divide AFOV by magnification. Let's do an example - let's compare a 25mm, 52 degree eyepiece in the two scopes you mentioned: the 70mm F/7.1 scope (500mm FL) and the 80mm F/5 scope, again 500mm FL. The 25mm EP will give the same magnification in both scopes - because their focal lengths are the same - and it will be 500mm / 25mm = x20. You have 52 degrees of AFOV, and dividing by the magnification gives 2.6 degrees of true field visible in both scopes. Although the scopes are of different "speed" (f/ratio), they give the same field of view with the same eyepiece because their focal length - and hence the obtained magnification - is the same. This goes to show that F/ratio alone can't be used to determine true field of view. F/ratio can tell you something though - the slower the scope, the more likely it is to have a longer focal length, and a longer focal length means higher magnification with a given eyepiece - which in turn means a smaller field of view. However, this also depends on aperture (two things go into F/ratio - aperture and focal length). For this reason, while F/ratio is in principle related to true FOV, you need to know what you are comparing (same-aperture scopes with different F/ratios, different F/ratios but the same focal length, or the general case where everything differs - in which case you need the above calculation to determine TFOV).
Here is another good thing to know - in general, the maximum useful magnification of a telescope is about twice the aperture in mm. So a 70mm scope will have a max magnification of about x140; an 80mm, about x160. This rule applies to "perfect" scopes. If a scope has issues - like chromatic aberration or a less than perfect figure - you will not benefit from such magnification. Magnification is also limited by the stability of the atmosphere. In average seeing conditions (seeing describes how "wobbly/blurred" the image is due to the atmosphere) you will be limited to x100-x150. In better seeing you can go up to x200. Only in excellent seeing conditions is it worth going over x200. Magnification is not everything. It is useful for the planets and the Moon and some types of observing like splitting close double stars. Aperture is the more important attribute of a scope. If your preference is sharp views of the planets - get the first scope, the 70mm F/7.1. If your preference is deep-sky observing - get the scope with the 80mm aperture, the second one. In answer to your question about levels of chromatic aberration in a refractor - there is a nice little chart that sums it all up, let me see if I can find it for you: According to this chart, the 80mm F/5 scope is in the "yellow zone" - CA (short for chromatic aberration) will be visible and obvious, and in principle you can reduce it with an appropriate filter. The 70mm F/7.1 will also show it, but it will not be as obvious (only at higher magnifications) and you will be able to filter it fairly well with an appropriate filter. I have a 102mm F/10, which according to this chart shows about the same level of CA as a 70mm F/7.1. In that scope CA is there: you can see it at high magnifications, at very high magnifications it is obvious, while at medium power you need to search for it. This is of course provided that the target you are looking at is bright enough - like the Moon, the planets, or very bright stars.
Using an appropriate filter almost completely removes it. I used to have a 102mm F/5 scope. That is a bit worse than an 80mm F/5, and it was all but unusable for viewing planets at anything other than low power. Such a telescope is a wide-field deep-sky scope - good for fainter stars / star clusters / wide nebulae and galaxies, all viewed at low to medium power.
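The two formulas from the post above (magnification, then true field of view) can be checked with a tiny sketch; the numbers are the ones from the worked example:

```python
def magnification(scope_fl_mm: float, eyepiece_fl_mm: float) -> float:
    """Magnification = scope focal length / eyepiece focal length."""
    return scope_fl_mm / eyepiece_fl_mm

def true_fov_deg(afov_deg: float, mag: float) -> float:
    """True field of view = apparent field of view / magnification."""
    return afov_deg / mag

# 500mm FL scope with a 25mm, 52-degree eyepiece:
mag = magnification(500, 25)
print(mag)                   # -> 20.0 (x20)
print(true_fov_deg(52, mag)) # -> 2.6 degrees of true field
```

Both 500mm scopes give identical results here regardless of F/ratio, which is exactly the point made in the post.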
  4. Yep, just realized that I started with two reasons but listed only the 8-bit one .... it's time to go on vacation and give the ol' brain a rest ...
  5. Like others said - it is a great start. You probably have more data in there that could be pulled out with proper processing, but like imaging, processing has a learning curve - and with practice you will get better at it as well. 15 minutes of total imaging time is short, and you should not expect great results from it - so for your next target, do a bit more - results will be much better if you go for an hour or two, or even four hours total (that is about as much as one can get on a single target per night). Now to address your questions, and some general tips. This can only be corrected with flat calibration. You need to take flats for your setup to correct any vignetting / dust shadows that appear in the image. Look it up on the internet. The best way to do flats is with a flat panel, but in the absence of one people use a T-shirt over the aperture and take flats while there is still some light out (either in the evening at setup time, or early in the morning after the night's session). You can also use a laptop/tablet screen with a white image displayed full screen. Aiming the scope at a uniformly lit white wall works as well. You need to take your flats while everything is still set up, because flats need the same focus, the same camera orientation and all the dust in the same place, so there is probably no chance to do it now to correct this image. You can try to correct it in post-processing though - select just that part of the image with a circular selection and adjust the brightness a bit to match the surroundings. It is an easy fix in this case since there is not much of anything in that part of the image. This is produced by a hot pixel. A simple way to deal with it is to use the sigma-clip stacking method. For it to work best you need a lot of frames - it is a statistical method that estimates whether some values are too large or too small and excludes them from the result. One more reason to get more frames and a longer total imaging time. With 20 frames it will still work, so give it a go.
This is a big NO-NO. The only time to use JPEG in astrophotography is when you have a finished image and want a small file to put on the web (to post here or elsewhere). JPEG has two features that make its use highly problematic in astro processing. First, it is an 8-bit-per-color format - you lose much of your precision in 8-bit formats. Ideally you want to use 32-bit (TIFF or FITS) as your intermediate save format between processing steps. Sometimes you can get away with using only 16 bits, but I strongly recommend against that as well. It's best to use a 32-bit format. I personally use Gimp 2.10 for much of my processing. Others use Photoshop, and very often dedicated astro processing software like PixInsight (which is probably too complex for you at this stage - get a bit more practice in both imaging and processing before you consider purchasing it; it has way too many options that will just confuse you right now). Have a look at a recent thread for simple processing steps in Gimp: This is a somewhat advanced topic - proper color in astro images. For the time being you should just try to do the color balancing yourself with the color mixer / curves to get a feel for it, and leave color calibration for later. You can increase saturation in processing if you want a bit stronger colors, but don't overdo it - it gives the image an unnatural feel if you push it too much. This is generally done via histogram manipulation - curves and/or levels. However, your background is fairly dark, and in general you don't want your background to be completely black - it creates an artificial feel in the image and you run the risk of clipping your signal. The most natural-looking astro image has its black point (or rather very dark gray) at a value of about 4-5%.
If you like, you can post your unprocessed stack (try to save it in a 32-bit format, and if you can't - at least 16-bit) and I'm sure people will do some processing on it so you can see exactly how much data there is in your image and what sort of result you should be striving for (maybe you'll even get some processing tips for your image that you can try yourself for practice).
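The sigma-clip stacking mentioned above can be illustrated with a small NumPy sketch - a toy example under my own assumptions, not DSS's actual implementation. Per pixel, values more than k standard deviations from the stack mean are excluded before averaging, which is what rejects hot pixels when enough frames are available:

```python
import numpy as np

def sigma_clip_stack(frames, k=2.5):
    """Average a stack of frames per pixel, excluding outliers
    beyond k standard deviations from the per-pixel mean."""
    stack = np.stack(frames).astype(float)           # (n_frames, H, W)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    keep = np.abs(stack - mean) <= k * std + 1e-12   # inlier mask
    return (stack * keep).sum(axis=0) / keep.sum(axis=0)

# 20 identical flat frames of value 100, one with a hot pixel at (0, 0):
frames = [np.full((4, 4), 100.0) for _ in range(20)]
frames[0][0, 0] = 5000.0                             # simulated hot pixel
result = sigma_clip_stack(frames)
print(result[0, 0])                                  # hot pixel rejected -> 100.0
```

A plain mean would have given 345 at that pixel; the clip discards the single 5000 outlier, which is why more frames make the statistics (and the rejection) more reliable.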
  6. Not sure the issue is with flats. Inspect your subs - you might have a couple of them with high-altitude clouds creating an uneven background.
  7. I'm pretty sure that you don't have to give a special price based on that - it can't be seen in the image, and I'm pretty sure it will not impact the performance of the scope. Maybe just give full disclosure about it, along with what you think would be a fair price for a second-hand item given its overall condition?
  8. Ah, forgot - I also did a x2 bin in ImageJ - prior to background removal and color calibration.
  9. ImageJ with a custom plugin I wrote was used to remove the background gradient. It looks for the faintest pixels in the image and tries to do a linear fit on those pixels only. I do multiple rounds of this fitting / removal (until the residual gradient is small enough compared to the signal in the image). Next was color calibration - again in ImageJ, I measured a star's value and used pixel math to multiply each channel by the proper value for white balance (I used Stellarium to find a suitable star; however, it looks like Stellarium has wrong/outdated information on stellar color indices due to the catalogue used, and this sometimes fails - in this case there was an excessive red cast after this calibration). The result of these two operations is the linear TIFF I provided the link to. After that, it was all Gimp 2.10 - levels/curves to do the stretch and make slight color corrections (toned down red, boosted green in bright areas to get a nice yellowish color in the galaxy core, and boosted the low values of blue to give the outer star lanes a bit of "pop"). Horizontal / vertical flip is self-explanatory. In the end I did a bit of selective noise reduction. This is done by creating another layer of the image. On this layer you perform noise reduction - I used G'MIC-Qt under Filters, then selected "Repair", and somewhere down the list there is wavelet denoise. I ran the default settings. This tends to blur details somewhat, so we need to selectively mix this layer into the base layer. We do this by adding a layer mask to the denoised layer - value only and inverted. This blends the denoised version into the dark areas only (mostly background) - you can adjust the strength of the effect by setting the opacity of the denoised layer lower than 100% (I think I set it somewhere around 70-80% - I was not paying attention to the actual number and was looking more at the image to find the most pleasing mix).
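The gradient-removal idea described above (fit a linear background to the faintest pixels only, then subtract, repeated over several rounds) can be sketched like this. This is a simplified stand-in for the author's ImageJ plugin, assuming a single-channel image and a purely linear gradient:

```python
import numpy as np

def remove_linear_gradient(img, faint_fraction=0.2, rounds=3):
    """Fit a plane a*x + b*y + c to the faintest pixels and subtract it.
    Repeating the fit/removal handles a linear background gradient."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    out = img.astype(float).copy()
    for _ in range(rounds):
        # Select the faintest pixels - presumed to be pure background.
        threshold = np.quantile(out, faint_fraction)
        sel = out <= threshold
        A = np.column_stack([xx[sel], yy[sel], np.ones(int(sel.sum()))])
        coeffs, *_ = np.linalg.lstsq(A, out[sel], rcond=None)
        plane = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]
        out -= plane
    return out

# Synthetic check: flat background of 10 plus a left-to-right gradient.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
img = 10 + 0.5 * xx
flat = remove_linear_gradient(img)
print(float(flat.std()) < 1e-6)   # gradient fully removed -> True
```

Fitting only the faint pixels keeps stars and galaxy signal from biasing the background estimate, which is the key trick in the description above.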
  10. Here we go again (it's 28MB so it should not be an issue, but in the previous post it failed twice for some reason): Ok, it failed again ... I'll upload it to my server and post a download link here ... http://serve.trimacka.net/astro/Forum/2019-09-01/post_01/andromeda.tif In the meantime, here is a slightly more tweaked version (need to stop now, or I'll just get into that "which version is better" mental loop ...):
- horizontal flip
- slight tweak on the green curve to make the galaxy core yellowish rather than reddish
- slight tweak on the blue curve to make the blue low range a bit more "sparkly"
- selective wavelet denoising ...
  11. Ok, so here is the 32-bit TIFF with some things done to it:
1. I binned x2 to recover some SNR.
2. I wiped the background of any gradients.
3. I tried color balance on one star that Stellarium lists with a color index of 0.32 (which should be a white star). The results are not the best - too much red - but that can be fixed with curves, like I did in my processing.
4. Vertical flip to put Andromeda in a more familiar orientation (maybe I should have done a horizontal flip as well).
I did not do any noise reduction - just a levels / curves stretch (levels as shown above and a bit of curve adjustment to kill off the excessive red) and I got this result: For some reason I can't upload the linear TIFF - it fails. Let me try in another post ...
  12. Ok, this is only in Gimp with 32-bit data - fiddling with levels and curves: But there is a gradient in the image, so I'm going to remove it now and see if the results improve (and of course they should).
  13. Ok, yes, I see what you mean, and I probably jumped the gun on gradient removal. I'm not sure if I can do it in Gimp - I will need to check whether any plugin is capable of doing it properly. I do it in ImageJ with a plugin that I wrote. I can remove the background and then post the results so you can have a go at processing with a clean background. There are a couple of paid programs out there that can do it, but I have not tried any of them. I've heard of GradientXTerminator (a plugin for PS). PixInsight has Dynamic Background Extraction (often abbreviated as DBE). Iris, which is free software, can also do it: http://www.astrosurf.com/buil/iris/tutorial4/doc14_us.htm But it is a bit involved. Color balancing is rather easy once you have your background removed. The simplest form is to find a white star in the image (using Stellarium or similar - look for an F2-class star) and measure its R, G and B levels. Then multiply the channels as needed to get white - that is, equal values of R, G and B (for example, if you measure R to be 0.4, G = 0.6 and B = 0.2, then you need to multiply R by 2.5, G by 1.6667 and B by 5 - the inverses of the measured values). A more advanced version of color calibration would be to measure a bunch of stars and determine a transform matrix that gives you proper RGB triplets depending on stellar class (or rather, temperature). I'll have a go at a bit more serious processing and will post the results, as well as the cleaned (and maybe color-calibrated) image for you to also have a go at processing further.
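The single-star white-balance step above comes down to multiplying each channel by the inverse of its measured value. A minimal sketch using the numbers from the post:

```python
def white_balance_factors(r: float, g: float, b: float):
    """Per-channel multipliers that map a measured white star
    to equal R, G and B values."""
    return 1.0 / r, 1.0 / g, 1.0 / b

# Measured star values from the example: R=0.4, G=0.6, B=0.2
fr, fg, fb = white_balance_factors(0.4, 0.6, 0.2)
print(fr, fg, fb)                    # -> 2.5, ~1.6667, 5.0
# Applying them to the star itself gives equal channels (white):
print(0.4 * fr, 0.6 * fg, 0.2 * fb)  # each ~1.0
```

In practice these factors are applied to the whole image (the "pixel math" step), not just the reference star.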
  14. In DSS, after stacking, just save as 32-bit - either TIFF or FITS. For the item marked as 2, select 32-bit floating point. For the item marked as 3, depending on the type, select some level of compression to get a smaller file (TIFF supports compression, but FITS does not). In the options, choose not to apply adjustments to the image (either embed them or just ignore them, but don't apply them).
  15. You blew out the core a bit. Well, not a bit - a bit more than a bit. You also need to remove the gradient and do color balancing ... Could you post a 32-bit version of the stack? I think a pretty decent image can be obtained with some fiddling around.
  16. The video is private, so I can't see it. But you can check visually whether different recording/playback speeds were used - don't look at the planet's drift speed - look at the speed of the seeing - if it is "dancing" faster in the Saturn video, you have it on "fast forward".
  17. Ah, forgot one very important thing - you say you used a modified web camera. Saturn is considerably dimmer than Jupiter - this generally means that if you want to create videos with both planets properly exposed, the video of Saturn is going to use a longer exposure. Let's say you did a 33ms exposure on Jupiter and something like 100ms on Saturn. You then created a regular video out of both recordings - one that runs at 30fps. Jupiter will move at normal speed because each frame is 1/30 of a second. Saturn will move three times as fast, because each frame is now 1/10 of a second but the movie is displayed not at 10fps but at 30fps - three times faster than it was recorded. (I don't know the actual exposures you used - I'm just showing that, because of different exposure lengths, you can get different playback speeds, and therefore drift rates that appear different when in fact it is the playback rate that differs.)
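The playback-speed arithmetic above can be checked in a couple of lines (the 33ms/100ms exposures are the hypothetical figures from the post, not measured values):

```python
def playback_speedup(exposure_s: float, playback_fps: float) -> float:
    """How many times faster than real time a recording plays back,
    when each frame covered exposure_s seconds of real time."""
    return exposure_s * playback_fps

# Jupiter at ~1/30s per frame, played at 30fps: roughly real time.
print(playback_speedup(1 / 30, 30))   # ~1.0 (real time)
# Saturn at 100ms per frame, played at 30fps: ~3x fast forward,
# so its drift across the sensor looks 3x faster than it really was.
print(playback_speedup(0.100, 30))    # ~3.0
```

Dividing the apparent drift rate by this speedup factor recovers the true drift rate for each planet.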
  18. Ah, ok. I'll record my steps to get to the above image - very basic levels stretching. Btw, you should really use 32-bit precision when saving the stack result in DSS. Don't do any stretching in DSS either - just leave the result linear (I don't know whether you did any stretch in DSS, but the "bulk" of M31 did seem quite bright, as if some histogram manipulation had been applied). Anyway, here we go: The first thing I did was convert the image to 32-bit format (this won't restore the missing bits, but it won't introduce any more rounding, at least nothing significant when working with the image). (Convert in linear light, although I'm not sure it makes any difference.) Next, do one round of levels: Move the right "white" slider left until you see that the galaxy core is starting to saturate, then back off a bit - you want to move it left without making any part of the galaxy core saturate. Move the left "black" slider to the foot of the histogram, again leaving some room. Move the middle "gray" slider to the left. Don't worry if everything in the image turns white - we will correct this in the next step. There is no single "proper" place to put it - you need to do it by feel - the further left you drag it, the more chance background noise will show in the next phase; if you don't drag it far enough, your image will not be stretched enough. Do another round of levels - this time we will make adjustments to bring everything "in order": This time don't move the white slider - we moved it as far as it needs to go; any more and you will cause clipping in the high signal, and we don't want that. For black - do the same - bring it to the foot of the histogram. Use the gray point to adjust how exposed you want the image to be. There you go. Two rounds of levels and you have the data visible.
  19. No, that can't be the reason. Any motion of the planet across the sensor will be due to Earth's rotation. The rate of relative motion of both planets with respect to Earth is so slow that you would need something like an hour to notice anything on the sensor (it's on the order of a couple of arcseconds per hour). We need to ask a simple question - why did each planet move across the sensor in the first place? A follow-up question would be - were both shot at the same focal length (same setup)? If you were using an equatorial mount and tracked both planets, then motion of the planets on the sensor can be explained by a few factors: 1. Improper tracking speed (like Solar / Lunar instead of sidereal) 2. Poor polar alignment causing drift 3. Periodic error of the mount. A different focal length will cause a different "appearance" of the same drift rate on video, simply because the same drift speed in arcseconds/second translates into a different px/second if the sampling resolution (arcseconds/pixel) is different - regardless of the cause of the drift. At the eyepiece this is equivalent to magnification - in a low-power EP drift seems slower than at high power. From the above, 1) would give the same drift rate, so it is hardly the cause. 2) will give different drift rates in RA/DEC depending on the part of the sky and the direction of the PA error - this can give a feel of different drift rates on the sensor if one is horizontal and the other is vertical or at an angle. Point 3) depends on where in the period cycle you currently are - the mount can be tracking slower than sidereal, faster than sidereal, or about the same as sidereal. It might even happen that one planet drifts one way and the other the other way, depending on whether the mount is leading or trailing relative to sidereal. If you are using an Alt/Az mount there could be other reasons for the drift - like the mount not being precise in "knowing" where it is pointing - and that will cause a small drift.
If using a non-tracking mount - it can be an illusion, for the reason already described: one planet could be drifting horizontally, and the other at an angle. The diagonal of the sensor is longer than its side, so a planet crossing diagonally stays visible for longer - that can give the impression it is moving more slowly, when in fact it is moving at the same rate, just over a longer distance.
  20. A quick stretch shows that there is a lot going on there: (again using Gimp 2.10) - I just did a basic stretch to show what has been captured, not actual processing. How did you stack? May I suggest using 32-bit floating point as the file format instead of 16-bit? The added precision is necessary if you are using a DSLR (14-bit single sub, about 48 subs, right?). What is it that you are not happy with about this image?
  21. That is just exceptional! I love the depth and 3D feel of it.
  22. 1µm linear resolution will provide 0.1" angular resolution at a radius of about 2 meters. It might be feasible to fit it into a 1-1.5m diameter if you have sub-micron resolution and are happy with roughly half to a quarter of an arcsecond of angular precision. Not sure if those linear encoders can bend, though?
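Those figures follow from the small-angle relation θ = s / r (this sanity check is mine, not from the original thread):

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600   # ~206265 arcseconds per radian

def angular_resolution_arcsec(linear_res_m: float, radius_m: float) -> float:
    """Angle subtended by one linear-encoder step at the given radius."""
    return linear_res_m / radius_m * ARCSEC_PER_RAD

# 1 micron encoder resolution on a 2m-radius strip:
print(angular_resolution_arcsec(1e-6, 2.0))   # ~0.103 arcsec
# The same 1 micron at 0.5m radius (1m diameter):
print(angular_resolution_arcsec(1e-6, 0.5))   # ~0.413 arcsec
```

Halving the radius doubles the angle per encoder step, which is why sub-micron resolution is needed to keep arcsecond-level precision in a compact mount.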
  23. There are a couple of ways you can determine mount pointing with encoders: Motor shaft encoders / tick counting - this is, for example, how the HEQ5 mount does it. It measures the motor shaft position / number of revolutions. It would work perfectly if the gearing from the motor to the last stage were perfect - no periodic or non-periodic error of any sort. But there is a difference between what the motor shaft outputs and the position of the scope in the sky, and while you think you are pointing at a certain spot, you are pointing somewhere else. Dual low-resolution encoders - it's like a low-bit counter / high-bit counter, or if you are not familiar with that, the best way to explain it would be: one encoder keeps the "hundreds" and the other keeps the position within a hundred (0-99); combine them and you get the actual position. It's a bit like an old analog clock - the small hand gives you the hour and the large hand gives you the minutes; combine the two and you get the exact time. This gives a bit more precision than the previous, motor-shaft option, but how much more depends on the resolution of the encoder on the main shaft (the hour hand). An absolute encoder on the main shaft. It does not strictly need to be an absolute encoder, but it needs sufficient resolution to determine the exact shaft position - something like 28 bits to have arcsecond / sub-arcsecond pointing precision. I assume we want to reach the sort of precision needed for the given requirements - that means imaging at around 0.75"/px. This means you want guide performance of about 0.1-0.2" RMS. You want to be able to have something like 10 seconds between guide exposures. In those 10s, the maximum the mount can deviate from the true position should be around 0.3". That translates into a max drift rate of 0.03"/s. This is very precise tracking. The problem with an Alt-Az type mount is that your speed in Alt and your speed in Az are not constant - they change every second and depend on where the scope is (or rather, should be). This is not so with an EQ type mount.
In normal operation, RA motion is constant and DEC motion is 0, wherever you are pointing. If the mount makes a tracking error - going a bit "forward" or a bit "backward" - that will not impact the DEC rate; it will remain 0. Similarly, if there is drift in DEC because of poor polar alignment, the RA rate of motion will not be affected. With an Alt-Az type of mount, a change in position requires correcting both the Alt rotation rate and the Az rotation rate to keep the pointing correct. If there is an error in either of these two, the scope will be pointing at the wrong place but will think it is pointing elsewhere, and will change the Alt and Az rates accordingly - which in turn will make it drift more, further from the wanted position, and again it will calculate improper tracking rates and drift more .... For this not to happen, you need the most precise encoders you can get - and those would be full-resolution encoders (either incremental or absolute) on the Alt and Az axes of the scope.
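The guiding budget above is simple arithmetic; a short sketch using the figures from the post:

```python
def max_drift_rate_arcsec_per_s(max_deviation_arcsec: float,
                                guide_interval_s: float) -> float:
    """Worst-case drift rate allowed if the mount may deviate at most
    max_deviation_arcsec between guide corrections."""
    return max_deviation_arcsec / guide_interval_s

# 0.3" allowed deviation over a 10s guide exposure interval:
print(max_drift_rate_arcsec_per_s(0.3, 10))   # -> 0.03 arcsec/s
```

At a sampling of 0.75"/px, that 0.3" worst-case deviation stays well under half a pixel, which is what keeps stars round between guide corrections.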