Everything posted by vlaiv

  1. In DSS, after stacking, just save as 32bit - either TIFF or FITS. For the item marked as 2, select 32-bit floating point. For the item marked as 3, depending on file type, select some level of compression to get a smaller file (TIFF supports compression, but FITS does not). In the options, choose not to apply adjustments to the image (either embed them or just ignore them, but don't apply them).
  2. You blew the core a bit. Well, not a bit - more than a bit. You also need to remove the gradient and do color balancing ... Could you post a 32bit version of the stack? I think a pretty decent image can be obtained with some fiddling around.
  3. Video is private, so I can't see it. But you can check visually whether different recording/playback speeds were used - don't look at the planet's drift speed, look at the speed of the seeing - if it is "dancing" faster in the Saturn video, you have it on "fast forward".
  4. Ah, forgot one very important thing - you say you used a modified web camera. Saturn is quite a bit dimmer than Jupiter, which generally means that if you want both planets properly exposed on video, the Saturn recording is going to use a longer exposure. Let's say you did a 33ms exposure on Jupiter and something like 100ms on Saturn. You then created a regular video out of both recordings - one that runs at 30fps. Jupiter will move at normal speed because each frame is 1/30 of a second. Saturn will move three times as fast because one frame is now 1/10 of a second, but the movie is displayed not at 10fps but at 30fps - three times faster than it was recorded. (I don't know the actual exposures you used - just showing that different exposure lengths can give different playback speeds, and therefore drift rates that appear different when in fact it is the playback rate that differs.)
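
To make that concrete, here is a tiny sketch of the playback-speed arithmetic (the 33ms / 100ms exposures are the same assumed example values as above):

```python
# Minimal sketch of the playback-speed arithmetic, with assumed exposures.
jupiter_exposure_s = 1 / 30  # ~33 ms per frame
saturn_exposure_s = 0.100    # 100 ms per frame
playback_fps = 30            # both videos rendered at 30 fps

def playback_speedup(exposure_s, fps):
    """Seconds of real sky motion shown per second of playback."""
    return exposure_s * fps

print(playback_speedup(jupiter_exposure_s, playback_fps))  # 1.0 -> real time
print(playback_speedup(saturn_exposure_s, playback_fps))   # 3.0 -> 3x "fast forward"
```
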
  5. Ah, ok. I'll record my steps to get to the above image - very basic levels stretching. Btw, you should really use 32bit precision when saving the stack result in DSS. Don't do any stretching in DSS either - just leave the result linear (I don't know if you did any stretch in DSS, but the "bulk" part of M31 did seem quite bright, like some histogram manipulation was applied). Anyway, here we go:
     First thing I did was to convert the image to 32bit format (this won't restore missing bits, but it won't introduce any more rounding, at least nothing significant when working with the image). Convert in linear light, although I'm not sure it makes any difference.
     Next do one round of levels: Move the right "white" slider left until you see the galaxy core starting to saturate, then back off a bit - you want to move it left without making any part of the galaxy core saturate. Move the left "black" slider to the foot of the histogram, again leaving some room. Move the middle "gray" slider to the left. Don't worry if everything in the image turns white - we will correct this in the next step. There is no "proper" place to put it - you need to do it by feel - the further left you drag it, the more chance background noise will show up in the next phase; if you don't drag it enough, your image will not be stretched enough.
     Do another round of levels - this time we will make adjustments to bring everything "in order": This time don't move the white slider - we moved it as far as it needs to go - any more and you will cause clipping in high signal, and we don't want that. For black, do the same - bring it to the foot of the histogram. Use the gray point to adjust how exposed you want the image to be.
     There you go. Two rounds of levels and you have the data visible.
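
For anyone who prefers numbers to sliders, here is a minimal sketch of the same two-round levels operation, assuming a linear 32bit image normalized to 0..1 (the slider values below are made-up; in practice you set them by eye as described):

```python
import numpy as np

# Sketch of a two-round levels stretch: clip to [black, white], rescale to
# 0..1, then apply the gray slider as a gamma adjustment.
def levels(img, black, white, gamma):
    out = np.clip((img - black) / (white - black), 0.0, 1.0)
    return out ** (1.0 / gamma)

img = np.random.rand(100, 100).astype(np.float32) * 0.05  # stand-in for a dim linear stack
round1 = levels(img, black=0.001, white=0.04, gamma=3.0)  # big gray-slider move to the left
round2 = levels(round1, black=0.02, white=1.0, gamma=1.2) # second pass: tidy black/gray points
```
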
  6. No, that can't be the reason. Any motion of the planet across the sensor will be due to Earth's rotation. The relative motion of both planets with respect to Earth is so slow that you need about an hour to notice anything on the sensor (it's a couple of arc seconds per hour).
     We need to ask a simple question - why did each planet move across the sensor in the first place? A follow-up question would be - were both shot at the same focal length (same setup)?
     If you were using an equatorial mount and tracked both planets, then motion of the planets on the sensor can be explained by a few factors:
     1. Improper tracking speed (like Solar / Lunar instead of sidereal)
     2. Poor polar alignment causing drift
     3. Periodic error of the mount
     Different focal lengths will cause a different "appearance" of the same drift rate on video, simply because the same drift speed in arc seconds / second translates into different px/second if the sampling resolution (arc seconds / pixel) is different - regardless of the drift cause. At the eyepiece this is equivalent to magnification - in a low-power EP drift seems slower than at high power.
     From the above, 1) would give the same drift rate, so it is hardly the cause. 2) will give different drift rates in RA/DEC depending on the part of the sky and the direction of the PA error - this can give a feel of different drift rates on the sensor if one is horizontal and the other vertical or at an angle. 3) will depend on where in the period cycle you currently are - the mount can be tracking slower than sidereal, faster than sidereal, or about the same. It might even happen that one planet drifts one way and the other the other way, depending on whether the mount is leading or trailing compared to sidereal.
     If you are using an Alt/Az mount there could be different reasons for drift - like the mount not being precise in "knowing" where it is pointing - that will cause a small drift. If using a non-tracking mount, it can be an illusion for the reasons already described - one planet could be drifting horizontally, the other at an angle. The diagonal of the sensor is longer than its side, so the planet will be visible for longer - that can give the impression it is moving slower, when in fact it is moving at the same rate, just over a longer distance.
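
A small sketch of the focal length point: the same angular drift rate gives a different px/s figure at different sampling resolutions (pixel size and focal lengths below are assumed values, not from the posts above):

```python
# Same angular drift, different px/s at different focal lengths.
def sampling_arcsec_per_px(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

def drift_px_per_s(drift_arcsec_per_s, pixel_um, focal_mm):
    return drift_arcsec_per_s / sampling_arcsec_per_px(pixel_um, focal_mm)

# identical 15"/s drift (sidereal rate, untracked) on 3.75um pixels:
print(drift_px_per_s(15.0, 3.75, 650))   # ~12.6 px/s at 650 mm
print(drift_px_per_s(15.0, 3.75, 1950))  # ~37.8 px/s at 3x the focal length
```
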
  7. Quick stretch shows that there is a lot going on there (again using Gimp 2.10) - I just did a basic stretch to show what has been captured, not actual processing. How did you stack? May I suggest using 32bit floating point as the file format instead of 16bit? The added precision is necessary if you are using a DSLR (14 bit per sub, about 48 subs, right?). What is it that you are not happy about with this image?
  8. That is just exceptional! I love the depth and 3D feel of it.
  9. 1um will provide 0.1" resolution at about 2 meter radius. Might be feasible to fit it in 1-1.5m diameter if you have sub-micron resolution and are happy with about half to a quarter of an arc second in angular precision. Not sure if those linear encoders can bend?
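
The geometry behind those figures, as a quick sketch - angular resolution in radians is just linear resolution divided by radius:

```python
import math

# Tape-encoder geometry: radius = linear resolution / angular resolution.
ARCSEC = math.pi / (180 * 3600)  # one arc second in radians

def radius_for(linear_res_m, angular_res_arcsec):
    return linear_res_m / (angular_res_arcsec * ARCSEC)

print(radius_for(1e-6, 0.1))    # ~2.06 m radius for a 1um tape at 0.1"
print(radius_for(0.5e-6, 0.25)) # ~0.41 m - sub-micron tape at 0.25" fits under 1m diameter
```
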
  10. There are a couple of ways you can determine mount pointing with encoders:
     Motor shaft encoders / tick counting - this is, for example, how the Heq5 mount does it. It measures motor shaft position / number of revolutions. It would work perfectly if the gearing from the motor to the last stage were perfect - no periodic or non-periodic error of any sort. But there is a difference between what the motor shaft outputs and the position of the scope in the sky, so while you think you are pointing at a certain spot, you are actually pointing somewhere else.
     Double low resolution encoders - it's like a low bit counter / high bit counter, or if you are not familiar with that, the best way to explain it would be: one encoder keeps "hundreds" and the other keeps position within a hundred (0-99); combine them and you get the actual position. It's a bit like an old analog clock - the small hand gives you the hour and the large hand gives you the minutes - combine the two and you get the exact time. A bit more precision over the previous motor shaft option, but how much depends on the resolution of the encoder on the main shaft (the hour hand).
     Absolute encoder on the main shaft. It does not need to be an absolute encoder, but it needs sufficient precision to determine exact shaft position - something like 28 bits to have arc second / sub arc second precision in pointing.
     I assume we want to reach the sort of precision needed for the given requirements - that means imaging at around 0.75"/px. This means you want guide performance of about 0.1-0.2" RMS. You want to be able to have something like 10 seconds between guide exposures. In those 10s, the maximum the mount can deviate from true position should be around 0.3", which translates into a max drift rate of 0.03"/s. This is very precise tracking.
     The problem with an Alt-Az type mount is that your speed in Alt and speed in Az are not constant - they change every second and depend on where the scope is (or rather, should be). This is not so with an EQ type mount. In normal operation, RA motion is constant and DEC motion is 0, wherever you are pointing. If the mount makes a tracking error and goes a bit "forward" or a bit "backward", that will not impact the DEC rate - it will remain 0. Similarly, if there is drift in DEC because of poor polar alignment, the RA rate of motion will not be affected.
     With an Alt-Az type of mount, a change in position requires correction of both the Alt rotation rate and the Az rotation rate to keep pointing correct. If there is an error in either of these two, the scope will be pointing at the wrong place but will think it is pointing elsewhere, and will change the Alt and Az rates accordingly, which in turn will make it drift more - further from the wanted position - and again it will calculate improper tracking rates and drift more .... For this not to happen, you need the most precise encoders you can have - and those would be full resolution encoders (either incremental or absolute) on the Alt and Az axes of the scope.
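
For the encoder resolution figures, a quick sketch of the tick arithmetic - how many bits a (hypothetical) absolute encoder needs for a given angular resolution over a full turn:

```python
import math

# Encoder bits needed for a target angular resolution per tick,
# over a full 360-degree revolution (360 * 3600 arc seconds).
def bits_needed(arcsec_per_tick):
    ticks = 360 * 3600 / arcsec_per_tick
    return math.ceil(math.log2(ticks))

print(bits_needed(1.0))    # 21 bits for ~1" per tick
print(bits_needed(0.1))    # 24 bits for ~0.1" per tick
print(bits_needed(0.005))  # 28 bits - sub arc second with room to spare
```
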
  11. This has me worried somewhat. I see myself owning a Mesu 200 in some (distant) future, and it looks like it's a bit fussy with regards to balancing. How perfect does balance need to be for it to work properly?
  12. In principle you can, and you should for the Mesu, as there is no point in clearing backlash on a mount that should not have any. I'm not sure it is going to solve your issue though - it is just going to skip clearing DEC backlash and go straight into DEC calibration, where it will again complain about not detecting any motion in DEC (if everything stays the same). Could be that a software upgrade will solve the issue.
  13. I think that this is a crucial piece of information. PHD2 will clear backlash until it sees the guide star move on a guide command. It is clearing backlash in DEC before doing DEC calibration (RA calibration went ok). So it tries to clear backlash, does that a great number of times, then gives up and tries going North - and then it complains about not enough movement in the DEC axis. This simply means that:
     a) for some reason guide commands in the DEC axis are not working properly
     b) DEC guide rate is set so low that it cannot detect motion of the star even if guide commands are working properly (again, very unlikely that 110 moves would not cause the guide star to move, even if a single step was very small)
     c) it guides normally but the friction drive is slipping in DEC (very unlikely because it slews normally, and it is more likely that it would slip in a slew, due to the torque applied, than on a simple guide command)
  14. I'm going to expand a bit on why absolute encoders are necessary on an Alt-Az type mount if you want accurate tracking - it is not the same as with GEM mounts, where they merely increase the precision of tracking; with AltAz they are needed for accurate tracking at all. Maybe a diagram would explain it better - imagine the curvature of things exaggerated so you can see more easily what is going on. Say the scope can't precisely determine its Az position - there is a certain error. The scope is just a tad past the meridian, but "thinks" it is in a pre-meridian position. Software will tell the scope that Alt should increase a bit, but in reality the needed Alt position will decrease a bit - tracking will create an error in Alt because it is going "the wrong way", and guiding will fight this instead of only correcting the things it should correct.
     A similar thing applies to guiding as well - it needs to know the direction of the vector - where Alt is pointing and where Az is pointing - in order to give proper corrections. If the orientation of the scope's Alt/Az differs from that of the guide system, wrong corrections will be given and guiding will not work optimally - it is what happens on GEM mounts when you have a wrong guider calibration. The difference between a GEM guide system and an Alt/Az system is that a GEM guide system in principle needs calibration only once (although people do it every time they change target / part of the sky - good practice because of cone errors and the fact that the RA/DEC axes are not always perfectly at 90 degrees to each other). With Alt/Az you need a constantly changing calibration, and software needs to track this and change things, but in order to do so accurately it must know the exact pointing of the scope. This is why you need absolute encoders. Not sure what the needed precision of such encoders is, but we might be able to calculate drift rates depending on guide command cycle length and position in the sky.
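
To illustrate how the Alt/Az rates keep changing, here is a rough sketch using the standard alt-az transform, with rates taken by finite difference over one sidereal second (the site latitude and target declination are made-up values):

```python
import math

# Alt/Az tracking rates are not constant: standard alt-az transform,
# rates estimated over one second of hour angle.
def alt_az(lat, dec, ha):  # all angles in radians
    alt = math.asin(math.sin(lat) * math.sin(dec)
                    + math.cos(lat) * math.cos(dec) * math.cos(ha))
    az = math.atan2(-math.sin(ha),
                    math.tan(dec) * math.cos(lat) - math.sin(lat) * math.cos(ha))
    return alt, az

SIDEREAL = 2 * math.pi / 86164.0905  # hour-angle change per second, radians
lat, dec = math.radians(52.0), math.radians(20.0)  # assumed site / target
for ha_deg in (-30, -5, 5, 30):
    ha = math.radians(ha_deg)
    a1, z1 = alt_az(lat, dec, ha)
    a2, z2 = alt_az(lat, dec, ha + SIDEREAL)  # one second later
    print(ha_deg, (a2 - a1) * 206265, (z2 - z1) * 206265)  # arcsec/s in Alt, Az
```
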
  15. Yes, the only problem with this approach is that you assign a single weight per sub, and that is not the optimum solution. The level of noise consists of multiple components - one of them being shot noise, whose level depends on signal. Imagine the following scenario - there is one frame that is lower in signal by 10% due to poor transparency, and another frame that has a bit more light pollution. Once you equalize the frames, you end up with the same amount of noise in the background where there is no signal (only LP noise and read / thermal noise), but because the signal was lower by 10% and you had to scale it up to equalize it, you also scaled the shot noise (and the LP noise and read noise - but since LP was higher in the other image, these turn out equal where there is no signal). Now you have a situation where the background weights should be 1/2 : 1/2 because the level of background noise is equal, but in the signal area the weights are different - might be something like 1/2.1 : 1/1.905 (or whatever the proper ratio is to get a total of 1). In the general case you want to assign weights based on the signal level in a particular area, and also in patches of the image (because LP can be a gradient and contribute different levels of noise to different parts of the image) - and this is not really something that is easy to do. I've developed signal-based weight estimation and it works really well. You can assign "bins" - like 16 or 24 different signal level bins - and all the pixels mapped to each bin get their own weight and participate in their own "stack zone".
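
A minimal sketch of that binned, signal-based weighting idea - this is an illustration, not the actual implementation; alignment and background equalization are assumed to have been done already:

```python
import numpy as np

# Per-bin signal-based weighting: pixels are grouped by signal level, and
# each group gets its own inverse-variance weights per sub.
def binned_weight_stack(subs, n_bins=16):
    cube = np.stack(subs)                                    # (n_subs, H, W)
    ref = np.median(cube, axis=0)                            # rough signal map
    edges = np.quantile(ref, np.linspace(0, 1, n_bins + 1))  # signal-level bins
    bin_idx = np.digitize(ref, edges[1:-1])                  # 0 .. n_bins-1
    out = np.zeros_like(ref)
    for b in range(n_bins):
        mask = bin_idx == b
        # inverse-variance weights per sub, measured only inside this zone
        w = np.array([1.0 / max(np.var(s[mask]), 1e-12) for s in subs])
        w /= w.sum()
        out[mask] = sum(wi * s[mask] for wi, s in zip(w, subs))
    return out

subs = [np.random.normal(100, 10, (64, 64)) * g for g in (1.0, 0.9, 1.05)]
stacked = binned_weight_stack(subs)
```
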
  16. Yes, you can, if you know the quality of each sub in comparison to all other subs. Transparency changes from night to night (and even during the course of one night), and although you know the number of subs in each separate stack and their relative weights within one night (for example 1/3, 1/3, 1/3 - because they are of the same quality), you have no way of knowing how the quality of subs on one night compares to subs on a different night. Weighting works by comparing all the subs in a stack and assigning weight based on relative quality within the stack, so if you want proper weights with respect to all the data, put everything in one stack. There is also the matter of the other algorithms we use - like sigma clipping and such - read the other posts to get the idea of how it works, but in principle, the more subs you have, the better the statistics of the data and the more precision in statistical methods like sigma clip.
  17. Well, at least we agree on the minimum accuracy needed, that being 1/10 of an arc second. But from what I can tell, you have a very high gear ratio after that. I'm not sure if you were recommended 9000:1 because of motor torque - it needs a lot of reduction for a small motor to move such a large scope. But let's run the precision calculation again with this new info. So the motor has about 10:1 gear reduction (and it had better be smooth, since the encoder is pre-reduction from what I understand from your quote). 500 x 4 x 10 x 60 x 60 x 2.5 = 180000000 (and that is quite a lot) - that is 555.555 ticks per arc second (if I'm not mistaken). As far as precision goes, that is exceptional, but the more precision you add, the lower your slew rate. Do you have any idea what the max RPM is on this motor? Found it - it says here: http://siderealtechnology.com/MotorSpecs.html "Maximum RPM with SiTech ServoII controller (24 volt Supply Before Reduction): 10,000". That is 166.66 revolutions per second. One revolution is 1.6666 arcminutes, so that's 277.777777 arcminutes per second, or 4.63 degrees per second. Ok, that is actually quite fine (if my calculation is correct). As far as I can tell, the Alt drive calculations are quite ok. What I'm worried about next is the smoothness of the drive components (any little bump is going to be amplified significantly at those reduction ratios), and the lack of absolute encoders. We should discuss how much exact positional information impacts tracking and guiding rates in alt-az. This is also very important, as you can't expect 0.1-0.2" RMS tracking/guiding error if the drive itself makes a wrong move because it does not know exactly where it is pointing.
  18. Could you go into a bit more detail regarding the precision of the motors? 9000:1 is the total reduction of the motor spin, right? Let's say that you need to move through 90 degrees in altitude. You also want at least 0.1" precision in altitude position (to be able to guide properly, and possibly you will want more precision). This gives you 100 revs per degree, or 1.6666 revs per arc minute, or 0.027777 motor revolutions per arc second, so we are looking at roughly 0.002 revolutions of the motor per "step". That is something like 0.72 degrees of precision on the motor shaft, or 1/500 accuracy. I guess that should be doable with a 10 bit encoder on the shaft if we are talking about servo motors.
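
The same chain of numbers as a quick sketch, under the assumption above that the 9000:1 figure amounts to roughly 9000 motor revolutions across the 90 degree altitude range:

```python
# Precision chain, assuming ~9000 motor revs span the 90-degree Alt range.
revs_per_deg = 9000 / 90                 # 100 revs per degree
revs_per_arcmin = revs_per_deg / 60      # ~1.6666
revs_per_arcsec = revs_per_arcmin / 60   # ~0.027777
revs_per_step = 0.1 * revs_per_arcsec    # one 0.1" step -> ~0.0028 rev
print(revs_per_step * 360)               # ~1 degree of motor shaft per step
print(360 / 2 ** 10)                     # ~0.35 deg: a 10 bit shaft encoder resolves finer
```
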
  19. I think this is a very nice setup, and I really love the idea of a "half slit" integrated with it, or a "quarter slit" at least - that way one can still have 3/4 of the field usable for multiple star spectra at once, plate solving and such, and use the slit to get a better spectrum, or at least one that is easier to calibrate and remove background glow from. Out of interest, 40mm + 25mm reduces things by about 40/25 = x1.6, am I right about that? I was thinking about a similar setup for EEVA - a 32mm EP + something like a 10-12mm cheap Chinese "megapixel" lens for CCTV (actually a bit larger than the standard 1/3" ones - I've found a 1/1.8" one for the reasonable price of $25 on aliexpress).
  20. Having thought about this, I'm worried. If you are going to go for something like 0.79"/px, and aim to properly sample at this resolution, you will want something like 0.1-0.2" RMS error in tracking/guiding. I'm not so much worried about the drive (although that is also a major concern); I'm worried about the smoothness of the Alt-Az mechanism. It needs to be extremely rigid, yet so smooth in motion that it does not have even the slightest "stiction / jerk" anywhere along the arc of motion. All of that holding something like 200kg+. I believe this calls for exceptional machining precision and knowledge of materials. You would not want your mechanics seizing due to temperature change, becoming jerky, or whatever - and there is also the potential for slack forming in hotter conditions. At these scales, with such large parts, it can easily happen. Things can even get out of shape if the load is not spread evenly ....
     Another thing to consider is that you are going to need custom software written for this. No guiding software, as far as I'm aware, guides in alt-az mode. The software needs to know the exact pointing of the scope to properly calculate the needed shift for a given guide command. The same goes for tracking software - it needs to know precisely where the scope is pointing at any given time - which means either full precision absolute encoders on both axes ($$$) or some sort of split configuration with calibration - meaning lower resolution encoders on both axes and lower resolution encoders on the drive shafts. With an Ra/Dec system it is fairly easy to determine the tracking rate and the needed "resolution" of the motor (provided you are using steppers) to keep things within certain limits. For Alt-Az this is not such an easy calculation, as the rotation rate changes with respect to where the scope is pointing. You will also be giving up the best position in the sky for imaging - near zenith - as alt-az has trouble tracking properly in this region of the sky. Just some things to consider if high resolution imaging is one of your goals.
  21. Yes, if it produces a single image in the end, then you are doing it right - groups in DSS are "calibration" groups rather than separate stacks.
  22. What did you use to stack your data? I presume PI? A noticeable improvement even with same-sized batches usually comes from the fact that conditions on a particular night differed from the other nights. Let's say that a single night out of three had poor transparency. Subs from that particular night will differ somewhat between each other, but their weights will be similar - close to 1/7 each. The same goes for the other nights. When you combine all three nights in one stack you will end up with weights close to 1/21 for each sub. If one night was poorer than the others, this is not optimal, since there can be a significant difference in quality between subs from different nights. If you put them all in one stack, each will be given an appropriate weight, so you can end up having 1/18 for good subs and something like 1/25 for poor subs - poor subs will contribute less to the final result this way, and good subs more. In the case of stacking stacks from each night, because subs within each night are close in quality to each other, each ends up contributing about the same to the final result - and that is not what you want if you have subs of different quality.
     That is part of the explanation. The other part is related to the use of sigma clip stacking. The more subs you have, the better sigma clip works. Let's say sigma clip "decides" to reject one or two pixels. In a stack of 7 subs this leaves you with 5 subs stacked - a 40% difference! In a stack of 21 subs you are left with 19 subs - maybe even 18 if sigma clip rejects three values - but this time it is only a 16% "loss". The more subs you have, the better off you are with sigma clip.
     It really boils down to your stacking algorithm. If you have an equal number of subs per group - like in your case 3x7 - and you don't use any sort of advanced stacking method, just a simple average, the result will be the same. On the other hand, if you have mismatched groups - like 8 subs on the first night, 7 on the second, 6 on the third or similar - this straight average will give worse results than advanced methods when stacking stacks. In general the best approach is to stack single subs.
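
For reference, a bare-bones kappa-sigma clipping sketch (illustrative only - real implementations such as PI's iterate and use more robust sigma estimates):

```python
import numpy as np

# Kappa-sigma clip: reject pixels more than `kappa` sigma from the per-pixel
# median, then average what remains.
def sigma_clip_stack(subs, kappa=2.5):
    cube = np.stack(subs)                     # (n_subs, H, W)
    med = np.median(cube, axis=0)
    std = np.std(cube, axis=0)
    keep = np.abs(cube - med) <= kappa * std  # per-pixel rejection mask
    return (cube * keep).sum(axis=0) / keep.sum(axis=0)

subs = [np.random.normal(100, 10, (50, 50)) for _ in range(21)]
subs[0][10, 10] = 5000                        # satellite trail / cosmic ray pixel
print(sigma_clip_stack(subs)[10, 10])         # ~100: the outlier got rejected
```
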
  23. Not sure how DSS works here. I think groups are calibration groups and it produces a single image at the end? If so, then continue using that approach. If you get three separate images as the result (provided that you used 3 groups), then use one group (but I'm not sure about calibration - can you add different calibration files to one group?)
  24. Not the best way to do it. Ideally you want all your subs stacked into a single stack. You can create a "mini stack" out of already stacked data, but chances are it's not going to be optimally stacked, especially if you captured a different number of subs on different evenings. Imagine you have 3 subs from the first evening and 5 subs from the second - you create a stack for each night and you will have roughly 1/3, 1/3, 1/3 in the first stack and 1/5, 1/5, 1/5, 1/5, 1/5 in the second. Now you combine those two, and you will end up with this: 1/6, 1/6, 1/6, 1/10, 1/10, 1/10, 1/10, 1/10 (the per-night weights, each divided by two) - so subs from the first night are "weighted" more in the result than subs from the second evening, where in reality you want the weights to be close to 8x 1/8. PI has weighting as an option, and it works better if you put all your subs "in the same basket" rather than create separate averages and then weigh them against each other. If for some reason you don't have the individual subs and only the final stack (still linear, without any processing), then yes - do it like that - create a "mini stack" out of the 2 or 3 stacks from each night. The result might not be optimal, but it will still improve things.
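
The weight arithmetic from that example, as a two-line sketch:

```python
# Effective per-sub weights: averaging two night stacks vs one pooled stack.
n1, n2 = 3, 5
stack_of_stacks = [1 / n1 / 2] * n1 + [1 / n2 / 2] * n2  # averaging the two night stacks
pooled = [1 / (n1 + n2)] * (n1 + n2)                     # all subs in one stack
print(stack_of_stacks)  # [0.1667 x3, 0.1 x5] - first night over-weighted
print(pooled)           # [0.125 x8] - the weights you actually want
```
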