Everything posted by vlaiv

  1. Anthropic principle is rather simple to understand. Imagine you win a lottery with very small odds of winning. You might say - what are the odds of me winning the lottery? How lucky am I? But in reality - someone had to win it, and whoever won it can ask themselves the same question. One person, however unlikely, will certainly win the lottery. Similarly - a universe with a given set of physical laws, however unlikely that particular version is, will happen. If it produces us - we have the right to ask, why us? Or rather - why is it the way it is, and the answer is - because it is a random thing.

     On the other hand - there is a different way of looking at "fine tuning". We can think of it in terms of thermodynamics. Imagine a box of gas with a bunch of molecules whizzing around. If the gas molecules are evenly distributed throughout the box, we call that a high entropy state. If all the molecules are in fact in one part of the box - say the left half of it - that is a low entropy state. Now, what is important to note is that any given microstate has the same probability, but not all states are equal. There is a particular "quality" to all the molecules being on the left side of the box. It is an actual physical property of such a configuration that distinguishes it from other configurations. In this case - it is quality of energy, or the level of orderliness in the box.

     Similarly - while all different configurations of the universe are equally probable - not all have equal "quality". Here we must be careful - in fine tuning we ascribe "significance" to this quality. Significance is a totally nonphysical thing and comes from our mind - from what we value. On the other hand, quality is something that can be mathematically described - in a similar way to how we describe quality of energy: the ability to do more work for the same amount of energy means higher quality energy. Is that better or worse? Is it significant? That is totally up to interpretation, but we must agree that our universe is a high quality one - compared to the average universe. However - there are higher quality universes by the above definition, and the fact that we live in a high quality universe, in my view, does not warrant ascribing significance to it in the way of "fine tuning". Just acknowledging that we live in a higher quality universe than the average or median universe is enough, me thinks.

     I don't understand why people shy away from a designed universe in discussions. I'm not talking religion here. There are many forms of universe by design, and if we don't explore or discuss such an idea - it's our loss. Just earlier this evening - I defended the position that mathematics = existence. In a sense, we could say that existence is generated by mathematics, or rather that the two are equivalent. Maybe not design - but it certainly is "creationism" - although not in the common sense of that term (far from it).
  2. Time flows the same for all observers in the same frame of reference. If none of the clocks move with respect to each other and spacetime is bent the same for all of them (for example - they are all on the surface of the Earth) - then time runs the same for them. We can use clock synchronization https://en.wikipedia.org/wiki/Einstein_synchronisation to ensure that all clocks in our frame of reference have the same t0.
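     For reference, the synchronisation convention itself fits in a few lines of code. This is just a minimal sketch of the midpoint rule from the linked article - the function name and the numbers are mine, for illustration only:

```python
# Einstein synchronisation: a light signal leaves clock A at t_a,
# reflects at clock B, and returns to A at t_a_return. Clock B is
# synchronised with A if, at the reflection event, it reads the midpoint time.
def sync_offset(t_a, t_a_return, t_b_at_reflection):
    """Correction to apply to clock B so that t_B = t_A + (t_A_return - t_A) / 2."""
    midpoint = t_a + (t_a_return - t_a) / 2
    return midpoint - t_b_at_reflection

# Example: signal sent at t_a = 0 s, back at 2 microseconds;
# clock B read 1.3 microseconds at reflection -> B is 0.3 us fast.
print(sync_offset(0.0, 2e-6, 1.3e-6))  # -3e-07
```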
  3. Talk bubble thingy - how many posts you have (only posts in astro related sections count toward your post count). Green + thingy - how many likes and other reactions your comments have. It is a sort of "reputation" system. A low post count can indicate a novice member (not necessarily - some folk simply don't engage in many discussions - so check the join date as well if uncertain), and a high "like" count can mean that the person contributes to the forum in one way or another (people that post images accumulate a lot of likes, people that give useful advice also get likes, people that post jokes or funny posts get laughs as well).
  4. I've been running Windows 11 on my main (production) machine for quite some time and I can't complain. I got used to it fairly quickly. My previous system was Windows 7, but since the motherboard died on me - I decided to upgrade the whole machine, so I got hardware compatible with W11 and installed that. It's been stable and has worked well ever since. As for older hardware after W10 support ends - well, there will be an increase in Linux based machines, I suppose. Why throw away hardware that works if it can be converted to something else with either a Linux server or a Linux desktop distro?
  5. No, I haven't - thanks for the heads up! I think I'm safe though? It would be very inconvenient since it is my boot/OS drive.
  6. The mount moves from east to west as it tracks - and it is supposed to be in balance. If there is a slight imbalance - the east side being heavier than the west (and which side is which will depend on where the scope is pointing - east side of the pier or west side of the pier - and whether the extra weight is on the "scope" side or the "counterweight" side) - it will act against the motors - it will "eat up" backlash in the worm gear.

     If the mount "overshoots" - because of a too aggressive correction or because it is leading at that moment (compared to ideal tracking) - and the mount is perfectly balanced, there will be backlash in the system and a pulse to bring it back won't actually bring it back - it will just "eat up" some of the backlash. But if the mount is always "leaning" against the worm gear because it is not perfectly balanced - any change in RA speed, either a forward correction (faster tracking) or a backward correction (slower tracking), will take effect instantly. This is not true if the setup is west heavy.

     In the diagram above, in the top example - the object is "leaning" against the side that is pushing it, so it does not matter if the system is slowed down or sped up - it will still be in contact with that side. In the lower part of the diagram - the object is "leaning" against the opposite side. Here, if we slow down - it will stay at that side - but if we speed up - it will start trailing - which is not good.

     Just a small imbalance is needed - but you also need to lower the guide speed. Too high a guide speed will cause too much jerk, which is not good. I use x0.25 sidereal.
  7. I'm not entirely sure that is right. I think that in some instances you pay much more than the thing is worth. Here is an example - the EQ6-R costs as much as a 50cc moped (give or take). Compare by any metric and you will find that the 50cc moped is better value for the money - cost of materials, cost of manufacturing, advanced materials, precision manufacturing - you name it. Not to mention things like EQ5 vs EQ5 Pro goto: 2 stepper motors + some plastics and a printed circuit board or two (total less than 50 quid) - sold for 400 quid more.
  8. Maybe the worm wheel is made out of very soft material (it usually is, because there is a large difference in wear and tear between the worm and the worm wheel) and when it is pressed onto a shaft - it gets bent out of shape because the shaft is not very round? Edit: maybe they are both round but mounted a bit off-center for some reason? (Assembly being "low skilled" work?)
  9. There are two sources of backlash in the HEQ5/EQ6 drive mechanism. The first is the motor gear train, which is replaced by the belt mod, and the second is the worm / worm gear engagement, which can be tuned out to some extent. The belt mod removes the first source but does nothing for the second. Ideally, if you want to completely remove backlash - you'd need another mod: adding spring load or magnetic load to the worm / worm gear engagement.

     With my HEQ5, I can't completely tune out the worm / worm gear backlash because the worm gear seems to be not quite circular but squished a bit. If I tune it at one point - it will either bind or have some play at 90 degrees to that. This happens both in RA and DEC - but I've found a way around it. For RA it is not as important if you preload your mount properly (just a bit of preload is needed - slightly east heavy) - that backlash is far from the drive gear and guide corrections won't disturb it. With DEC, I can't use the above trick as it is not constantly moving, but there is another trick. As backlash depends on the part of the circle - I just set up DEC in such a way (using the clutches and rotating the OTA in DEC) that I'm at a good part of the circle, without issues for the duration of the session (it gets tricky when there is a meridian flip, as DEC turns by 180 degrees - but if the error is equal - you should land on a good section again).
  10. Ok, so if I understand correctly now - you have old sessions without flats but with stacked data and with separate subs (which are sort of useless now that you don't have flats, right?), and you wonder if it would be a good idea to use those stacked masters with your current data. Well, I'd say it is very easy to try out. It would be better to stack all the data in one stack - but since you are lacking flats - that is a no go. Take your current stacked dataset and do the following (see the sketch after this list):

      1. keep one copy of just the new data
      2. make a mix of old and new data in 1:1 ratio
      3. make mixes of old and new data in several more varying ratios - based on, say, total imaging time or "perceived" quality - maybe even let the software determine the weights

      Then simply compare the results and select the one that looks best.
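      A minimal sketch of steps 2 and 3 in numpy - the function name, weights and stand-in arrays are all mine, just to make the mixing idea concrete:

```python
import numpy as np

def mix_stacks(old_master, new_master, w_old, w_new):
    """Weighted average of two already-stacked masters."""
    return (w_old * old_master + w_new * new_master) / (w_old + w_new)

# Stand-ins for the old and new stacked datasets:
rng = np.random.default_rng(0)
old = rng.normal(100, 5, (100, 100))
new = rng.normal(100, 3, (100, 100))

mix_1to1 = mix_stacks(old, new, 1, 1)         # step 2: 1:1 mix
mix_by_time = mix_stacks(old, new, 120, 300)  # step 3: weight by total imaging minutes
```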
  11. That really depends. Different software is written in different ways. One piece of software might optimize read performance and avoid saving intermediate files to disk. Another will do a lot of random reads of small chunks and write intermediate files to disk. There can be a few orders of magnitude difference in speed between such software on a spinning disk because of this. HDDs do suffer on small random read/write performance, as each operation involves a seek of position by the head. The larger the drive - the more so. It also depends on whether the disk is empty or full - or rather on which part of the platter the data is being stored. Outer parts require much larger head seek travel and hence have larger latency, which adds up over a bunch of small reads/writes.
  12. I'm sort of confused about what you are actually asking. I'll throw out some thoughts that I think might be applicable, so let me know if I got it right:

      - stacking flats from multiple sessions to produce a super master does not make sense, since flats are per-session calibration files. Unless you can guarantee that your setup did not change between sessions (like an observatory pier, without taking things apart, and with a precise, repeatable electronic focuser) - you should not do it, as dust particles and vignetting won't align.
      - stacking darks from multiple sessions is totally doable and, while it will suffer from the same things described above - in principle it should not matter, as darks tend to have a very uniform signal (if cooled to the same temperature and from a well behaved sensor). My only advice would be to weight the final stack of stacks according to the number of subs each has. Say that you want to stack one stack made out of 10 subs and another made out of 40 subs. The final stack should be (first * 10 + second * 40) / 50. It's a bit like "undoing" the initial averages, summing things up and doing a global average (see the sketch after this list).
      - lights behave as described above and I would avoid making a stack of stacks if at all possible.
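      Here is a minimal sketch of that "undo the averages" weighting (the function and variable names are mine, for illustration only):

```python
import numpy as np

def combine_masters(masters, sub_counts):
    """Weighted average of master stacks, weighted by the number of subs in each."""
    weighted_sum = sum(n * m for n, m in zip(sub_counts, masters))
    return weighted_sum / sum(sub_counts)

# Stand-ins for two master darks, one from 10 subs and one from 40:
rng = np.random.default_rng(0)
master_a = rng.normal(100, 5, (50, 50))
master_b = rng.normal(100, 5, (50, 50))

final = combine_masters([master_a, master_b], [10, 40])  # (a*10 + b*40) / 50
```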
  13. I have a TrueNAS instance running with 3x2TB WD drives. It's using ZFS raid. I have already had two instances of a single HDD failing that I easily recovered from - simply by replacing the drives one by one with new ones (this usually gives me the opportunity to increase the array storage size, as I replace all the drives once the first fails - the others are probably similarly worn out).
  14. Depends on the stacking method and the number of subs in the batches. Look at this for example: 1, 2, 3, 4, 5 - the average of that is 3, right (15/5)? Take the first two - the average is 1.5; take the remaining three - the average is 4. Average those two and you get 5.5 / 2 = 2.75, and 2.75 != 3. A similar thing happens with noise, and you get different SNR depending on the way you stack - not to mention sigma clip or quality weighting - which work differently depending on what is being stacked, so a sub that is low quality might end up in a partial stack (because the "surrounding" subs are of lesser quality as well, so it does not look too poor) while it might get discarded if we stacked everything together (too low quality compared to the average of all subs).
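      A quick check of that arithmetic, if you want to convince yourself:

```python
import numpy as np

data = np.array([1, 2, 3, 4, 5])
print(data.mean())            # 3.0 (15 / 5)

batch1 = data[:2].mean()      # (1 + 2) / 2 = 1.5
batch2 = data[2:].mean()      # (3 + 4 + 5) / 3 = 4.0
print((batch1 + batch2) / 2)  # 2.75 != 3.0
```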
  15. I don't think you can update to a new version, as it is a hardware thing - you'd need a motherboard with the new chipset. Much like USB 2.0 vs USB 3.0 (and other versions) - it is an upgrade to the protocol / frequency that is implemented in hardware.
  16. I think that the difference is mostly due to the PCIe version used by the drive and the NVMe port. I matched mine at PCIe 4.0, while yours is probably PCIe 3.0 (and that alone makes an x2 difference in max transfer speed).
  17. If you really want to speed things up - get a 256 or 512GB NVMe SSD to be your "workspace" drive. Keep the 8TB drive for backup purposes - store your files there when not processing, but use the NVMe SSD for processing.

      With mechanical disks - the bottleneck is the mechanical side of things - usually capping them at ~100 MB/s transfer speeds. With SATA SSDs - either the controller or the SATA interface is the bottleneck - usually capping them below the 600 MB/s transfer speed SATA3 is capable of. With NVMe you get very high transfer speeds, up to 3-6 GB/s (depending on PCIe version), and very quick access times.

      I have a SATA SSD and an NVMe SSD in my machine at the moment: a SAMSUNG SSD 830 (SATA) and a Samsung SSD 980 PRO (NVMe). [Benchmark screenshots of the two drives - from top to bottom, you can roughly read the results as: optimized work with large files, unoptimized work with large files, optimized work with small files, unoptimized work with small files.] In fact, I also have a small 2.5" HDD attached to the SATA interface. [Benchmark screenshot of that drive.] (It is not very fast by HDD standards - it is a 5400rpm laptop type drive.)

      Now, we can see that the NVMe SSD is literally from x50 up to x500 faster than the HDD - depending on the operation performed. That is an enormous performance improvement. Today - it really does not make sense to use an HDD for anything other than mass storage / backup storage - usually in the form of a NAS or SAN (raid and all - for fail safety).
  18. I would do it like this (see the sketch after this list):

      - background removal from L, R and G
      - scale the channels to the same exposure length by dividing by exposure length in minutes
      - do pixel math where B = L - (R+G). I'd use ImageJ for this, as I have confidence that it won't do anything "funny" to the data (like normalization of the result, or clipping, or anything like that)
      - finally, set some black point on the data (add a small DC offset to all channels so that negative values are not clipped by processing software)
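      If you'd rather script it than use ImageJ, the same pixel math is only a few lines of numpy. This is a minimal sketch with stand-in arrays and made-up exposure lengths, just to show the steps:

```python
import numpy as np

# Stand-ins for background-removed channel stacks:
rng = np.random.default_rng(0)
L_stack, R_stack, G_stack = (rng.normal(100, 5, (64, 64)) for _ in range(3))

# Scale each channel by its exposure length in minutes (values are made up):
L = L_stack / 30.0
R = R_stack / 10.0
G = G_stack / 10.0

B = L - (R + G)         # synthetic blue channel
B = B - B.min() + 10.0  # small DC offset so negative values are not clipped
```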
  19. I think that StarX removed Alnitak completely, while Starnet preserves non-stellar detail better. @drjolo What is your view on this after conducting the comparison? Which one did you find better?
  20. Let's quantify things a bit so that everyone can understand the impact of different conditions on the final result. I'm going to use a simplified calculation for the sake of argument - one that does not treat the whole SNR, but simply translates loss of signal into additional imaging time. Say that some conditions lower your signal to 66% of the original - that simply means you need to image for x1.5 as long to get the same amount of signal.

      The first thing you should ask yourself is - do I think a 10% improvement in QE of a camera is a big deal or not? Say that you need to select between two cameras - 60% peak QE and 66% peak QE - what is the time difference? The time difference is also x1.1. You would need to image x1.1 longer with the lower QE camera versus the higher QE camera. Let's look at some other things (a sketch of the arithmetic follows at the end):

      1. Altitude above horizon. Say that we have extinction of 0.16 magnitudes per airmass, and airmass = 1/sin(altitude).
      70 degrees: ~1.0642 airmass = ~0.17 magnitudes = ~x1.17 more imaging time, or 17% more
      60 degrees: ~1.1547 airmass = ~0.185 magnitudes = ~x1.19 more imaging time, or 19% more
      45 degrees: ~1.4142 airmass = ~0.2263 magnitudes = ~x1.23 more imaging time, or 23% more
      30 degrees: 2 airmass = 0.32 magnitudes = x1.34 more imaging time, or 34% more

      2. Transparency. Say that instead of AOD 0.1 you decide to image under AOD 0.3 skies. AOD is aerosol optical depth: 0 is a completely clear sky, while 0.5 and above is murky. [Current AOD forecast map with scale - not reproduced here.] That is an increase of 0.2 AOD, or ~0.2172 magnitudes of extinction - ~x1.22 more imaging time, or a 22% increase.

      3. Improper sampling. Say that instead of sampling at 1.5"/px, you decide it is fine that your system samples at 1.1"/px and you don't worry about oversampling. Here the signal per pixel is proportional to the pixel's surface on the sky, so we have (1.5/1.1)^2 = ~x1.86, or an 86% increase in imaging time.

      In the end, remember - these effects combine and compound to produce the final result. Do you still think that a 10% difference in sensor QE is a big deal?
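      All of the numbers above come from the same two formulas: airmass = 1/sin(altitude) and time factor = 10^(extinction / 2.5). A minimal sketch that reproduces them, assuming (as above) 0.16 magnitudes of extinction per airmass:

```python
import math

def airmass(altitude_deg):
    """Plane-parallel approximation: airmass = 1 / sin(altitude)."""
    return 1 / math.sin(math.radians(altitude_deg))

def extra_time_factor(extinction_mag):
    """Factor of extra imaging time needed to make up extinction_mag of signal loss."""
    return 10 ** (extinction_mag / 2.5)

K = 0.16  # assumed extinction in magnitudes per airmass, as in the post
for alt in (70, 60, 45, 30):
    print(alt, round(extra_time_factor(K * airmass(alt)), 3))
# 70 -> ~1.17, 60 -> ~1.19, 45 -> ~1.23, 30 -> ~1.34

print(round(extra_time_factor(0.2 * 1.0857), 3))  # +0.2 AOD (~1.0857 mag per unit) -> ~1.22
print(round((1.5 / 1.1) ** 2, 3))                 # sampling at 1.1 instead of 1.5"/px -> ~1.86
```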
  21. Synthetic B (let's call it that for simplicity) is not better than regularly captured B. It does not need to be. It just needs to be equally good, and since you don't need to shoot it - it frees up imaging time for something else (like more lum).

      No, not pointless. We stack to improve SNR. Stacking the same sub 10 times will not improve SNR, so that is pointless. We take R, G and B to be able to reconstruct color information (we shoot luminance for SNR and luminance information). If we can reconstruct color information equally well from R, G and L-(R+G) - it is not pointless. Sure, the noise in B will be a combination of the noises in R, G and L - but it won't have a visual impact. It is the same as saying that the noise of synthetic luminance is derived from R, G and B when we extract it from straight RGB and process it separately - that doesn't cause any issues either. Our brain won't see patterns if any arise from that - noise will be noise.

      The reason I mentioned this theory is rather simple. You seem to have plenty of data, some of which is in LRGB format, right? Once you have already recorded LRGB - it is an easy experiment to use just L, R and G, reconstruct B, and compose an image. You also have the LRGB version to compare the results to. Maybe you'll find that it actually works, so it will save you some imaging time in the future? Who knows? You can always earn your fortune reselling those blue filters at a profit. If you think that is the problem, hold on until I start talking about how cheap planetary absorption filters are a better option for LRGB imaging than expensive interference ones. Something like a regular L filter, Wratten #8 and Wratten #29 is all you need to produce an accurate color image. Bonus is that you don't get the nasty reflections that often come with interference filters.
  22. It's not that you shouldn't - it's more that you don't need to. If you think about it - we need 3 components to get the image. A 4th component is additional information that we might not need. If 3 components are spaced properly to cover the whole 400-700nm range - that is pretty much all one needs to be able to produce a color image.

      I'm not going to bore you with math details and matrix transforms - there is a much simpler way to see this. Say we have filters spaced as 400-500, 500-600 and 600-700nm - being B, G and R (and if you look at most filters - you'll see that they are spaced roughly so), with luminance being 400-700nm. It is clear that if we add B+G+R we get what? The whole range, so we can roughly write B+G+R = L (as far as signal goes, not SNR). This can easily be rearranged as B = L-(G+R). The blue signal is already contained in the other three in some way. In fact - you can choose to shoot any three of the four components (people have done so with RGB without L) and you'll still be able to produce the same color image (roughly the same - it depends on the filters used - if they are equally spaced, then sure, the same, but if there is some overlap and some gaps - then it will be roughly the same, with the difference subject to the size of the gaps and overlap).

      Why blue? Several reasons:
      - blue scatters the most in the atmosphere
      - blue is attenuated the most in the atmosphere
      - blue often has the weakest QE on the sensor

      Overall - blue is probably the hardest of the three - R, G and B - to record. This might not be so for some types of targets - like emission nebulae dominated by Ha and Hb - but for most other targets it will be. I think that a better image can be had if one uses the time otherwise spent on the blue channel to improve SNR of the L channel.