vlaiv

Members · 13,265 posts · 12 days won
Everything posted by vlaiv

  1. The more I think about this argument, the more I'm convinced that it is in fact true, and a very good explanation of why the speed of light must be the same in all directions. I think the same can be concluded (or the above argument augmented) from the relation of the speed of light to the magnetic permeability and electric permittivity constants (c = 1/sqrt(mu0 * epsilon0)).
  2. I'd blame the things with X in their title. Why not try simply removing those last three entries from your workflow and see what you get? Just do Crop and DBE to start with.
  3. I did something like that - but I waited for the target to pass the meridian. This was partly influenced by the fact that the city center was towards the east and LP was much worse in that direction, so it made sense to shoot only towards the west.
  4. Well, the first thing a stacking algorithm does is "normalize" frames. A sub shot close to the horizon will have lower signal and higher LP - so the algorithm multiplies the data in the image to make the signal equal to that of a sub shot directly overhead, and subtracts some DC offset from the whole image to make the background equally bright. After that is done you virtually have two identical subs - except for SNR: the one shot close to the horizon will have different SNR than the one shot overhead.

It depends how you look at it. The data will be of lower quality, and sure - if you could shoot higher quality data at the same time, then you are wasting your time shooting lower quality data. But you can't do that - at least not of the same target, as the target is close to the horizon and you can't do anything about that. There are, however, several things that you can do:

1. Don't shoot a target while it is close to the horizon. Wait for it to be properly placed for the duration of the session. Decide on your free imaging time. Say that due to commitments you have free imaging time between 9pm and 2am - that is 5 hours of imaging time. Choose a target that will cross the meridian smack in the middle of that window - at 11:30pm. This ensures that you get the best SNR for your subs on that target. You may modify this if your LP has a strong bias in one part of the sky - then try to balance: shoot the target when it is in the darker part of the sky, but not overly low down at the horizon.

2. Work in parallel on several targets. Maybe you have 6h of imaging time per night and a few spare nights in a row? Image 3 targets - each for 2h when they are highest in the sky. You need to select targets that "follow" each other in the sky, and work through them as they approach their highest point (see the altitude sketch after this post).

In practice I try to image the target that is best positioned, taking into account LP levels and position in the sky. I also choose nights of good transparency for imaging. I've never gone for multiple targets on the same night spread over a couple of nights - but that is because I haven't had a permanent setup so far, or commitments did not allow it. With the obsy, I'm certainly going to go that route.
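To see when a target is "properly placed", it is enough to compute its altitude around the meridian. A minimal sketch using only the standard altitude formula - the latitude, declination and hour angles below are made-up illustration values:

```python
import math

def altitude_deg(lat_deg, dec_deg, hour_angle_deg):
    """Altitude from site latitude, target declination and hour angle:
    sin(alt) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(HA)."""
    lat, dec, ha = (math.radians(v) for v in (lat_deg, dec_deg, hour_angle_deg))
    sin_alt = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    return math.degrees(math.asin(sin_alt))

# Hypothetical site at +45 deg latitude, target at +30 deg declination.
# Hour angle 0 = meridian crossing; 15 degrees of hour angle = 1 hour.
for hours in range(-3, 4):
    print(f"{hours:+d} h from meridian: altitude {altitude_deg(45.0, 30.0, 15.0 * hours):5.1f} deg")
```

Centering the session on hour angle 0, as suggested above, keeps the average altitude (and thus SNR) as high as the night allows.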
  5. In theory, the stacking algorithm should take care of that. Part of the stacking routine is sub normalization, which should account for at least two (if not more) things that happen:

1. Signal is weaker at lower altitude - and this can be significantly so, as there is more air mass.
2. LP is often higher at lower altitudes - again related to the number of air masses and overall air density.

We can write this in the form x*Signal + LP, which is a linear equation. Multiple subs allow the LP part (DC offset) and x (multiplicative constant) to be equalized - that forms stack-compatible frames, each with the same signal strength and the same LP level (a sketch of this fit follows this post). Advanced versions handle LP gradient as well - they "normalize" the LP gradient, or make it equal across subs. As the target moves across the sky, the FOV rotates with respect to the horizon, so any LP gradient will rotate as well, since it is often fixed with the brighter part closer to earth or in the direction of a major LP source (like a city).

The above happens if you mix subs from separate nights as well - one night can have poorer transparency than the next, so subs will be impacted in much the same way.

What remains is to stack subs of different SNR efficiently - and this is where there is much room for improvement in current software. Most packages have a very rudimentary way of handling different SNR, like a per-sub weight, but that is far from optimal.
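A minimal sketch of that normalization, assuming numpy and two registered subs as 2-D arrays (all names and numbers here are made up for the demo): fit sub = x*reference + LP by least squares, then undo the scale and offset.

```python
import numpy as np

def normalize_to_reference(sub, reference):
    """Fit sub = x * reference + LP, then remove the DC offset (LP)
    and the multiplicative constant (x) so the sub matches the reference."""
    x, lp = np.polyfit(reference.ravel(), sub.ravel(), 1)
    return (sub - lp) / x

# Fake "overhead" reference and a low-altitude sub with 60% of the
# signal plus a constant LP pedestal and some extra noise.
rng = np.random.default_rng(0)
reference = rng.normal(1000.0, 50.0, size=(100, 100))
low_sub = 0.6 * reference + 300.0 + rng.normal(0.0, 5.0, size=(100, 100))

normalized = normalize_to_reference(low_sub, reference)
print(reference.mean(), normalized.mean())  # backgrounds now match
```

A real implementation would use a robust fit (e.g. with sigma clipping) so stars and outliers don't skew the line, and could fit a plane rather than a constant to handle the LP gradient mentioned above.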
  6. I have to be more specific about this statement. If we accurately measure (somehow) the distance between stations and space them equally, then we can use the above method to measure the speed of light (although not very efficiently - one might call it a brute force method). If we don't measure the distance between stations and instead use trigonometry to ensure that the distance is the same but unknown, then we can't measure the speed of light - but we can check whether it is equal in different directions.
  7. Well, the above does not actually measure the speed of light in any direction. It is intended to show whether the speed of light is equal in both directions or not. Showing that it's equal, combined with a two-way measurement, ensures that we know the speed in any direction (as it is the same) - which is as good as measuring the one-way speed. There might be a catch in the setup though - how do we determine that the stations are equally spaced? For great enough distances we must use the speed of light - at least so it seems. I've also come up with a modified setup that does not need any such measurement, but rather simple trigonometry. The stations won't be aligned on a line, but placed at the corners of an equilateral triangle (for simplicity). The middle station will use a mirror to bounce lasers towards the respective stations, and all will use a simple 60 degree angle as an alignment tool. The actual side length of the equilateral triangle is not important either, if we perform a series of tests with ever increasing delay between gate openings.
  8. Do you have your OAG properly focused? If you are out of focus, you won't see any stars. You can try a very long guide exposure (do a half-minute exposure with the OAG camera) just to see if you can spot very faint doughnuts - those are out-of-focus stars.
  9. t0 is just the time at which Station 3 sends the sync pulse; it's not really important, as it cancels out in most of the calculations. The laser light can be continuously on. It will either be blocked at the gate at Station 2 or the gate at Station 4. Gates must be opened at just the right time, for a short duration (think of your example with spinning notches - open gate = aligned notch), for light to pass through to the other side. There is no need to track when the light pulse starts or ends - it is just the detection of light that counts: either it passes at some moment or it does not pass at all.
  10. Well, if I do the math, it turns out that it will only work if the speed is equal in both directions.

Let a be some part of the speed of light - the speed of light is set to 1 and the distance between stations is set to 1 (for ease of calculation), so in one direction the speed of light is a, while in the opposite direction it is 2-a (say 0.8c and 1.2c - they must add up to 2c so that the round-trip measurement of the speed of light always gives c).

t0 = initial time
ta1 = t0 + 1/a (time of first gate at Station 2)
ta2 = t0 + 1/a + dt (time of second gate at Station 2 - after some pre-agreed time dt)
tb1 = t0 + 1/(2-a) (time of first gate at Station 4)
tb2 = t0 + 1/(2-a) + dt (time of second gate at Station 4)

i1 = tb2 - ta1 = t0 + 1/(2-a) + dt - t0 - 1/a = 1/(2-a) - 1/a + dt (interval of time for light to travel in one direction)
i2 = ta2 - tb1 = t0 + 1/a + dt - t0 - 1/(2-a) = 1/a - 1/(2-a) + dt (interval of time for light to travel in the opposite direction)

At what dt will light make it in one direction? Set i1 equal to the travel time 2/a:
2/a = 1/(2-a) - 1/a + dt => dt = 2/a + 1/a - 1/(2-a) = 3/a - 1/(2-a)

Will it make it in the other direction with the same dt? Set i2 equal to 2/(2-a):
2/(2-a) = 1/a - 1/(2-a) + 3/a - 1/(2-a)
2/(2-a) = 4/a - 2/(2-a)
1/(2-a) = 2/a - 1/(2-a)
2/(2-a) = 2/a
a = 2 - a
a = 1

Yes - only if a = 1 and 2-a = 1 as well, i.e. the speed of light is equal in both directions.
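A quick numerical check of that algebra, keeping the post's assignment of speeds (c and station spacing normalized to 1, as above):

```python
# For each trial speed a, pick dt so that light makes it through in one
# direction (i1 = 2/a), then see how far the other direction misses.
def gate_mismatch(a):
    dt = 3.0 / a - 1.0 / (2.0 - a)        # dt that satisfies i1 = 2/a
    i1 = 1.0 / (2.0 - a) - 1.0 / a + dt   # gate window, one direction
    i2 = 1.0 / a - 1.0 / (2.0 - a) + dt   # gate window, other direction
    return i1 - 2.0 / a, i2 - 2.0 / (2.0 - a)

for a in (0.8, 0.9, 1.0, 1.1, 1.2):
    m1, m2 = gate_mismatch(a)
    print(f"a = {a}: one way off by {m1:+.4f}, other way off by {m2:+.4f}")
```

Only a = 1 zeroes both mismatches, matching the conclusion above.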
  11. Yes, of course - both the speed of the light to be detected and the sync pulse change depending on direction. The question is: does this cancel out?
  12. Ok, here is another experiment - let's see if we can spot the problem in this one. The setup consists of 5 stations, all on the same line and equally spaced (well, Stations 1 and 5 need not be, but for simplicity let's say they are).

Stations 1 and 5 only need to do two things:
a) shine a laser, one towards the other
b) record whether laser light is detected

Station 3 only needs to periodically send a light pulse towards Stations 2 and 4 (the sync pulse).

Stations 2 and 4 need to do the following: they each have two gates. Upon detection of the sync pulse, they each open one gate for a short period of time and then close it. Then they both wait an agreed-upon time period and repeat the opening, for a short period of time, with the second gate. The only difference is which gate is opened first. The gates are of course aligned with the line and sit in the laser paths.

The question is: will Stations 1 and 5 both either detect or not detect laser light, depending on the agreed-upon time, if the speed of light is asymmetric?
  13. Well - then it is easy to explain why it is a two-way experiment after all. Rotation will always tend to "close" the gap. A smaller forward speed will close the gap more, which implies a higher return speed (a straighter path means light must travel faster to get through), and vice versa - a higher forward speed will impact the gap less, so the return speed can remain smaller.
  14. Yes, you are right. The rotation direction must match the order of the notches for light to pass in one direction. This also rules out the symmetric case I was talking about, as this "light filter" seems to be unidirectional.
  15. To be honest, I'm slightly confused about all of it, and here is why. At first I thought that the forward and backward speeds "add up" to the total two-way speed, but on closer scrutiny that does not seem to be the case. Let's analyze what is happening.

In the two-way speed experiment, the two speeds need to add up to a constant value, so if the speed in one direction is greater, it must be smaller in the opposite direction, right? (Otherwise we have the same speed in both directions.) That is the case we are claiming we can't distinguish.

If we start rotating the machine from one end, we can distinguish two cases - we can rotate it either clockwise or anticlockwise. These two cases differ in the effect they produce: one of them will reduce and the other will enlarge the angle between notches, due to the limited speed of light (the limited propagation speed of rotation, mediated by EM interactions between atoms). If we rotate so that the angle reduces, we will measure a higher return speed of light. If we rotate so that the angle increases, we will measure a lower return speed of light. In either case, the angle difference will be larger if the forward speed is smaller (the start of the machine will have more time to spin while the rotation propagates to its end).

We also have the following parameters to play around with: the driving of the rotation does not need to be at one of the ends - it can be placed arbitrarily along the shaft of the machine, and if the propagation speed differs in each direction, there will be a difference in how each notch leads or trails depending on all of that. We can also do a "symmetric" setup - the same machine, driven from the center (or from each of the sides as a control), with two light sources and two detectors measuring the speed of light in each direction. We would then need to explain why the angle from left to right is larger than from right to left if there is an asymmetry in the speed of light between those directions.

Finally, what we haven't considered is the fact that the notches are in non-inertial frames of reference, but I can't imagine how that could be important, as we can reduce the rotation speed arbitrarily via precise notch separation and an increase in machine size.
  16. Ok, I think I understand where "the catch" is in the case of the above experiment. It might not be so obvious because we have what we perceive as a "single piece of equipment", and our brain tells us that for one piece of equipment "the same conditions" apply - but if we rephrase the experiment in a different way, it becomes obvious that we need to do slow clock synchronization. Say that instead of rotating discs with notches in them, we simply have mechanical shutters that are independent but set to open "at a certain interval, one after another". Such devices would need to be synchronized to actually open/shut in sequence with a certain delay, as viewed from our frame of reference. Implicitly, such synchronization happens when we start to spin our device - that motion is in fact not transferred instantaneously from one end to the other; it is, after all, mediated by the same mechanism of EM force between atoms in the machine, so it takes some time for the motion to arrive at the other end.
  17. I don't think there is a problem. Nothing involved in the measurement is relativistic in its nature, is it? The length of the assembly can be measured when everything is standing still, and so can the angle, and the angular speed is not super fast - 500 Hz, or 30,000 rpm, is quite easily achievable. At those speeds and at those distances, we can't really speak of a change in the order of events - so if we say that a photon enters one slit and later exits the second one, that will in fact be the order of events in the frame of reference where everything is stationary (except the photons whose speed we measure). Detection also does not depend on anything special - either we detect photons, if length, angle and rotation speed match the speed of light, or we don't.
  18. I think this is a very interesting idea. Varying the angle, rotation speed and/or distance can be used to "tune" the wanted speed - and then we can use that to either have light pass through or not, which is confirmation that we have selected the right speed. I'm not sure what the margin of error would be for such a measurement, but it could certainly rule out significant anisotropy. We don't need a star to do it - we can use a laser / LED diode, and this means that we could rotate the setup in any direction.
  19. Yep - a quantum thing. The momentum of a photon is h / lambda, or h * f / c in terms of frequency (higher frequency means higher momentum - that is why x-rays and gamma rays are dangerous: high energy / momentum). The energy of a photon is given by h * c / lambda, or h * f. Energy and momentum of a photon are related as E = p * c. A worked example follows this post.
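Plugging in numbers for green light at 500 nm (the wavelength is just an illustrative choice):

```python
# Evaluate E = h*c/lambda and p = h/lambda, then confirm E = p*c.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
wavelength = 500e-9  # 500 nm green light

energy = h * c / wavelength   # ~3.97e-19 J (~2.48 eV)
momentum = h / wavelength     # ~1.33e-27 kg*m/s
print(energy, momentum, energy / (momentum * c))  # last value is exactly 1
```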
  20. Round stars are in fact a consequence of equal random error in both DEC and RA. On a mount like that, it is a sign of good tracking performance - as equal DEC and RA error must be down to seeing, if the mount is stable and rigid. The DEC axis should not move at all, and therefore DEC error due to tracking should be 0. If DEC and RA errors are equal, then we can conclude that the RA tracking error is also very close to zero. What is causing bloated stars is optics + seeing.

On a low quality mount, round stars need not be indicative of good guiding performance. With a low quality mount, strong corrections in RA can cause vibration in DEC as well, if the frame is not rigid enough - which can result in equal random error in both axes, and that produces round but bloated stars.

In general, star FWHM is influenced by seeing, optics and tracking/guiding performance (or error in both axes, which need not be equal nor truly random - as you mention, trailing stars). The sketch after this post shows a common way these are combined.
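These three contributions are often treated as independent blurs and added in quadrature - a common approximation, not something stated in the post itself:

```python
import math

def total_fwhm(seeing, optics, guiding):
    """Approximate combined star FWHM (arcsec): seeing, optics and
    guiding error treated as independent Gaussian blurs in quadrature."""
    return math.sqrt(seeing**2 + optics**2 + guiding**2)

# Illustrative numbers: 2" seeing, 1.5" optics blur,
# 0.5" RMS guiding vs 1.5" RMS guiding on a poorer mount.
print(total_fwhm(2.0, 1.5, 0.5))  # ~2.55"
print(total_fwhm(2.0, 1.5, 1.5))  # ~2.92"
```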
  21. Yes. Any single wavelength will be better captured on a mono camera.
  22. If you take the number of pixels on your screen's diagonal (~2203 for 1920x1080) and divide it by the diagonal measured in inches, you should get one of the standard screen resolutions like 72ppi / 96ppi (pixels per inch).

You have what you have in terms of pixels in your image - say you have a 4144x2822 px image from that sensor (although it won't be quite that after stacking / processing and cropping stacking artifacts). You can print it at whichever resolution you want - say you want to print it at 300dpi. Then the resulting image will be 4144/300 = ~13.8" x 2822/300 = ~9.4". If you print it at 150dpi, you will have a twice as large image - 27.6" x 18.8".

So what resolution should you print it at? That depends on the intended use of the image. A bit of math follows (see also the sketch after this post). Humans usually resolve 1 arc minute, so we can use that as the pixel size. At what distance will the image be viewed? Will it be hand held and viewed at reading distance? That is about 20-30cm. Will it hang on the wall? That is about 50cm. Will it be a poster on the street, viewed from 2 meters away? In each case we can determine the print resolution needed for the image to look good.

Let's take the wall example - 50cm. At 50cm, 1 arc minute is 0.14544 mm wide - or 25.4 / 0.14544 = ~174 pixels per inch. In this case I would say that 150dpi is sufficient. For half that distance, the hand-held version, you'd want to use 300dpi (that is why those two are printing standards). For a poster 2m away, you can actually use only ~50dpi and it will look good at that distance.
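The same arithmetic as a small sketch - the viewing distances are the examples from the post, and the 1 arc-minute eye resolution is the assumption stated above:

```python
import math

MM_PER_INCH = 25.4

def required_ppi(viewing_distance_mm, eye_resolution_arcmin=1.0):
    """Pixels per inch needed so one pixel spans the eye's
    resolution limit at the given viewing distance."""
    pixel_mm = viewing_distance_mm * math.radians(eye_resolution_arcmin / 60.0)
    return MM_PER_INCH / pixel_mm

def print_size_inches(width_px, height_px, dpi):
    """Physical print size for a given pixel count and print resolution."""
    return width_px / dpi, height_px / dpi

for distance_mm in (250, 500, 2000):  # reading, wall, street poster
    print(f"{distance_mm} mm: {required_ppi(distance_mm):.0f} ppi")

print(print_size_inches(4144, 2822, 300))  # (~13.8, ~9.4) inches
```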
  23. To be honest, I had not heard the sad news, and I have no idea whether it is still in production and/or supported. My level of interest only went as far as "selecting" it as a possible upgrade, but I'm really far away from upgrading the mount (maybe a year or two - I need to finish the observatory first).
  24. It looks like it indeed is! They probably wanted to make eye positioning easy with this eyepiece, so they artificially shortened its otherwise long eye relief.
  25. That is quite odd. A 32mm Plossl has very generous eye relief - around 22mm. I never had problems with eye relief in that way. I did have the issue of too much eye relief when I used it with a Mak102 (something about the amplifying secondary leads to an increase in eye relief), but never too little of it. Granted - I don't wear glasses when observing.