Everything posted by Whirlwind

  1. This is worth watching, especially if you have AMD processors, as W11 appears to hit them very hard.
  2. Both time and space are an interpretation of the universe within our limited perception. Both space and time are linked to fundamental constants in a way we don't fully understand yet. It is much more likely that the universe is tied to a set of fundamental constants (rather than to our limited perception of space/time). As an abstract example (rather than the true science) - you cannot travel faster than the speed of light, but the way we perceive time does depend on your frame of reference. In your own frame of reference you age normally; however, your perception of the rest of the universe is that time slows down. This effect accelerates as you approach the speed of light, but you can never quite get there. You will see things not in your frame of reference as traversing time differently. Time can hence become infinite from a frame of reference if you could ever achieve the speed of light. From the frame of reference of a photon, 'time' could very well be zero (it never sees the universe age). Which, thinking about it, means a photon perceives being everywhere in the universe all the time... there's literally no time or space to a photon... I think.
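The time dilation described above can be put into rough numbers. A minimal sketch of the Lorentz factor, which grows without bound as v approaches c (the speed values chosen are purely illustrative):

```python
import math

def lorentz_gamma(beta):
    """Time dilation factor for a speed v = beta * c (0 <= beta < 1).
    Diverges as beta -> 1, which is why a massive object can never
    quite reach the speed of light."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

for beta in (0.5, 0.9, 0.99, 0.9999):
    print(f"v = {beta}c: clocks outside your frame appear to run "
          f"{lorentz_gamma(beta):.1f}x slower")
```

The factor is barely above 1 at half the speed of light, but climbs past 70 at 0.9999c, illustrating why 'time' tends to infinity in the limit.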
  3. There may be other causes. I think this came up in discussion before (on a Chroma filter IIRC) and I postulated that it could be Newton's rings, formed between a spherical surface and a touching surface next to it - for example the coverslip and the microlens layer of the CMOS chip. The effect is wavelength dependent, so it may only be seen in narrowband (in broadband the effects at each wavelength would merge). This should be testable as the equation is well known. The alternative I more recently thought of is that it is perhaps down to how interference filters are designed (Interference filters - Electronic Imaging - Bedford Astronomy Club). From my understanding, narrowband interference filters can have several layers of material in them. If the layers reflect some of the incident light then you can get light 'bouncing' back and forth between them. This could result in rings of slightly defocused light reaching the CMOS/CCD. This would depend more on the thickness of the material between the layers in the filter, so it will be harder to determine, as companies are less likely to release this information.
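The Newton's rings idea is testable because the ring radii follow the standard relation r_n = sqrt(n·λ·R) for a spherical surface of curvature radius R resting on a flat. A minimal sketch with assumed, purely illustrative values:

```python
import math

def newtons_ring_radius_mm(n, wavelength_nm, curvature_radius_m):
    """Radius of the n-th dark Newton's ring (reflection geometry):
    r_n = sqrt(n * lambda * R). Returns millimetres."""
    return math.sqrt(n * wavelength_nm * 1e-9 * curvature_radius_m) * 1000

# Assumed values for illustration only: an Ha band at 656 nm and a
# hypothetical 1 m radius of curvature on the coverslip surface.
for n in range(1, 4):
    print(f"ring {n}: {newtons_ring_radius_mm(n, 656, 1.0):.2f} mm from centre")
```

Measuring the actual ring spacing on a frame and solving back for R would show whether a plausible surface curvature is implied.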
  4. How about the Dumbbell Nebula, as that has a distinct OIII component? Have you checked that the filter wheel is cycling through every position correctly? For example, it may think it has 7 positions when it is actually an 8-position wheel, etc. You should be able to check this in daylight. If it travels in both directions, you want to check that it selects the same filter going backwards and forwards.
  5. With the exception that colour vision changes with age. Our lens naturally yellows as we age, but there can also be degenerative diseases that affect our perception of colour (e.g. Changes in Color Perception | MacularDegeneration.net). As such you may well find that some people 'boost' the contrast of their images to compensate (unknowingly). I would expect that in some cases colour contrast is boosted less by a younger person than by an older person for some colours. As such you can never really use a person's eye as a low-resolution spectroscope, because each one is likely to perceive colour slightly differently - in fact, arguably we have no idea how anybody other than ourselves perceives colour, because you can't plug yourself into someone else's brain. To get a precise colour position you'd need to use specific energy levels. Really, though, it doesn't matter much what the colours are. They are more a representation of the data and of what is going on in that location. As long as the data itself is not being altered, colour in the form we know it is just a perception - an alien species may not see it in the same way (let's say they see colour as changes in brightness gradients). You can make a star-forming region red, blue, or luminous yellow; as long as it is consistent it is still providing the same information. In the same way, narrowband colour representation is usually 'wrong' but still provides real information on the object.
  6. There are likely several factors here. We have effects like differently calibrated monitors, so colours can look subtly (or even wildly) different. Then we have our own limitations - there is obvious colour blindness, but also, as we age, I believe we tend to see colours differently. As such, as you age you might naturally adapt images to enhance elements so they are more vivid in certain colours, but to the individual, because their eye is filtering some colour, it looks more natural. The reality is that the only way to do 'true' colour is to agree a colour for each temperature of object through spectroscopy and then, when you have sampled enough objects in the image, apply that colour cast to the image. However, it does mean that any imaging must come with a lot of spectroscopic data.
  7. Ooof, well given the problems in the UK transport industry at the moment I'd expect it sometime around the New Year....
  8. The ADC isn't that important between CCD and CMOS overall. For bright photometric objects either should be fine, especially if it is for 'fun'. The bigger issue (which becomes more prevalent with faint targets or where changes are small, e.g. exoplanets) is that you want consistent noise on a frame-by-frame basis. Here CCDs currently still have the advantage. CMOS sensors have ADCs on their individual pixels whereas CCDs just have one. It means that the readout from a CCD is consistent, so when you create calibration frames the noise represented in them is the same as in the actual science image. In CMOS you can get variations at a pixel level (the proverbial walking noise effect that can sometimes be seen when stacked). You can remove this by dithering when imaging, but for photometry you don't really want to change pixels (ideally you want the target sitting on the same pixel(s) all the time, as it keeps your noise values as close to the same as possible).
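The walking-noise point can be shown with a toy simulation (all numbers assumed for illustration): a fixed per-pixel offset pattern survives a plain average of frames untouched, but averages down once each frame samples the pattern at a different (dithered) position relative to the aligned sky:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy CMOS-style fixed-pattern noise: a per-pixel offset in ADU (assumed values).
fixed_pattern = rng.normal(0.0, 5.0, size=(64, 64))

def stacked_pattern_std(n_frames, dither):
    """Std dev of the fixed pattern remaining in an average of n_frames.
    With dithering, each frame sees the pattern shifted to a random
    position relative to the (aligned) sky, so it averages down."""
    acc = np.zeros_like(fixed_pattern)
    for _ in range(n_frames):
        if dither:
            dx, dy = rng.integers(-8, 9, size=2)
            acc += np.roll(np.roll(fixed_pattern, dx, axis=0), dy, axis=1)
        else:
            acc += fixed_pattern
    return float((acc / n_frames).std())

print(stacked_pattern_std(25, dither=False))  # pattern survives (~5 ADU)
print(stacked_pattern_std(25, dither=True))   # pattern largely averaged away
```

The same mechanism is why dithering is at odds with photometry: the averaging only works because the target lands on different pixels each frame.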
  9. But it's not zero, and this is where the problem lies. Most astronomy filters have an OD value of 3 or 4, and for some science 5-6 (it's likely impossible to create one with no light leak). As such, if you have a very bright source (the fluorescent tube) compared to the background, then with an OD 3/4 filter enough light is likely to get through to be detectable. Stars, on the other hand, are much fainter in this regard, so practically you may not need an OD 5-6 filter - it is probably overkill for what we use it for.
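The OD figures translate directly into leaked flux, since out-of-band transmission is 10^(-OD). A rough sketch with assumed, purely illustrative photon rates:

```python
def out_of_band_transmission(od):
    """Out-of-band transmission fraction for a filter of optical
    density OD: T = 10**(-OD)."""
    return 10.0 ** (-od)

# Assumed, purely illustrative photon rates at the sensor:
bright_tube = 1e9   # a fluorescent tube in the field of view, photons/s
faint_star  = 1e3   # in-band signal from a faint star, photons/s

for od in (3, 4, 5, 6):
    leak = bright_tube * out_of_band_transmission(od)
    print(f"OD {od}: {leak:.0f} photons/s leak vs {faint_star:.0f} in-band")
```

With these numbers an OD 3 filter leaks a thousand times the faint star's in-band signal, while only at OD 6 does the leak fall to a comparable level - which is why the bright tube shows through but stars don't need the extreme filters.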
  10. This isn't really a surprise. Most filters have some out-of-band transmission (this is usually referenced as their optical density (OD) value). Even the best filters transmit some light outside the desired passband, even if it is a tiny fraction of a percent. If you put the filter in front of a very bright source then you will inevitably see the leakage (a fraction of a high luminosity can still be a high value). Even Baader don't claim a perfect filter (you can see some leakage in their filter profiles). Nevertheless, RGB filters are used primarily to obtain colour data for a broadband image, so some limited leakage can be OK as it isn't the dominant component. Also, you can expose on G2V stars to get a 'correct' colour balance (for our eyes) anyway, which would remove any leakage impacts.
  11. I'd agree. I've always thought that imaging area per "/pixel is a more useful representation of how 'fast' a system is (as above), as there are other ways to increase the speed of your system, such as simple binning (excusing any noise differences from doing so).
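A hedged sketch of the 'sky area per pixel' idea (the metric and the telescope numbers here are my own illustration, not a standard figure of merit): pixel scale is 206.265 × pixel size (µm) / focal length (mm), and multiplying the aperture's collecting area by the sky area each pixel sees shows why 2x2 binning quadruples the photon rate per pixel:

```python
import math

def pixel_scale_arcsec(pixel_um, focal_mm):
    """Image scale in arcsec/pixel: 206.265 * pixel size / focal length."""
    return 206.265 * pixel_um / focal_mm

def speed_per_pixel(aperture_mm, pixel_um, focal_mm):
    """Rough photons-per-pixel 'speed': aperture area (mm^2) times the
    sky area a pixel covers (arcsec^2). Ignores central obstruction and
    any noise differences; an assumed metric for comparison only."""
    scale = pixel_scale_arcsec(pixel_um, focal_mm)
    return math.pi * (aperture_mm / 2.0) ** 2 * scale ** 2

# 2x2 binning doubles the effective pixel size, quadrupling the metric
# without touching the focal ratio:
base   = speed_per_pixel(100.0, 4.0, 500.0)
binned = speed_per_pixel(100.0, 8.0, 500.0)
print(binned / base)
```

This is why focal ratio alone can mislead when comparing systems with different pixel sizes.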
  12. A well-figured, well-collimated, substantially larger reflector (including RCs and Cassegrains) should always be able to provide better views than a smaller refractor; I don't think this should be in doubt. Unfortunately, their general size, and the fact that it is easier to make good reflectors relatively cheaply, mean that there is a general dearth of premium reflectors on the market (from both an optical and a mechanical standpoint).
  13. I think this is a bit of a disingenuous comment - no one is stating "just stay with CCD"; it hasn't even been mentioned. This is about whether to consider, for the purpose of the usage (i.e. high usage at a remote site), that obtaining a version of the same CMOS sensor in its industrial form is a better option than a consumer version, because the manufacturer stands by it having a longer lifespan before it fails. Whether CCD or CMOS, it is better to have all the evidence so individuals can make a judgement as to whether the risk is acceptable. That some of the chips have a restricted usage is a valid concern where the camera may be used over a longer period and beyond the recommendations of the manufacturer. By being aware of the risk you can both ask further questions and interrogate whether the level of risk is acceptable for the activity you intend to use it for. The alternative is that we don't say anything, don't mention it, and if the worst did happen, shrug our shoulders and think "well I knew that, but I couldn't be bothered to mention it" (or worse, "I'll just be shot down for saying it"). How does that foster a positive astronomy community, if the immediate reaction is to attack any person who might be highlighting a potential risk for consideration?
  14. This is true, but they tend not to be used aggressively by consumers, who likely only take a relatively small number of shots a year. In addition they are usually used in high-light conditions, so exposures are thousandths of a second, with less stress (no active cooling etc.). As such, 300 hours of photos is a lot... On the other hand, astro imaging at a remote site wouldn't take that long in a year to get to 300 hours. Now, 300 hours is only what Sony guarantee. Sensors won't necessarily fail at this point, but the company will have made a judgement as to the probability of any one failing before this point and whether it is replaced under warranty. We also don't know the distribution of the failures. It may well be that if you take 600 hours of images per year then 50% will fail, or it might be 1600 hours per year; this will be internal knowledge only available to Sony. However, it is worth recognising a potential risk, especially when you are looking at a £4000+ camera. I can't say I don't worry that in a couple of years we will start to see increased numbers of cameras 'failing' - but we may not be there yet, as CMOS is still relatively young as a product in astroimaging.
  15. I don't think the issue is between grade 1 and grade 2 in the way CCD sensors operated (which I'd be quite happy to say isn't a problem for astroimaging). There is generally little consumer information about the differences, but I do note this on Astrograph's site: "Similar to area scan sensors, images produced by consumer grade sensors, provide a fast, high-resolution capture of the entire field of view. The main difference between them and industrial grade ones is that consumer sensors have a shorter life span, a shorter mean time before failure (MTBF) and, as a result, may cost much less than an equivalent industrial grade sensor. If you have a product that needs to be guaranteed for a long time, have a consistent image between each vision product, or if you need it to last for several years versus just one year you will want to consider an industrial area sensor instead. If you are purchasing hundreds of thousands of sensors per year and you have satisfied the other requirements, then consumer sensors may be the right choice for your application" (QHY600M - Why? (astrograph.net)). This principle would align with Sony placing an hours-of-use limit on consumer grade sensors (basically, if it fails and you've exceeded this usage then there is no warranty on the chip). For the majority, who get few decent nights per year in the UK, this limitation is unlikely to be hit (barring some spectacular weather) before any natural/extended warranty expires. However, at a remote site where you are imaging night after night it is definitely a consideration, as you wouldn't want a chip of this cost to completely fail in a (relatively) short period of time (which Sony are to an extent artificially extending by limiting the usage per year) while the old CCD is rolling along nicely (despite some cosmetic defects).
  16. Given the amount of time you are likely to image, you may want to check what grade of sensor you get in the Atik. I've seen that QHY have provided two versions before, but also note that Moravian now offer this sensor (plus the 2600MM equivalent) and state this on their website (C3 Series CMOS Cameras (gxccd.com)). Assuming there is nothing untoward in the advertising here, anyone doing long-term imaging at a remote observatory may find the industrial grade well worth considering, as I assume that any operation above 300 hours may void the warranty if the sensor goes bad (in effect more than 30 nights, assuming 10 hours per night but excluding darks and flats). That Sony have to limit it in this way does make me slightly wary about the longevity where people get lots of clear nights (hardly ever likely in the UK, mind!)
  17. The FLT91 is only just out, so there are unlikely to be many real-world users out there just yet. Nevertheless, in theory it does guarantee a certain quality of lens (and you can ask for a report at extra cost - which might incentivise them to supply a better one, especially for an early adopter). In terms of practical use, I always found their more integrated flattener/telescope designs easier to use - especially the variable adapters, which give a bit more scope to play with back focus. I also think a lot of the earlier mechanical issues are now resolved (poor focusers etc.). They are probably going to be much of a muchness with many similar triplets, which come from either China (Skywatcher) or Taiwan (WO). I tend to steer towards the latter (or Vixen/Tak, being Japan based), but that's more down to my own personal political sensitivities.
  18. I think this might all depend on what you prefer to image. If it is nebulae / clusters etc. then a wider field is better, and I would look at the FSQ85/106 with your camera. If you prefer galaxy imaging then you probably want a longer focal length instrument, something like the TSA120 / TOA130. You can also get reducers for these to help with imaging nebulae. Honestly, I would have thought the TOA130 (with a focuser upgrade) would be the lifetime scope, because there is unlikely to be anything really better in a way that would be noticeable.
  19. The difference is in how it is being interpreted. You are using it as evidence that the effect does exist. The papers demonstrate that mathematically the laws we have in place cannot differentiate between the two possibilities. Therefore this suggests that there may be something about the laws that needs to be refined. The observations suggest that there is no difference whether you look left or right - whether that is experiments that measure radiation pressure, the spectra of emissions, and so forth. Mathematical formulations are just a different way of describing the same principle, but they don't change fundamental parameters. We know Newton's theory was wrong from measurable evidence (e.g. Mercury's orbit), whereas relativity does resolve this, and we continue to test it all the time. We know something is 'wrong', as we can't combine relativity with quantum mechanics, but observationally both fit the theories. This is why we run ever more refined experiments (e.g. CERN, gravitational wave detectors). What we didn't do when we had a problem with Newton's laws was to arbitrarily adjust some of the parameters to make things 'fit' the observations. We could have done this, but we would not understand the universe any better (and in fact science would stagnate). This is why just arguing that you could adjust the factors in an equation between c and frequency isn't sound. It's in no way testable, has no physical rationale behind it, and drives us down a dead end. As I said before, this is the approach we took when we assumed the Earth was the centre of the universe. We devised ever more elaborate ways to explain the patterns of the orbits, but with no physical reality behind them. It failed because it could not predict the next object found.
  20. That's not what we are saying at all. What is being said is that the approach you are taking to justify the rationale is the same principle used by people to justify a flat earth (and also the same approach that made the Earth the centre of the cosmos hundreds of years ago). What you are doing is taking a principle and inventing reasons to fit the empirical evidence, which is the same approach. In effect you are trying to prove the principle correct, whereas science is about trying to prove your theory incorrect. Any theory can be *potentially* correct, but it has to be testable (and not just altered to fit what we observe). The paper referred to doesn't claim that c varies in different directions; it just claims that in principle special relativity doesn't prevent this from occurring. That isn't *proof* that it actually does. For that, the theory must make an observationally testable prediction that would differentiate the two possibilities. You have to explain why the hydrogen alpha line looks the same regardless of which side of the Sun we are on, and not just why the frequency has stayed the same but why all the observational physical processes we see (down to basic experiments like the double slit experiment) can be explained. Energy levels are also fundamental to quantum mechanics. If you change energy levels then the way objects bond, interact, and their forces would change (molecules might be more or less tightly bound, for example). Such changes to interactions would be observable.
  21. Yet this is exactly the problem. You are just claiming that you can do this without any empirical evidence. We can all do this; as noted before, I can likely come up with some reasoning as to why stars are actually glowing marshmallows on the other side of the galaxy. It sounds unrealistic, but I can just keep coming up with ever odder reasons against any argument against the idea while not putting forward any way of testing it. It is not scientific method to respond to a criticism of a theory by inventing something else to explain the criticism away. For example, at a basic level, if you change c in different directions then the energy conversion between mass and photons changes in different directions. If energy changes then the quantisation of energy changes, which in different directions means that frequencies change for the same energy levels. As such, hydrogen alpha emission/absorption should be different in different directions. This is simple to measure with a spectrograph. If energy quantisation levels change then the way that gases act in high-g environments changes - and we have standard candles for that sort of thing (white dwarfs). All of these observations would be slightly out in one direction compared to another if c is different in different directions. Just because c = fλ does not mean you can fiddle with c and not have consequences across a vast number of other fields. If you don't produce a testable theory then the proposal is no better than a 'flat earth' theory, because the same principles used by some to try and explain a flat earth are being used here. How about another argument against this, relating to GPS satellites: if you stand on the surface and look to one horizon, a GPS satellite would transmit its signal at one speed, and then as it travels across your horizon it would be transmitting in the opposite direction, so c would be different. The 'clock' has not changed, only the direction.
Any change in c would be noticeable, as you would then see a slight shift in position - so the question is why we don't see this, as a changing c should result in a relatively circular and consistent pattern of location changes despite the receiver being 'bolted' to the ground.
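The spectrograph test described above can be put into rough numbers. A sketch under the post's (counterfactual) premise that the transition frequency stays fixed while c changes, so λ = c/f shifts by the same fraction as c; the size of the assumed change is purely illustrative:

```python
H_ALPHA_NM = 656.28  # hydrogen alpha rest wavelength, nm

def shifted_h_alpha_nm(fractional_c_change):
    """Wavelength the Ha line would appear at if c differed by dc/c in
    one direction while the atomic transition frequency stayed fixed
    (an illustration of the argument, not established physics)."""
    return H_ALPHA_NM * (1.0 + fractional_c_change)

# Even a one-part-per-million directional change in c would move the
# line by ~0.00066 nm, within reach of a high-resolution spectrograph.
print(shifted_h_alpha_nm(1e-6) - H_ALPHA_NM)
```

Comparing the line position when looking towards opposite sides of the sky would then directly bound any directional dc/c.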
  22. I'm sorry, but this argument simply isn't science. You can create any number of arguments against the evidence to hand (e.g. the spectroscopy doesn't work because you change frequency/length), and hence you end up with an increasing divergence from physics just to 'prove' that a theory could be correct. This is the same as saying that all stars on the other side of the galaxy are made of bright glowing marshmallows, justified by any number of concoctions that can be thought up. It *could* be possible, because we can't physically check, but all the evidence suggests it is not the case, and hence we need an experiment that distinguishes the two possibilities. To argue your case you need to provide a test that would definitively prove or disprove the arguments that are brought forward. You have two here: first, that spectroscopy measures wavelength/frequency, which are related to c; second, that lasers would not work in the way they do if c changed in different directions. You now have to put forward tests that break these experiments but at the same time still allow c to change in different directions. Otherwise we will just go round and round.
  23. The standard back focus from the Vixen reducers is 63.5mm (confirmed here: Vixen FL55SS fluorite apochromat with Flattener HD and Reducer HD (skypoint.it)). If you are just using the flattener then you get an extension with the HD Reducer set which is 76mm long (Vixen Flattener HD Kit for FL55SS | Vixen), so given the threads that would give about 130mm to play with when just using the flattener. When considering attachments, the M60 adapter is less common to find. Vixen produce a 'rotator' that also converts to a standard T-thread, though TS Optics also do some other adapters, to M68 and M48 I believe.
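The ~130mm figure can be sketched as a simple budget following the post's arithmetic (the thread-engagement allowance below is my own assumption, not a Vixen specification):

```python
def flattener_backfocus_mm(flattener_bf=63.5, extension_tube=76.0,
                           thread_engagement=10.0):
    """Rough distance available when the 76 mm extension from the HD
    Reducer kit is used with the flattener alone: 63.5 + 76 minus an
    assumed ~10 mm swallowed by the threads."""
    return flattener_bf + extension_tube - thread_engagement

print(flattener_backfocus_mm())  # roughly matches the 'about 130mm' figure
```

Swapping in measured thread depths for your actual adapters would tighten the estimate.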
  24. Yes, sorry, I had seen your comment. I was trying to produce an experiment that could be tested without the need to measure the actual wavelength/frequency directly - something that would fundamentally 'break' without measurement if the speed of light were different in various directions (in this case the excited atom).
  25. The problem with articles like this is that they are really only pseudoscience. They postulate something but provide no tests by which it could be proven incorrect. I would, however, propose that lasers can disprove this postulation. Lasers are based on exciting atoms and using mirrors so that light of the same wavelength/frequency de-excites the atoms in a controlled way. The atoms have specific energy levels because the electrons can only occupy certain energy levels. As a laser uses a mirror to induce this, the light will be travelling in different directions, and hence the assumption would be that it is faster/slower in one direction. However, wavelength and frequency are linked, so a change in the speed of light would change the frequency/wavelength of the light. So what you should see over longer distances is a divergence and splitting of spectroscopic observations in one direction compared to the other (which is a lot easier to do than in white light, because we are talking about specific discrete frequencies). Over long distances this should be measurable. In addition, as the energy of the light would be different in the two directions, only in one direction would the interaction between the atoms and the light be most efficient (as the atoms have discrete energy levels). As such, the flux of the laser would differ between the two directions (as one direction induces energy changes more efficiently than the other).
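The divergence predicted above can be sketched numerically, again under the counterfactual assumption that the transition frequency is fixed while c varies with direction, so wavelength follows λ = c/f (the laser line and dc/c values are illustrative assumptions):

```python
def directional_wavelength_split_nm(rest_nm, fractional_c_change):
    """Wavelength difference between the two propagation directions of
    a laser line if c differed by dc/c between them while the atomic
    transition frequency stayed fixed (illustrative, not real physics)."""
    return rest_nm * fractional_c_change

# For a 632.8 nm HeNe line, an assumed dc/c of 1e-8 splits the two
# directions by ~6.3e-6 nm; over a long baseline the split accumulates
# and comparison of the counter-propagating beams should reveal it.
print(directional_wavelength_split_nm(632.8, 1e-8))
```

The point of the sketch is that the split scales linearly with dc/c, so even tiny anisotropies give a definite, measurable prediction.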