Everything posted by vlaiv

  1. They actually use two sets of 1D + temporal information to construct 2D + time, right? Does scattered light "freeze" its frequency at the moment of scattering? I'm trying to figure out how the chirp plays into all of that - it is probably combined with diffraction to separate different moments, but I'm failing to see how that would be possible if the scattered light continued to change frequency over time.
  2. Yep, that part is evident. There is one more detail that I might have gotten wrong - did it take 8 hours to "analyze" the recording?
  3. You are sort of right with that statement. There is only one exposure as far as readout goes, but a clever mechanism allows the single-row (1D spatial) signal to be shifted in the Y direction by a changing electric field, producing multiple images spread out in time. I'm not sure how sharp the stair-stepping can be, as it would require very abrupt changes in the electric field to create fast transitions in deflection angle, but I guess it's possible? It does not explain this, however: how can such a device be used to record three dimensions - two spatial and one temporal?
  4. @Ouroboros Just to be clear on the above post: the first is a shutter speed of a sub-picosecond, and the second is having an FPS that matches sub-picosecond timing. I have no issue with the first. I'm just saying the second is impossible.
  5. When you say this, do you mean: - you can record an event that lasted a sub-picosecond with this type of camera, or are you saying: - you can record an event that lasts a sub-picosecond, every sub-picosecond, with this type of camera?
  6. Let me just contrast that with the following argument (to be honest, I looked at the streak camera, saw this, and did not bother to read the rest). Say you want to capture the scatter from light propagating in some medium, and you capture frames in succession such that two adjacent frames are, say, 1 millimeter apart in the scattering sample. This simply means that whatever travels at the speed of light can cross 1 mm between two frames. We have a scientific sensor. It does not need to be large - let's make it rather small, say 3x2 mm with a handful of pixels. The first frame is read out of those pixels and the signal is sent out. That signal can travel at most 1 mm before the next signal - the next frame - needs to travel down the same line. We can't have any sort of multiplexing for readout in that case, as we would need to put signals from many pixels into one wire at the same time - but we can't do that, as the whole millimeter is taken up by one pixel. This means we would need a wire for each pixel. Even if you do that - no recording device can record this information unless there is a whole recorder per pixel, because at some point you need to do some multiplexing. I don't even want to start on issues like interference at such high clock speeds with multiple wires coming from the pixels, and ultimately - some part of the system needs to move slower than the speed of light. We have electrons in the potential wells of the pixels and electrons in the electronic device that records the signal. Those don't travel at the speed of light. In that context - a streak camera with a "sweep circuit" would need the sweep to move faster than the speed of light in order to sweep the necessary surface in the needed amount of time (remember - we are dealing with changes that occur while light moves 1 mm or less - how is the sweep circuit going to move more than 1 mm in the time it takes light to move 1 mm?).
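The time budget in the argument above takes only a couple of lines of arithmetic to make concrete (the 1 mm frame spacing is the figure from the post; the rest is just the speed of light):

```python
# Back-of-envelope check: how long does light take to cross 1 mm,
# i.e. the time available between two consecutive frames in the
# argument above? (1 mm spacing is the post's figure.)

C = 299_792_458.0          # speed of light in vacuum, m/s

frame_spacing_m = 1e-3     # light travels 1 mm between frames
frame_time_s = frame_spacing_m / C

print(f"time per frame: {frame_time_s * 1e12:.2f} ps")    # ~3.34 ps
print(f"implied frame rate: {1 / frame_time_s:.2e} FPS")  # ~3.0e11 FPS
```

So every readout, transfer, and recording step would have to complete within roughly 3.3 picoseconds per frame.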
  7. On the page you linked: So it's a bit like, instead of capturing lightning as it progresses, you capture this: and then do what exactly to get the actual motion? You make things up ... Not science in my book ....
  8. OK, here it is again. I just opened the link again, and the title is: 10 Trillion FPS, and two guys stand there and talk about actually recording at 10 trillion FPS. How many pixels, at what bit depth? Does anyone care to calculate the data throughput needed for that? Or the speed of readout in the electronics, and how high the clock rates would need to be to do it? Then add the fact that electronics also works via the EM field, like light, and that it takes time for electronic impulses to travel a certain distance. At very high clock speeds (already in the gigahertz range) - you have issues with the length of your wires. Want to go to terahertz speeds - how are you going to sync the left and right sides of your microchip when there is not enough time for a signal to travel that distance?
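As a rough illustration of the throughput question raised above, here is a back-of-envelope sketch; the sensor resolution and bit depth are placeholder assumptions of mine, since the video does not state them:

```python
# Hedged back-of-envelope: raw sustained data rate at the claimed
# 10 trillion FPS. Sensor size and bit depth below are assumptions,
# chosen deliberately small to make the point.

fps = 10e12                 # claimed frame rate
width, height = 100, 100    # assumed tiny sensor
bit_depth = 8               # assumed bits per pixel

bits_per_second = fps * width * height * bit_depth
terabytes_per_second = bits_per_second / 8 / 1e12

print(f"raw readout rate: {terabytes_per_second:.1e} TB/s")  # ~1e5 TB/s
```

Even with these deliberately tiny assumed numbers, the sustained rate lands around a hundred thousand terabytes per second, far beyond any real readout electronics.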
  9. My first impression was that it was fake. I did not do the exact math, and to be honest, I just skipped through the video to see what it was all about, but when they started talking about an incredible number of FPS - my gut feeling was to think along these lines: - in order to record a smooth image at a certain FPS - you need to have enough photons - if you capture that many photons in such a short amount of time - it means that the total flux must be extraordinary - that many photons, each with a certain photon energy, equals an enormous amount of energy. A gut-feeling "calculation" pointed out that it's just too much energy in the light to be feasible / real.
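A rough version of that gut-feeling calculation can be written down; every input here (photons per pixel, sensor size, wavelength) is an assumption of mine for illustration:

```python
# Gut-feeling energy estimate, made explicit. All inputs below are
# assumed illustration values, not figures from the video.

H = 6.626e-34          # Planck constant, J*s
C = 299_792_458.0      # speed of light, m/s

fps = 10e12                 # claimed frame rate
photons_per_pixel = 1000    # assumed, for a reasonably smooth image
pixels = 100 * 100          # assumed tiny sensor
wavelength = 500e-9         # assumed green light, m
sensor_area = 3e-3 * 2e-3   # assumed 3 x 2 mm sensor, m^2

photon_energy = H * C / wavelength                      # ~4e-19 J
power = fps * photons_per_pixel * pixels * photon_energy
irradiance = power / sensor_area

print(f"optical power on sensor: {power:.1f} W")
print(f"irradiance: {irradiance / 1e6:.1f} MW/m^2")
```

With these assumptions the sensor alone would be absorbing tens of watts of light, an irradiance thousands of times brighter than direct sunlight, which is the "too much energy to be feasible" intuition in numbers.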
  10. If I recall correctly, a prominent lens maker once said that any properly designed and well-executed negative doublet does not introduce visible aberrations into the image. Not sure there is a need for an "Ortho" Barlow if the above is true ...
  11. We really don't need to. We have enough understanding and a good enough working theory of color and light that we can be certain we can process even astronomical images realistically. That does not mean that we should, and in most cases we need to "augment" them to show detail that would otherwise not be visible if we matched the processing to what our eyes would see. We can, however, be selective in the way we apply that augmentation - for example, we can simply elect to represent the image as it would look if the light were brighter than it is - either by having brighter sources, or by the sources being closer to us. None of that is mandatory for making a nice image, but it is there should one like to utilize such an approach.
  12. Most professional astronomers do data reduction in such a way. If you go to any server and download data - you'll get nicely calibrated and reduced data with the complete workflow of how it was done - so that you know what sort of data you are using. So that is the first stage - use well-known, deterministic algorithms up to the point where you have a linear stack ready for processing. The next stage is automated as well in some cases - we just don't think about it. Every time you take your phone or your digital camera and do the following: 1. Set auto 2. Shoot image 3. Download the jpeg and/or send the image to the printer - that is what happens: a predetermined, exact sequence of processing steps is taken to produce a uniform-looking image. Not only uniform-looking - but also a "correct" image. You will agree 100% that an image taken by a digital camera shows what you are seeing with your eyes.
  13. I think the days of hand figuring are long gone. It is all done by machines, and I don't think the quality of the figure is related to time anymore. It can be related to what a particular machine can achieve, but I'm fairly sure it takes the same amount of time (and thus a comparable share of the cost of making the item, except for the cost of material) to make lenses of different quality.
  14. For me, there is a clear distinction between the two. In the case of stacking images - there is a precisely defined mathematical workflow that has been proven to be correct. In the second case - it's much like giving someone a list of numbers and asking them to guess which number comes next. Sure, it is an educated guess (the AI has to be trained) - but it is still a guess and as such - not 100% right all the time. When solving a mathematical equation, you don't say - here, this is the solution, but I'm 87% confident it is the right one. If you correctly apply mathematical principles - you can be 100% sure you have the right solution. Since I consider astronomical imaging to be more than producing pretty pictures - even if that one step above is just to "document" what is out there (rather than doing additional measurements and analysis) - I value the workflow that does not utilize guesswork.
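The stacking half of the comparison above can be demonstrated with a tiny simulation: averaging N independent noisy frames reduces the noise by a deterministic factor of sqrt(N) - no guessing involved. The signal and noise values here are arbitrary illustration numbers:

```python
# Demonstration that stacking is deterministic mathematics: averaging
# N noisy measurements of the same signal shrinks the noise standard
# deviation by sqrt(N). Signal/noise values are arbitrary.

import random
import statistics

random.seed(42)

TRUE_SIGNAL = 100.0   # the "real" pixel value
NOISE_SIGMA = 10.0    # per-frame noise standard deviation
N_FRAMES = 64         # frames in the stack
N_PIXELS = 2000       # independent pixels simulated

# Each pixel is measured N_FRAMES times with Gaussian noise,
# then the frames are stacked (averaged) per pixel.
stacked = [
    statistics.fmean(random.gauss(TRUE_SIGNAL, NOISE_SIGMA) for _ in range(N_FRAMES))
    for _ in range(N_PIXELS)
]

residual_sigma = statistics.stdev(stacked)
print(f"single-frame noise: {NOISE_SIGMA}")
print(f"stacked noise: {residual_sigma:.2f} (theory: {NOISE_SIGMA / N_FRAMES ** 0.5:.2f})")
```

The measured residual noise lands right on the predicted value of 10/sqrt(64) = 1.25, which is the sense in which the workflow is "proven correct" rather than estimated.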
  15. I would not otherwise post this in the dedicated threads regarding said scope, but I think I can post it here because of the way the thread topic was phrased. Is it too good to be true? Well - consider this: you can't really purchase an 80mm F/7.5 achromat for less than say 250 euro - but I managed to purchase said lens for 35 euro off AliExpress. I won't go into detail about how much the other components cost - except to say that I'm confident my DIY scope based on this lens will come in at less than 100 euro. The rest is just labor costs and profit margins. If someone decides to cut their profit margin from say 100% to 60% - what you get is a bargain scope.
  16. Stars are effectively at infinity, so it really does not matter at all. You can have your lens mounted at any angle with respect to the rotation of the mount and at any distance - as long as the lens follows the mount's rotation - you'll be fine. From what I've gathered in this thread - you really have two options: 1. guiding 2. encoders. The first is more involved but less expensive; the second is less involved but probably way over your budget.
  17. @JoeKitchen I think I understand your requirements a bit better now. I'm afraid that I will need to "swamp" you with a bit of technical stuff here, but as far as I can tell - you'll be able to follow it. The first thing to get a grip on is pixel scale, and there is a simple formula for it: pixel_scale = pixel_pitch * 206.3 / focal_length, where pixel_pitch is in micrometers and focal_length is in millimeters. The above gives the pixel scale expressed in arc seconds per pixel. In astrophotography, depending on the state of the sky, you can expect stars to be between 2" and say 5" (that is arc seconds) FWHM. FWHM is full width at half maximum and is a measure related to the roughly Gaussian shape of a star image. You don't need to go finer than FWHM/1.6 in sampling rate, as that leads to oversampling. To put this into context - let's take your current 6um-pixel camera and say a 250mm lens. Let's also say that you get 4" FWHM stars in your image (actual star size will depend on several factors - sky conditions / how still or turbulent the atmosphere is, how good your tracking is and how sharp your optics are). The ideal sampling rate for such stars is 4 / 1.6 = 2.5"/px. With 250mm and 6um pixels you'll be at 6 * 206.3 / 250 = ~4.95"/px - so you won't be oversampled, you will be undersampled - but this is not a bad thing. In fact, this is a very good sampling rate for wide field. This number is also important for some other considerations - namely the mount that you are using. Every mount, if not guided, will suffer from two main types of tracking error: DEC drift due to error in polar alignment, and RA drift due to periodic error of the drive. DEC drift can be calculated using the following calculator: http://celestialwonders.com/tools/driftRateCalc.html Say that you are off by 10 arc minutes from the NCP in your polar alignment and you are shooting a target at 30 degrees DEC. Your total drift during the exposure will be 11.34 arc seconds - which is about two pixels if you shoot at 4.95"/px.
This might be acceptable, as it won't create star streaking - just a slightly oval shape - and whether that is noticeable in your backdrop for the set depends on how well resolved the background is. In any case - the above will give you an idea of whether you can shoot unguided for 5 minutes. The second thing that happens with mounts is periodic error. It is there because of imperfections in the gears of the mount's drive train. They are not perfect circles (think slightly egg-shaped), so although they rotate with constant angular velocity, their radius varies from place to place and they transfer the rotational motion at different rates at different times. In any case - this periodic error is expressed as a peak-to-peak figure together with something called the worm period. For example, an HEQ5-class mount out of the box has about 30 arc seconds p2p periodic error and its period is 638s (if I recall correctly). This means that the mount will be on the spot, then lead a bit, then return to correct, then trail a bit, then return to the correct position, over a period of about 10 minutes, with a total deviation of 30 arc seconds. Another way to look at it - it will "travel" in error for 60 arc seconds in 10 minutes, so the average RA drift will be about 6 arc seconds per minute, or 1 arc second every 10 seconds. Mounts never have such smooth periodic error, and sometimes their drift is less and sometimes more than the average. This means people can take longer exposures than the average drift rate suggests - but also that they need to discard some subs (depending on where in the worm cycle the image was taken). Here is a graph of such a cycle for an EQ6 mount: The above graph is used to create a general correction - what is called PEC, periodic error correction (an average of several cycles of such leading / trailing - btw, if you read the numbers, the above mount has about 26" p2p). Now you can compare the periodic error and p2p with your working resolution.
If you, for example, use a 28mm lens with 6um pixels, then you'll be working at 6 * 206.3 / 28 = ~44.2"/px. The above p2p error is simply insignificant in this case - it all falls on a single pixel, so you won't see any elongation. However, for the first case at ~5"/px - you will see elongation in some exposures. Bottom line - compare your working resolution with what the mount can deliver. In some cases you'll need to use PEC, but you can always guide, and even the simplest guide setup (which consists of a guide scope, guide camera, USB cable and software running on a laptop connected to the mount - about $300-$400 of additional expense) will give you performance that is more than sufficient for your working resolutions. I haven't even touched on the fact that the sharpness of the lens will contribute to how big the FWHM of the stars in your image will be - just to say that it lowers the sharpness of the image and increases FWHM, so it decreases the need for high precision in the mount and guiding. Hope the above gives you some idea of what you should be considering, and sorry for the long post.
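The pixel-scale and drift arithmetic above can be wrapped in a couple of small helpers for comparing lens/camera combinations (a sketch; the function names are mine, the constants and worked numbers are from the post):

```python
# Helpers for the formulas used above: pixel scale, critical sampling,
# and average RA drift from peak-to-peak periodic error.

def pixel_scale(pixel_pitch_um: float, focal_length_mm: float) -> float:
    """Pixel scale in arc seconds per pixel."""
    return pixel_pitch_um * 206.3 / focal_length_mm

def ideal_sampling(fwhm_arcsec: float) -> float:
    """Coarsest sampling that still fully samples stars of given FWHM."""
    return fwhm_arcsec / 1.6

# 6 um pixels with a 250 mm lens vs ideal sampling for 4" FWHM stars:
scale_250 = pixel_scale(6, 250)   # ~4.95 "/px
target = ideal_sampling(4.0)      # 2.5 "/px
print(f"{scale_250:.2f} vs ideal {target:.2f} -> undersampled (fine for wide field)")

# 6 um pixels with a 28 mm lens:
scale_28 = pixel_scale(6, 28)     # ~44.2 "/px

# Average RA drift for 30" p2p periodic error over a 638 s worm period
# (error travels 2x the p2p amplitude per cycle):
p2p, worm_period_s = 30.0, 638.0
avg_drift_per_min = 2 * p2p / (worm_period_s / 60)   # ~5.6 "/min
print(f"28 mm lens: {scale_28:.1f} arcsec/px, avg drift {avg_drift_per_min:.1f} arcsec/min")
```

The ~5.6"/min result matches the post's rounder "about 6 arc seconds per minute", and it is immediately visible that 44"/px swallows the whole periodic error within one pixel.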
  18. The problem is that we don't really understand what the requirements for the images are. I can imagine taking a very large format image and using it as a drop-in background for a panning shot, for example. In that case - the image needs to be quite a bit larger than the actual size of the shot (think scrolling background). On the other hand - how big do stars need to be in that shot? Tiny - barely visible, or larger? How much distortion per star is acceptable? Is pixel-level precision required, or maybe it does not matter because the image will be resampled for use and stars will end up tiny?
  19. The fact that you work as a photographer works against you in this case. Many daytime photography concepts are useless and even misleading in astrophotography. Given that you work with a 6um pixel size and use short focal length lenses - most star trackers will do a good job, but you really need to think in terms of: 1. pixel scale - or how many arc seconds per pixel you want to have in your final shot 2. what sort of sharpness the lens provides. Camera lenses are optimized for close focus, or rather a range of foci, while astronomical telescopes are always optimized for infinity focus. They give a much sharper image that is limited only by the physics of light rather than by the design of the lens itself. Do look up the above concepts and learn a few things about wide-field astrophotography, as it will benefit you, but in order to get started quickly - I agree, look into small portable mounts that utilize strain wave drives. These will suit you best. https://www.firstlightoptics.com/harmonic-drive-mounts.html Small enough for good portability, able to carry enough weight, and precise enough for what you need. I'm afraid that I can't recommend any specific model as I have not worked with any of them nor taken a keen interest in their performance.
  20. That is because MTF has minimal effect when observing extended objects. MTF shows what sort of sharpness in transition you can expect when going from very bright to very dark and vice versa - think stars (how large the Airy disk is - the transition from bright core to the black background of space), or planetary detail - again a transition from bright to dark details, regardless of whether you are observing craters or festoons. It is important at high magnification, and most observing of extended objects (by that I mean DSOs that have surface brightness) is done at low power, where differences in MTF between scopes are negligible. To address the second part - we can see the Cassini division in a 3" scope because seeing it has nothing to do with the diffraction limit. The diffraction limit is a measure of how much we can resolve - not how much we can see. Expecting otherwise is akin to expecting not to be able to see stars in a 3" scope - because stars have a much smaller angular size than the Cassini division. We see stars that are tiny - just fine. What we can't do in a small scope is resolve close binaries. We still see them - there is a bright spot, but it is not quite clear whether that is one star or two (or perhaps something totally different - a SpongeBob-shaped bright object in the sky). The same thing happens with the Cassini division - we can see that there is some sort of dark feature on a bright background - but we don't have a clue what it is - is it just one line, or several lines close together, or a row of dancing monkeys holding hands? The resolving power of the scope is about the ability to resolve - and more aperture is needed to resolve smaller things - but to see contrast (and this is partly related to MTF, as MTF dictates how abrupt that contrast change is) - even a small aperture is enough.
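The "see vs resolve" point can be put in numbers. Using the common Rayleigh-criterion approximation (~138/D arc seconds for an aperture of D mm in green light) and a rough ~0.7" angular width for the Cassini division near opposition (my figures, not from the post):

```python
# Sanity check of "see vs resolve": a 3" scope's resolving limit vs
# the angular width of the Cassini division. Both input figures are
# common approximations, assumed here for illustration.

def rayleigh_limit_arcsec(aperture_mm: float) -> float:
    """Rayleigh resolving limit in arc seconds, green light."""
    return 138.0 / aperture_mm

aperture_3in_mm = 76.2        # 3 inches in mm
cassini_width_arcsec = 0.7    # approximate, near opposition

limit = rayleigh_limit_arcsec(aperture_3in_mm)
print(f"3-inch scope resolves ~{limit:.2f} arcsec; Cassini is ~{cassini_width_arcsec} arcsec wide")
print("visible but not resolved" if cassini_width_arcsec < limit else "resolved")
```

The division is roughly 2.5x narrower than what a 3" aperture can resolve, yet it still registers as a contrast feature, exactly as the post argues.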
  21. These two would be my choices: Budget: https://www.firstlightoptics.com/evostar/sky-watcher-mercury-707-az-telescope.html A bit more money and a bit more serious instrument: https://www.firstlightoptics.com/evostar/sky-watcher-evostar-90-660-az-pronto.html Both should be fairly easy / straightforward to use and look like a telescope is "supposed to" look. They will also work for daytime observation (although left and right will be swapped if you use the stock diagonal, but there are accessories like this one: https://www.firstlightoptics.com/diagonals/skywatcher-45-erecting-prism.html that will put the eyepiece in a more comfortable position for daytime viewing and also provide a correctly oriented image). For nighttime observation the stock diagonal will be the better choice, as it will give you a better image and a viewing position more suited for objects high in the sky.
  22. Small update. Tested it today and it looks very promising. Even placed on a table with the eyepiece (32mm Plossl) hand-held about 555mm away from the cell, it gives a surprisingly sharp image.
  23. Some time ago I decided to purchase a doublet achromat lens from AliExpress. It arrived in great condition, but I hadn't managed to find time to do anything meaningful with it until now. Here it is, mounted in a 3D-printed lens cell. Hopefully I'll get aluminum tubing for the OTA and dew shield in a few days so I can start assembling this DIY scope. Btw, the lens is an 80mm F600 one.
  24. The smallest usable exit pupil is a very personal thing. I tend to be happiest with about a 1mm exit pupil. There are several things that contribute to what people find suits them. The existence of floaters, for example - floaters tend to be noticed more below a 1mm exit pupil. Another thing is visual acuity. Some people have sharper vision than others. Those that do don't like too much magnification, as it makes the image look soft. Others find that they can see more detail with increased magnification. It will to some extent depend on the target too. Some targets look too dim with smaller exit pupils, while others have plenty of light and don't cause such issues.
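Exit pupil itself follows directly from aperture and magnification (equivalently, eyepiece focal length divided by the scope's focal ratio). A small sketch, using an assumed example scope, shows which eyepieces land near the ~1mm figure mentioned above:

```python
# Exit pupil = aperture / magnification. The 200 mm F/6 scope below
# is an assumed example, not a scope from the post.

def magnification(scope_fl_mm: float, eyepiece_fl_mm: float) -> float:
    """Magnification of a given scope/eyepiece pair."""
    return scope_fl_mm / eyepiece_fl_mm

def exit_pupil_mm(aperture_mm: float, mag: float) -> float:
    """Diameter of the light beam exiting the eyepiece, in mm."""
    return aperture_mm / mag

aperture, scope_fl = 200.0, 1200.0   # assumed 200 mm F/6 scope

for ep in (5, 6, 10, 25):
    mag = magnification(scope_fl, ep)
    pupil = exit_pupil_mm(aperture, mag)
    print(f"{ep} mm eyepiece -> {mag:.0f}x, exit pupil {pupil:.1f} mm")
```

Note the shortcut: on an F/6 scope, a 6mm eyepiece gives exactly a 1mm exit pupil, since exit pupil equals eyepiece focal length divided by focal ratio.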