Everything posted by JamesF

  1. Actually, with apologies for wandering off on a tangent a little, a suitcase dob is something I really want to think about. The children's swimming club are doing a "club families" trip to Lanzarote (I think -- one of the islands anyhow) in October 2016 and it's tempting to go, but I'd be gutted if I went and didn't take a scope of some kind. As it's all school-age kids in the club it would be really nice to "spread the word", too. So a dob that would go on an aeroplane fairly easily would be an excellent thing to have. Given that I have a year and a half, perhaps that should be my first self-made mirror project. In fact, I've just looked at the BA website (as an example) and the maximum hand luggage size is 56cm x 45cm x 25cm. I wonder if it might be possible to make an 8" "cabin baggage" dob? James
  2. Certainly inspires me to give it a go (and I have a 14" glass blank waiting), but I want to try something smaller first I think. James
  3. Certainly does. It's looking absolutely beautiful there. James
  4. Very much enjoying it too. I'm most impressed so far. James
  5. As has been posted above as I was writing, I'm in agreement. It doesn't seem to make sense that the backfocus should reduce. James
  6. That would give you a larger image and change the point of focus, certainly. James
  7. In a colour camera each pixel corresponds to a particular colour in the image, red, green or blue. For pixels in red positions, for example, there is no green or blue data. It's derived from adjacent pixels of those colours and as such may not be entirely correct. Effectively the resolution is decreased slightly by doing this. Also, in order to make the pixels respond to the colours required you start with a mono sensor and add a filter for the required colour to the front of each pixel. That filter causes some loss of light and reduces the sensitivity of the sensor. With a mono camera you use a filter in the optical train for each of the three colours, so every pixel records data for each of the colours and you get no loss of resolution. Also, the external filters tend to allow more light to reach the camera sensor, so overall the sensor is more sensitive. Of course the (arguably) negative side of using a mono camera is that you have to take three sets of data; one for each of red, green and blue and then combine them afterwards to get a colour image rather than getting all your data in one hit. And in the case of Jupiter you'll be trying to do all that data collection in the two-minute period before any distortion in the image due to the rotation of the planet becomes apparent. And you need to buy the filter wheel and a decent set of filters, which could easily cost more than the camera. But... the result of using a mono camera should be a much better image than is possible from a colour camera of the same sensor size and resolution. James
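     A minimal numpy sketch of the undersampling described above, assuming an RGGB mosaic (the pattern varies by camera, and the function name here is just for illustration):

        import numpy as np

        def split_bayer_rggb(frame):
            # Split a raw RGGB Bayer frame into its colour sample grids.
            # Red and blue are each sampled at only a quarter of the
            # pixel positions, so the missing values must be interpolated
            # from neighbouring pixels when the image is debayered.
            r  = frame[0::2, 0::2]   # red samples
            g1 = frame[0::2, 1::2]   # green samples on red rows
            g2 = frame[1::2, 0::2]   # green samples on blue rows
            b  = frame[1::2, 1::2]   # blue samples
            return r, (g1, g2), b

        raw = np.arange(36, dtype=np.uint16).reshape(6, 6)  # fake raw frame
        r, g, b = split_bayer_rggb(raw)
        print(r.shape, b.shape)   # (3, 3) (3, 3): each colour undersampled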
  8. The 120MC takes a bit of time to get the hang of, but it's capable of good results. This is from the same camera with my 127 Mak: James
  9. Consider: The average user of the video mode of a consumer grade compact or DSLR camera is interested in the overall result. They won't be too fussed about noise or a bit of lost data unless it affects the video as a whole. They're certainly very unlikely to go pixel-peeping individual frames of a video. Manufacturers know this and take advantage of it to allow them to increase the resolution or produce frame rates that would otherwise be impossible with the same kit. When you want to use a camera for planetary imaging however, every pixel of every frame counts. You're after the most accurate record of the light that hit the camera sensor. That's somewhat at odds with the situation above. If two adjacent pixels are different colours, even only very slightly different, you want to know. You don't want some camera firmware deciding that they're close enough that in a 25fps video no-one will notice if they're made the same colour to reduce the required storage space for the frame. James
  10. "Proper" CCD (or other planetary imaging) cameras give you the data read out of the camera sensor unit. They don't mess about with it to reduce the data size or anything like that. As I said a few posts back, if you had the raw data for a 1920x1080 frame at 30fps for two minutes you'd be looking at 7GB-ish of data. That's exactly what a dedicated camera would give you in that situation. James
  11. I'd turn image stabilisation off. Not sure about focal length I have to admit. I'm not at all sure how that will work out. It may work in your favour if you use something rather shorter than you did for the image you've already posted. James
  12. Could be. Does the manual not explain the difference between the two settings? James
  13. If you look at things the other way around, a raw (i.e. still in Bayer format as it came off the sensor) colour video at 30fps and 1920x1080 resolution of two minutes duration should be about 7GB even if it's only 8-bit colour. I agree with Chris about the onion rings. I think it's an indicator of lack of dynamic range in the data. You probably need more exposure time per frame, though it's entirely possible the camera will not do that for you. They're not really designed with planetary imaging in mind... James
  14. H.264 is part of MPEG-4 (Part 10, AVC), I believe. It's almost always lossy compression, too. Unless you can find out exactly how the camera firmware works and decide what format you want from that, I think you might just have to accept that you're never going to get stunning results this way and work on the things you can change, such as accuracy of focusing. James
  15. AVCHD is, as far as I'm aware, layered on top of MPEG. If you take a raw video frame, convert it to MPEG and then convert it back you won't get what you started with, so data has been lost. JPEG is poor in two ways. First it's 8-bit only, so if you have more than eight bits of data they'll get thrown away. Second it's lossy compression. In my experience MJPEG is generally used where there's insufficient bandwidth for the full frame even with lossless compression, so the raw frame to JPEG conversion will be lossy too. In fact, if you look up JPEG on Wikipedia the first sentence says "JPEG ... is a commonly used method of lossy compression". Thinking about the way that JPEG actually works at an algorithmic level and how stacking works it seems highly unlikely to me that JPEG images are ever going to stack as well as raw frames. If you think of stacking as a process for reducing the random element of noise then I think it might not be unreasonable to view JPEG conversion as actually modifying the noise in a manner that makes it harder to reliably remove the random component. James
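     A small demonstration of the lossy round trip, using Pillow (Pillow is just a convenient choice here; any JPEG encoder shows the same effect):

        import numpy as np
        from io import BytesIO
        from PIL import Image

        rng = np.random.default_rng(0)
        original = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)

        # Encode to JPEG in memory and decode again.
        buf = BytesIO()
        Image.fromarray(original).save(buf, format="JPEG", quality=95)
        decoded = np.asarray(Image.open(BytesIO(buf.getvalue())))

        # Even at quality 95 the decoded pixels differ from the originals,
        # so per-pixel data has been altered before stacking ever sees it.
        print(np.abs(decoded.astype(int) - original.astype(int)).mean())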
  16. MJPEG is definitely not good for planetary imaging. JPEG is (sometimes very) lossy, so you'll struggle to get a decent image out of it. James
  17. I think that using AVCHD is probably not a good idea. Under the hood it's probably using lossy compression on the image data which is unhelpful. If you can get a lossless AVI format from the camera that would be great, but it's entirely possible the camera just won't do that in which case I'm afraid you're going to struggle to get decent images out of it. I do also wonder if the camera just isn't sensitive enough for planetary imaging. James
  18. If you think it would be useful though, there's a build-dmg.sh script that comes with the sources for oacCapture. By all means take that, rip out what might be useful to you and use it however you wish. It's often much easier when you can see how someone else has done it to start with. James
  19. If you want to use the OSX installer then you have to create a .pkg file, which is possible, but as far as I recall requires signing keys and registration with Apple etc. What I (and many others) do is to package everything as a .dmg disk image which means it's just a "drag and drop" job to install where you want it. However, that does mean that everything needs to be completely relocatable (because it can be installed anywhere) and it's "the Apple way" that all non system software dependencies are bundled with the package too. That includes any external libraries that aren't part of the core OS installation and suchlike (which also may need paths changing internally). That really seems to be what Mac users expect. I get the impression that in the main they're not happy about being asked to run scripts or do anything from the command line. James
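     A minimal sketch of the disk image step, assuming the relocatable .app bundle (with its dependencies already bundled and internal paths fixed up) has already been assembled; the paths and volume name here are illustrative only:

        import subprocess

        # Package an already-assembled, relocatable app bundle as a
        # compressed, drag-and-drop disk image using hdiutil.
        subprocess.run([
            "hdiutil", "create",
            "-volname", "oacCapture",            # name shown when mounted
            "-srcfolder", "build/oacCapture.app",
            "-format", "UDZO",                   # compressed read-only image
            "-ov", "oacCapture.dmg",
        ], check=True)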
  20. Sounds good. Unfortunately, at the moment it looks as though I will be unable to help for a while. Unless it's possible to polar align without being able to see any stars whatsoever. For weeks. James
  21. Aha! You star, Robin! I upgraded the firmware, but no change. So I started fiddling with what I could in the non-ASCOM control program. The "Recalibrate" button moved the wheel, so it doesn't look like a hardware fault. But it still wouldn't move on a reset. From the manual it looks like the "movement" button should pop up a dialog box that allows me to check the speed, but, oh dear! It's crashed. So I switched back to Linux, connected to the serial port directly and entered "SA" to set the speed to 100%, then "R0" to reset the wheel, and it works! Weird problem. I wonder if there's some sort of noise when the cable is disconnected that causes the speed to be set to zero? I did try to read the current speed in Linux, but before I could do a proper reset the firmware seemed to be giving odd results to commands and I couldn't get it. James
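     For reference, a hedged pyserial sketch of that fix; the "SA" (speed to 100%) and "R0" (reset) commands are as described above, while the device path and baud rate are assumptions:

        import serial

        # Port and baud rate assumed; check what the wheel enumerates as.
        with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as wheel:
            wheel.write(b"SA")       # set movement speed back to 100%
            print(wheel.read(64))    # firmware status response
            wheel.write(b"R0")       # cold reset; wheel should recalibrate
            print(wheel.read(128))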
  22. Sending a cold reset (R0) command to the wheel I get:

         Restart
         Xagyl FW5125V2
         FW 2.1.7
         Initializing
         Calibrating
         ERROR 3 - Wheel stalled
         ERROR 1 - Initialize failed to complete

      I'm going to have a go at a firmware update after dinner. James
  23. Well, bad news to report. The new PCB seems to have failed in exactly the same way as the old one. It was working happily last night, but tonight will no longer drive the motor. I've just emailed Xagyl, but at the moment I am not a happy bunny. James
  24. You do have to make sure the wires go back into the right holes, too. Replacing like with like is clearly no bother. It turns out, however, that whilst the screen-printing on the V2 PCB suggests (with a little box in one corner of the marker for the connections) that the orientation may need to change, it doesn't. James