Everything posted by AngryDonkey

  1. When it reaches the limits it just keeps on going, hoping for the best 🙂. With the Oculus you can actually set 0ms as well. This should give a slightly better picture during daytime imaging. Mike
  2. If my recent kickstarter projects are anything to go by then it will probably be 2020 😂
  3. Last chance to get Voyager (no affiliation) at 69 Euros 😀, tomorrow it will be more expensive (not sure how much, hopefully not 70 😂). I've been trialling it a bit over the last few weeks and think it's worth changing over to. A lot of work for sure but I'm hopeful it will pay off. I really love the flexibility in this application: you can pretty much decide to do anything with it and are not restricted to what's on offer. I can even write my own program and use Voyager as the engine for it. The website linked above seems to have been removed; the only page it can be accessed from now is: http://www.starkeeper.it/software.htm
  4. You can make a nice table out of that palette!
  5. I'm quite interested in this as it's a question I have asked myself in the past (but I'm not really able to answer it satisfactorily). Intuitively I have a feeling that it's not quite as simple as this, but I'm not entirely sure why. Can you maybe expand a bit on why you think this is true? For example, on an average imaging night of 6 hours total, why do you think that an image created from 2h R, 2h G, 2h B would be better than, let's say, an image of 3h L and 1h each of RGB? Would the 3h luminance not give a better result than the luminance created from RGB? And how can we quantify the amount of colour data required to give colour to the luminance data and still create a good image? (A rough back-of-envelope comparison is sketched after this list of posts.)
  6. Best thing is to go 1 or 2mm short and then use thin spacer rings to get the best distance by trial and error. It also probably depends on whether the camera manufacturer includes these things in the quoted distances, i.e. optical vs. physical path from flange to sensor (a small worked example follows after this list of posts).
  7. Hello! I'm afraid this will be yet another DIY all sky camera build! 😂 Hopefully interesting though...

     While developing my all sky software (shameless plug, see signature) one of the biggest problems is that I don't actually have a permanent all sky camera setup myself. I live in the middle of a big city with massive light pollution where the summer temperatures are just creeping up to 40C+, not ideal... So for a while I have been thinking about setting up a remote all sky camera to help with the testing of the AllSkEye app. Initially the idea was to mount it at a relative's house, but once I looked into what would be required to make it fully remote controllable I thought that if I go to all that trouble, I might as well look for a location with great weather and dark skies.

     After a few inquiries I got a really great response from Jose at the E-Eye remote hosting facility in Spain. This was fantastic news because not only will the camera have nice weather and dark skies, but the facility also has fibre broadband, which is almost a must for what I have in mind further down the road (I am also planning to transfer some image data to cloud storage for archiving and further processing, and that could potentially be a lot of data). So this is where it is going to go (all being well and my 3D printer not packing up!). I'll try to document my progress here, maybe it will be helpful for someone.

     The basic idea is pretty simple: set up a completely autonomous and remotely controllable all sky camera. Sounds easy enough... Well, let me tell you, it is not! To anyone who has set up their own remotely hosted scope, my hat is off to you, it's not an easy task!

     Initially I split this project into two parts:
       • The camera, lens, housing and everything that goes with it
       • The control box that will control the above

     Unfortunately I don't have time just now to go into any details but will hopefully be able to do so soon. I just thought if I don't start this thread soon I never will 😀. The state of play at the moment is that the control box is pretty complete and the camera housing is nearing completion (the 3D printer is very busy, not a fast manufacturing process unfortunately). Mike

     Here are a few pictures of what it looks like at the moment:
  8. The Oculus comes in two variants depending on the lens. One is 180 degrees, the other about 150 degrees, and you have the latter. When I got mine I had exactly the same thoughts and was thinking about changing the lens (which is easy enough) but to be honest I'm glad I didn't. Unless you really need to see those edges, the 150 degree lens uses the available sensor space much better and therefore gives a slightly better image compared to the 180 lens, where a lot of the sensor area remains unused. Of course you need better software to run it! 🙂 (see signature)
  9. Sorry, not sure but I think if you run it with EQMod then it will be possible (and many other things).
  10. They don't lose their place, but the model is not perfect because of imperfect polar alignment, mechanical issues etc., so there will always be some error. Your mount (which is excellent by the way) is capable of building up a pointing model, i.e. the more points you plate solve and sync to the mount the better it will be (if the points are well spaced over the entire range). This model will then compensate for the above and make slewing more accurate. Avalon even have a tool for this (XSolver) where these models can easily be created and even saved (although this really only makes sense if you have a permanent setup). For imaging I never build up a model though, I don't think it's required. I put the mount in the home position based on the markers on the mount. Then I use the 'Sync Home Position' function to carry out a rough sync. Then I slew the mount to a target, plate solve and sync, slew again, plate solve and sync, etc. until I am on target.
  11. Was there a large slew to the target? The easiest thing would be to plate solve again once you are very close to the target (after the first slew) and then do another slew. That's how the centering process works in most software, i.e. slew (and/or rotate), solve and repeat until you are close enough (see the centering-loop sketch after this list of posts). SGP for example will do all this automatically until you are on target with the correct angle (manual rotation of scope/camera required if you don't have a rotator). Maybe your software has a similar 'center' function? It's also very helpful to set up your field of view (camera/scope) and rotation in your planetarium software so that you already know what to expect. I use SkySafari on my mobile which works great (but I'm sure you can do this on most other programs). When you then take a longish test exposure you can easily see if you are on target.
  12. I've recently been asked about this scope and went back over the available info to form an opinion. What struck me most is that the current retail price is now 3000 euros (+ shipping). Even at the earlier pricing it struck me as a bit expensive for what you get. I do understand the attraction of being able to see some cool sights with little or no astro/computer knowledge required and if money is less of a concern then maybe it's a good solution. I just feel for anyone trying to get into astronomy this would not be such a good first step. For 3000 euros you could get some really good kit...
  13. If you mean exposure time then you need to adjust the 'Auto Exposure' settings. To set how often images are taken you can use the pause/interval time settings in the 'Image Acquisition' section. If using a pause, the program will wait a fixed amount of time after the current exposure is completed before the next exposure is taken. When using an interval, the program will attempt to start image acquisitions at fixed time intervals, e.g. every 60s (the difference is sketched in code after this list of posts). You won't be able to increase sensitivity, but you can amplify the captured signal by adjusting the gain settings of the camera. There are upsides and downsides to this as far as CMOS sensors are concerned, but I am no expert so it might be best to read up. In general it's best to try to capture more signal, i.e. increase exposure time, however this is limited as stars will start to streak at some point. To make an image 'brighter' you can vary the gain setting or use the 'clip and stretch' settings to stretch the captured image more. Hope this helps!
  14. Hello! The Windows installer has only been added very recently so that is good feedback to look into. I'm a bit surprised though because AllSkEye does not use the registry at all, so if it is related to the installation it must be the installer software itself that's making these entries (and not removing them). It's a widely used commercial installer so it's strange that it should leave entries behind. I will look into it. This has already been added and will be available with the next release. It's also worth mentioning that all the settings are stored in the user's AppData folder. The current uninstall routine will leave this settings file behind (which I realise is not ideal) but I am working on making this optional. There is an issue with this camera where the ASCOM driver incorrectly reports the bit depth of the returned image. This can be overcome by overriding the bit depth in the advanced settings (set it to 16); a small illustration of the issue follows after this list of posts. Although using the ZWO native driver as you are doing now is probably better. Kind Regards, Mike
  15. Have you tried adding them manually to the bad pixel map? There is a function for this in PHD.
  16. Now 'fixed' by adding the 1.01x flattener (which is now included when purchasing the new FSQ85 EDP). Although this won't work when using the reducer.
  17. I think that really depends on what you are trying to achieve. For a quick and 'reasonably good' result that might be true but if you have time to spend and some experience in processing it is most likely the other way round i.e. the mono images will be easier to process because you have the option to process luminance and RGB channels separately which gives you more control and options. In other words I found it quite hard to get a 'really good' image from my OSC camera and it seems much easier with my mono cam.
  18. Same here, got an Intel NUC and a Pegasus Astro Ultimate Powerbox which worked out a fair bit cheaper than the Eagle (and works great). Although I have to say that the Eagle looks like a great piece of kit and offers a few more outputs and attachment options which might come in handy (as well as some pre installed software to manage the system and remote into it).
  19. Or do both at the same time! Last year my portable setup reached a degree of (reliable) automation where after setting it up it would happily run until the morning with little help required (only took 2 years to get there 😀). So now I bought myself a 12 inch Taurus travel dob to enjoy the night sky while the cameras are hoovering up the photons. Very enjoyable!
  20. As mentioned before I'm using the waterproof USB connectors which work well but cost 25 USD for one with a 5m cable. I'm also experimenting with USB to ethernet adapters to be able to increase the distance between camera and computer. These adapter pairs are also not cheap and need additional space but it's easy to run and splice an Ethernet cable.
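A rough back-of-envelope illustration of the luminance side of the question in post 5 above. The numbers are pure assumptions (a luminance filter passing roughly the combined bandwidth of R, G and B, and SNR scaling with the square root of collected photons), not measurements, and they say nothing about how much colour data is enough:

```python
import math

# Purely illustrative assumptions (not measurements):
# - a single colour filter collects 1 unit of photons per hour
# - an L filter passes roughly the combined R+G+B bandwidth, so ~3 units/hour
# - SNR scales with the square root of collected photons (sky-limited case)
photons_per_hour_colour = 1.0
photons_per_hour_lum = 3.0

# Option A: 2h R + 2h G + 2h B, with luminance synthesised from the colour stacks
synthetic_lum = 3 * (2 * photons_per_hour_colour)   # 6 units
# Option B: 3h L + 1h each of R, G and B
dedicated_lum = 3 * photons_per_hour_lum            # 9 units

print("Synthetic L SNR ~", round(math.sqrt(synthetic_lum), 2))   # ~2.45
print("Dedicated L SNR ~", round(math.sqrt(dedicated_lum), 2))   # ~3.0
```

Under these crude assumptions the dedicated 3h of luminance comes out ahead, but option A keeps twice as much colour signal, which is exactly the trade-off the question is about.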
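A small worked example of the optical vs. physical path point from post 6, using the common rule of thumb that a filter pushes the focal plane back by roughly one third of its thickness. All the numbers (required back focus, camera flange distance, filter wheel thickness) are made-up examples:

```python
# Illustrative back-focus calculation; every number here is an example assumption.
required_back_focus = 55.0        # mm, asked for by a typical flattener/reducer
camera_flange_to_sensor = 17.5    # mm, quoted by a hypothetical camera maker
filter_wheel_thickness = 20.0     # mm, physical length of the wheel
filter_glass_thickness = 2.0      # mm of glass in the light path

# Rule of thumb: the filter pushes the focal plane back by ~1/3 of its thickness,
# so the physical train can be that much longer than the quoted back focus.
extra_from_filter = filter_glass_thickness / 3.0

spacer = (required_back_focus + extra_from_filter
          - camera_flange_to_sensor - filter_wheel_thickness)
print(f"Spacer needed: {spacer:.1f} mm")   # ~18.2 mm; go 1-2 mm short and fine-tune
```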
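A minimal sketch of the slew / plate-solve / sync / repeat loop described in posts 10 and 11. The mount, camera and solver calls are hypothetical placeholders, not the API of SGP, EQMod or any particular program:

```python
import math

def separation_arcmin(ra1, dec1, ra2, dec2):
    """Approximate angular separation in arcminutes (small-angle, inputs in degrees)."""
    d_ra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2))
    d_dec = dec1 - dec2
    return math.hypot(d_ra, d_dec) * 60.0

def center_on(target_ra, target_dec, mount, camera, solver,
              tolerance_arcmin=2.0, max_attempts=5):
    """Slew, plate solve, sync and repeat until within tolerance of the target."""
    for _ in range(max_attempts):
        mount.slew_to(target_ra, target_dec)           # hypothetical mount call
        ra, dec = solver.solve(camera.capture(10))     # where did we really end up?
        mount.sync(ra, dec)                            # correct the mount's idea of its position
        if separation_arcmin(ra, dec, target_ra, target_dec) <= tolerance_arcmin:
            return True                                # close enough, stop iterating
    return False                                       # still off target after max_attempts
```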
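A small sketch of the difference between the pause and interval modes mentioned in post 13. The setting names come from the post; the code itself is just an illustration of the two scheduling strategies, not AllSkEye's implementation:

```python
import time

def run_with_pause(take_exposure, pause_s):
    """'Pause' mode: wait a fixed time after each exposure completes,
    so the cadence stretches with the exposure length."""
    while True:
        take_exposure()
        time.sleep(pause_s)

def run_with_interval(take_exposure, interval_s):
    """'Interval' mode: try to start an exposure every interval_s seconds
    (e.g. every 60 s), regardless of how long each exposure takes."""
    next_start = time.monotonic()
    while True:
        take_exposure()
        next_start += interval_s
        wait = next_start - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        # if the exposure overran the interval, the next one starts immediately
```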
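A small illustration of why a wrongly reported bit depth (post 14) spoils the image and what overriding it to 16 effectively fixes. This is not AllSkEye's code, just a sketch of how pixel values are normalised against an assumed maximum:

```python
import numpy as np

def normalise_for_display(raw, bit_depth):
    """Scale raw pixel values into 0..1 using the assumed bit depth."""
    max_value = (1 << bit_depth) - 1
    return raw.astype(np.float64) / max_value

frame = np.array([[0, 30000, 65535]], dtype=np.uint16)
print(normalise_for_display(frame, bit_depth=16))  # correct depth: values stay in 0..1
print(normalise_for_display(frame, bit_depth=12))  # wrong depth: values blow past 1.0
```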