Steve in Boulder

Everything posted by Steve in Boulder

  1. Thanks, Martin! That makes sense. A related question: If I am going to bin 2 on my lights, do I have to bin 2 when I take the calibration frames?
  2. Really looking forward to trying this out! Tooltips are a good idea for new users. I think I'll use the native ASI camera support for my calibration frames, though not for the lights for now, as I'd rather sit inside (i.e. wireless) with my laptop during the winter. Related question: if I do use the native support for calibration frames, but with lights taken separately and going to a watched folder, can I still use the ROI feature?
  3. I don't know very much about image processing; is the following correct? To apply this algorithm, I have to debayer the image first, then apply the algorithm pixel by pixel. Then, to work with Jocular, I'd have to mimic what it does with unprocessed OSC images: create L, R, G, and B image files from the debayered, SCNR-corrected file and feed them to the watched directory. To Jocular, it should then look like these are individual frames coming from a filter wheel.
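For reference, here's my understanding of the per-pixel step as a numpy sketch. This is the average-neutral SCNR variant (green clamped to the mean of red and blue); it's my own reading of one common variant, not necessarily what Jocular would do, and the function name is mine:

```python
import numpy as np

def scnr_average_neutral(rgb):
    """Average-neutral SCNR: clamp the green channel to the mean of
    red and blue wherever green dominates.

    rgb: float array of shape (H, W, 3), already debayered.
    Returns a new array; red and blue are left untouched.
    """
    out = rgb.copy()
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    out[..., 1] = np.minimum(g, 0.5 * (r + b))
    return out
```

After this, splitting `out[..., 0]`, `out[..., 1]`, and `out[..., 2]` into separate files would give the per-channel frames to feed the watched directory.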
  4. For cropping the images, I use the astropy library (which Jocular also uses). Don't know if Lua has anything similar. I can borrow/steal from Jocular code to do de-bayering, at least. Though I think I'd take on green correction first. Do you have any pointers to the algorithms one might want?
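The cropping itself is just array slicing on the FITS data; a minimal sketch with astropy (the `crop_fits` helper and its pixel arguments are my own naming, nothing standard):

```python
from astropy.io import fits

def crop_fits(in_path, out_path, x, y, width, height):
    """Crop a FITS image to a pixel rectangle and write it out.

    (x, y) is the top-left corner in pixel coordinates; the original
    header is carried over (astropy updates the NAXIS keywords).
    """
    with fits.open(in_path) as hdul:
        data = hdul[0].data
        cropped = data[y:y + height, x:x + width]
        fits.writeto(out_path, cropped, header=hdul[0].header,
                     overwrite=True)
```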
  5. Well, now you have someone to share the blame with. I wrote a little Python script to transfer files from the ASIAIR Pro (via SMB) to Jocular's watched directory (the two do not play nicely together in terms of directory and file structure). It does one bit of optional pre-processing: it can crop the files to create a poor man's version of ROI. I might look into adding some more pre-processing ability to it.
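The transfer part boils down to something like this, with the SMB share mounted locally. The function name is mine and the layout is illustrative, not the ASIAIR's actual directory structure:

```python
import shutil
from pathlib import Path

def transfer_new_fits(source_root, watched_dir, seen):
    """Copy FITS files found anywhere under source_root into the flat
    watched_dir, skipping filenames already transferred.

    seen: a set of filenames copied on earlier runs (persist it between
    runs to avoid re-feeding Jocular the same subs).
    Returns the list of newly copied filenames.
    """
    watched_dir = Path(watched_dir)
    watched_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(Path(source_root).rglob("*.fit*")):
        if f.name not in seen:
            shutil.copy2(f, watched_dir / f.name)
            seen.add(f.name)
            copied.append(f.name)
    return copied
```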
  6. Thanks, that all makes sense now. I suppose splitting an OSC file into its components does lose the information that the three components were imaged simultaneously and should align together as is. I do stay well below the weight limit for my mount (AZ Mount Pro), so it could be that the tracking is solid. Or just dumb luck. I’ll have to play with the stack features some more to get a feel for them.
  7. Bad assumption on my part, extrapolating from Unix-style availability. Sorry.
  8. I'm curious about how much the stack features (remove sub, reshuffle) get used. I've tried them with a monochrome camera, and they seem to work well, but I'd probably have to spend extended time with an object to notice significant differences. With an OSC, the number of subs grows by a factor of three, and with LRGB by a factor of four, so the performance degrades correspondingly. I could try more aggressive binning to avoid this. But I wonder if the utility of the stack features outweighs this drawback? I see it would be a major change to the design of Jocular to implement an option to not retain all the subs in memory, but I'm curious about that.
  9. From what I understand (not much), the primary advantage of INDI is its client-server model. This allows it to operate remotely (presumably TCP/IP). I very much like working without a cable to my laptop. Right now, I use the ASIAIR Pro for all imaging (run via Bluestacks and the Android ASIAIR app), and SMB to access the files for Jocular. An INDI server is easy to set up on a RPi with Astroberry, and an INDI client would work nicely with that. Agreed, JSON >> XML for this type of application. If Alpaca supports a client-server model--I see mention at CN of an Alpaca Remote Server--then that would be nice. All that said, I'm currently very happy using the ASIAIR Pro for imaging, so this would be very low priority from this user's point of view. P.S.: Don't forget to set the gain in your simpleastro.py! 🙂
  10. Just a thought -- maybe it would be most expeditious to provide an INDI client interface, and let users run the INDI server appropriate to their system (Win, Mac, RPi). I think I spotted a Python INDI library somewhere.
  11. Never mind, I found the Network Settings widget. Thanks again!
  12. Is there some file I need to edit to change the priorities? I'd like to use the Astroberry WiFi network while in the field and use my home network at, well, home.
  13. Gotcha. Currently my RPi is creating its own WiFi network, called "Astroberry." I can connect to it from my laptop by joining that network, but I lose Internet connectivity. Is there a way to have the RPi join my home network and allow me to connect through that?
  14. Thanks, very helpful! I've just loaded Astroberry onto my RPi and was trying out VNC, but this is clearly a better way. Do I need to do anything on the RPi to configure or start the INDI server? Or do I just plug in all my gear and configure on my laptop from the client side?
  15. Thanks Martin for the response and explanations! From my limited sample, I agree that color balance doesn’t need adjustment, at least not a high priority. Re exporting the settings - I only do EEA as well, on an alt-az mount, but it’s nice to share images with friends and family afterwards. So a little of the AP-wow factor creeps in. I’m excited about getting Jocular running real time, once I figure out Astroberry. Definitely more work than the ASIAIR Pro I’m using now. Thanks for all your work on the tool!
  16. Hi, I've been experimenting with Jocular, just manually dropping FITS files into the watched folder for now. I'm planning to use it with a watched folder on my laptop in tandem with an Astroberry RPi on the mount taking the images. Some random thoughts for what they're worth: Jocular does a great job with the auto blackpoint! I really like being able to switch the stretching algorithm. My platesolving doesn't work until I increase the match threshold to 50 arc-seconds; probably user error, still experimenting. The GIF feature is super cool. Annotations, too. Any chance using the Bottleneck library would speed up stacking a tad? I'm using the software binning to reduce file sizes. It might be nice if the settings for the current stack could be exported and run separately on the original FITS files to produce a full-size image to save for fame, fortune, etc. The LAB sliders (G<->R, B<->Y) are a bit small and fiddly. Eventually, I might use a standalone Python script to retrieve the files from the Astroberry RPi and then preprocess them a bit, say setting a FWHM rejection level, before sending them to the watched folder.
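For anyone curious, the kind of software binning I mean is just block-averaging; a minimal numpy sketch of my own (not Jocular's implementation):

```python
import numpy as np

def bin2x2(img):
    """Software-bin a mono image 2x2 by averaging each 2x2 block.

    Odd trailing rows/columns are trimmed. Output is half the size in
    each dimension, which quarters the file size.
    """
    h = img.shape[0] // 2 * 2
    w = img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```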