Posts posted by Steve in Boulder

  1. 23 minutes ago, Martin Meredith said:

    Hi Steve

    Not at present but if I can implement it I will. One problem is that for various reasons I'm determining ROI during the framing stage, so those captures would need to be done in Jocular. What I want to do is further rationalise the whole binning/ROI stuff so that it is camera-independent (at which point it will also apply to 'watched' captures).

    By the way, the watched folder functionality has changed a little in ways that I will try to document before the month is out. 

    I've only been able to properly test the darks so far so it will be interesting to see how you get on with flats. There is nascent support for choosing between bias and flat-darks, and a placeholder for using a constant a la Siril too. I tend not to use flats myself but when I do I use twilight flats, and I just haven't been around at twilight recently... 

    Good luck!

    Martin

     

     

    Thanks, Martin!  That makes sense.  A related question: If I am going to bin 2 on my lights, do I have to bin 2 when I take the calibration frames?

  2. 25 minutes ago, Martin Meredith said:

    The new release of Jocular is now available. 

    This is a major new version which supports ASI cameras natively across the 3 major operating systems. ROI/binning are supported for ASI cams, and the calibration library applies automatically to ROI-ed images. There are changes to the GUI to incorporate material design widgets, silent logging, easier installation and a host of other small changes including being able to change the location of the watched directory, though nothing major in terms of processing or object databases. Oh, and it has (annoying) tooltips for which the most important information is probably how to switch them off (conf/Appearance...).

    For further details, see:

    https://transpy.eu.pythonanywhere.com/jocular/

    The download (~29M) now comes bundled with a couple of example captures.

    I've developed and tested this on OSX using an ASI 290MM. I haven't been able to test on Windows but in theory it ought to work.

    Please let me know (I'm sure you will!) of any issues 🙂

    Martin

    PS There is also (highly) experimental native support for the SX Ultrastar which I haven't been able to test, so any feedback from Ultrastar users on Macs would be appreciated (pretty sure this won't work on Windows but who's to say). Similarly untested is ASCOM filterwheel control. I'm planning to support ASI filterwheels in the near-ish future.

Really looking forward to trying this out!  Tooltips are a good idea for new users.  I think I'll use the native ASI camera support for my calibration frames, though not for the lights for now, as I'd rather sit inside (i.e., wireless) with my laptop during the winter.  Related question: if I do use the native support for calibration frames, but with lights taken separately and going to a watched folder, can I still use the ROI feature?

  3. 20 hours ago, Martin Meredith said:

    In case it helps, I use the colour_demosaicing module for debayering. It comes down to just 2 lines of code:

    from colour_demosaicing import demosaicing_CFA_Bayer_bilinear
    rgb = demosaicing_CFA_Bayer_bilinear(im, pattern='RGGB')
    

    where im is the 2D image to debayer, and pattern can be any of the usual colour pixel configurations. rgb is a N x M x 3 array.

    For binning/super-pixels as Tony describes you can use either rescale (for non-integer interpolation) or downscale_local_mean (for what we normally understand as binning i.e. downscaling by integer factors)

    from skimage.transform import rescale, downscale_local_mean
    
rescale(im, 1 / binfac, anti_aliasing=True, mode='constant', preserve_range=True, multichannel=False)
    

    or  

    downscale_local_mean(im, (binfac, binfac))
    

    Martin

    Thanks, Martin!

  4. On 05/11/2021 at 11:36, AKB said:

    Lua can link to anything, so I should be able to use that, although you pay the price in footprint of the code.  Lua is tiny, Python is not.

    This is really simple.  The algorithm called SCNR...

    https://www.pixinsight.com/doc/legacy/LE/21_noise_reduction/scnr/scnr.html

    If you invert the image, you can also use this for removing magenta star colour in Hubble palettes, and the like.

I don't know very much about image processing, so is the following correct?  To apply this algorithm, I have to debayer the image first, then apply the algorithm pixel by pixel.  Then, to operate with Jocular, I'd have to mimic what it does with unprocessed OSC images, creating L, R, G, and B image files from the debayered, SCNR-corrected file and feeding them to the watched directory.  To Jocular, it should then look like these are individual frames coming from a filter wheel.
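To make the step above concrete, here is a minimal sketch of the "average neutral" SCNR variant described in the linked PixInsight page, applied to an already-debayered RGB array. The function name and the assumption of a float image in [0, 1] are mine, not Jocular's or PixInsight's.

```python
import numpy as np

def scnr_average_neutral(rgb):
    """Suppress excess green: G' = min(G, (R + B) / 2).

    rgb: float array of shape (H, W, 3), values in [0, 1].
    Returns a corrected copy; R and B are left untouched.
    """
    out = rgb.copy()
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    out[..., 1] = np.minimum(g, (r + b) / 2.0)
    return out

# A green-dominated pixel is pulled down to the R/B average;
# an already-neutral pixel is left alone.
px = np.array([[[0.2, 0.9, 0.4]]])   # one strongly green pixel
corrected = scnr_average_neutral(px)
```

Since the operation is per-pixel and channel-wise, it runs on the whole debayered frame at once as a vectorised NumPy expression; no explicit pixel loop is needed.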

  5. 6 minutes ago, AKB said:

    My language of choice for scripting is, in fact, Lua.  This rather cuts me out of tinkering with the internals of Jocular, but is perfectly fine for separate scripts like this.  Would be interested, nonetheless, in what functionality you may add to your script.

    Tony

    For cropping the images, I use the astropy library (which Jocular also uses).   Don't know if Lua has anything similar.  I can borrow/steal from Jocular code to do de-bayering, at least.  Though I think I'd take on green correction first.  Do you have any pointers to the algorithms one might want?  
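For what it's worth, a centre-crop like the one mentioned here (a "poor man's ROI") comes down to array slicing. This is a sketch under my own assumptions: the helper name and the crop fraction are illustrative, and the astropy usage in the comment shows roughly how it would plug in, not Jocular's actual code.

```python
import numpy as np

def center_crop(data, frac=0.5):
    """Keep the central `frac` of each axis of a 2D image --
    an ROI applied after capture rather than on-camera."""
    h, w = data.shape
    ch, cw = int(h * frac), int(w * frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    return data[top:top + ch, left:left + cw]

# With astropy.io.fits (the library Steve mentions) this would
# plug in roughly as follows (file names are placeholders):
#   from astropy.io import fits
#   with fits.open('light_0001.fits') as hdul:
#       hdul[0].data = center_crop(hdul[0].data, frac=0.5)
#       hdul.writeto('light_0001_roi.fits', overwrite=True)

roi = center_crop(np.zeros((1080, 1920)), frac=0.5)
```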

  6. 1 hour ago, AKB said:

    Er, um, ... I suppose I take the blame, then?

Fully agree that OSC needs a rethink.  For me, the ideal solution would simply be a separate process which watches files created by the camera software (which I'm very happy with remaining separate, partly because I can dither and auto-focus, and you're not going to put that into Jocular for a while!)   The process would then do the debayering (and possibly binning) of choice and spit out the separate files that Jocular needs. This seems to me to be altogether the cleanest approach.  It would also allow, perhaps, a bit of RGB processing which would help with OSC – particularly the suppression of excess green.

    I tried the .dev version recently, but, as you say, it's not complete, so reverted to 0.4.5 (it's SO easy - thanks) and have had great fun with it recently, including the use of flats.  I'll post some recent observations shortly.

    Tony.

Well, now you have someone to share the blame with.   I wrote a little Python script to transfer files from the ASIAIR Pro (via SMB) to Jocular's watched directory (the two do not play nicely together in terms of directory and file structure).  It does one bit of optional pre-processing: it can crop the files to create a poor man's version of ROI.   I might look into adding some more pre-processing ability to it.
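The transfer step of a script like this can be done with the standard library alone: poll an SMB-mounted share and copy any FITS files not seen before into the watched directory. This is a sketch, not the actual script; the paths and polling interval in the comment are assumptions.

```python
import shutil
from pathlib import Path

def sync_new_fits(source: Path, watched: Path, seen: set) -> list:
    """Copy not-yet-transferred FITS files from `source` (e.g. an
    SMB-mounted ASIAIR share) into Jocular's watched directory.
    `seen` tracks filenames already copied across polls."""
    copied = []
    for f in sorted(source.glob('*.fit*')):   # matches .fit and .fits
        if f.name not in seen:
            shutil.copy2(f, watched / f.name)
            seen.add(f.name)
            copied.append(f.name)
    return copied

# Illustrative polling loop (both paths are placeholders):
#   import time
#   seen = set()
#   while True:
#       sync_new_fits(Path('/Volumes/ASIAIR/Autorun/Light'),
#                     Path.home() / 'jocular_watched', seen)
#       time.sleep(2)
```

Any per-file pre-processing (cropping, debayering) would slot in just before the `shutil.copy2` call.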

  7. 8 hours ago, Martin Meredith said:

    I use the remove from stack feature all the time and I believe others do too. Maybe you have excellent tracking and no wind gusts?

    Reshuffle is particularly useful to solve stack alignment 'failures'. It comes into its own with LRGB or narrowband+L where there are far fewer stars on e.g. the B subs.

    The 70-80-90-median are also useful to get rid of satellites etc

    The OSC thing was really just an add-on requested feature from one user to get things going quickly. Jocular wasn't really designed with this in mind. Handling OSC requires a rethink...

    cheers

    Martin

    Thanks, that all makes sense now.  I suppose splitting an OSC file into its components does lose the information that the three components were imaged simultaneously and should align together as is.

    I do stay well below the weight limit for my mount (AZ Mount Pro), so it could be that the tracking is solid.  Or just dumb luck.  I’ll have to play with the stack features some more to get a feel for them. 

  8. I'm curious about how much the stack features (remove sub, reshuffle) get used.  I've tried them with a monochrome camera, and they seem to work well, but I'd probably have to spend extended time with an object to notice significant differences.  

With an OSC, the number of subs grows by a factor of three, and with LRGB by a factor of four, so the performance degrades correspondingly.  I could try more aggressive binning to avoid this.  But I wonder if the utility of the stack features outweighs this drawback?  I see it would be a major change to the design of Jocular to implement an option to not retain all the subs in memory, but I'm curious about that.

  9. 1 hour ago, Martin Meredith said:

    Hi Steve

    I looked into pyindi but it is very much Linux-oriented (last time I looked there was a library required that would need rebuilding for OSX).

    In fact, I looked at ASCOM, INDI and ALPACA over the summer (from a coding point of view). Personal opinions: I like INDI least. ASCOM was much more 'obvious' but not multi-platform. Both ASCOM and INDI are based on an outdated and verbose protocol (XML), for which reason I think JSON-based ALPACA is the way to go.

    But from an application programmer's perspective, what is actually required (in my opinion) is a 'simpleastro.py' that implements the most commonly-used 90% of functionality (move filter wheel, expose for N secs, set temperature, move mount to X, Y; sync mount, and little else) in a way that is cross-platform. This would be a wrapper around some of the above. Those applications that are cross-platform (there are not many of those!) either do this anyway, or transform everything into e.g. ALPACA.

    Martin

     

    Added in edit: you mention a win-based INDI server. I heard there was such a development some time ago but is it actually realised/maintained/working?

    From what I understand (not much), the primary advantage of INDI is its client-server model.  This allows it to operate remotely (presumably TCP/IP).  I very much like working without a cable to my laptop.  Right now, I use the ASIAIR Pro for all imaging (run via Bluestacks and the Android ASIAIR app), and SMB to access the files for Jocular.   An INDI server is easy to set up on a RPi with Astroberry,  and an INDI client would work nicely with that.  

    Agreed, JSON >> XML for this type of application.  If Alpaca supports a client-server model--I see mention at CN of an Alpaca Remote Server--then that would be nice.  

    All that said, I'm currently very happy using the ASIAIR Pro for imaging, so this would be very low priority from this user's point of view.  

P.S.:  Don't forget set_gain in your simpleastro.py!  🙂
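The 'simpleastro.py' Martin describes is hypothetical, but the "commonly-used 90%" surface he lists (plus the gain setter from the P.S.) could be sketched as an abstract interface, with concrete backends wrapping ASCOM, INDI or Alpaca underneath. Every name below is illustrative, not an existing API.

```python
from abc import ABC, abstractmethod

class SimpleAstro(ABC):
    """Hypothetical cross-platform wrapper sketched from the thread.
    Concrete subclasses would translate these calls to ASCOM, INDI
    or Alpaca; application code sees only this surface."""

    @abstractmethod
    def expose(self, seconds: float): ...          # expose for N secs

    @abstractmethod
    def set_gain(self, gain: int): ...             # per the P.S. above

    @abstractmethod
    def set_temperature(self, celsius: float): ...

    @abstractmethod
    def move_filter_wheel(self, position: int): ...

    @abstractmethod
    def slew(self, ra: float, dec: float): ...     # move mount to X, Y

    @abstractmethod
    def sync(self, ra: float, dec: float): ...     # sync mount
```

The design choice here is the one Martin argues for: applications code against the small interface, and the protocol wars (XML vs JSON, platform support) are confined to the backend subclasses.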

  10. On 31/10/2021 at 11:13, Martin Meredith said:

    I'm afraid not. What OS are you running? It is most likely that I'll provide native support for Ultrastar for Macs in the next release if someone is willing to test it.

    Martin

    Just a thought -- maybe it would be most expeditious to provide an INDI client interface, and let users run the INDI server appropriate to their system (Win, Mac, RPi).  I think I spotted a Python INDI library somewhere.  

  11. 1 hour ago, CedricTheBrave said:

    easiest way is to use a monitor and keyboard on the rpi and just connect to the local wifi

make sure you either remove the hotspot setup or change its priority to stop it being the preferred network on boot

     

    Is there some file I need to edit to change the priorities?  I'd like to use the Astroberry WiFi network while in the field and use my home network at, well, home.  
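On stock Raspberry Pi OS the file in question is usually /etc/wpa_supplicant/wpa_supplicant.conf, where each known network carries a priority field (higher wins when several are in range). Note this is a general Raspberry Pi answer; Astroberry may manage its hotspot through NetworkManager instead, so check its own documentation. The SSIDs and passphrases below are placeholders.

```
# /etc/wpa_supplicant/wpa_supplicant.conf
# Higher "priority" wins when several known networks are in range.
network={
    ssid="HomeNetwork"         # placeholder: your home SSID
    psk="home-passphrase"      # placeholder
    priority=10                # preferred at home
}
network={
    ssid="FieldNetwork"        # placeholder: in-the-field SSID
    psk="field-passphrase"     # placeholder
    priority=1                 # fallback in the field
}
```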

  12. 8 minutes ago, Stuart1971 said:

Make sure your PC and RPi are on the same network, then in the Ekos profile on your Windows PC, select “remote” and put “astroberry.local” in the box, and save. Then when you start the profile and connect in Ekos, it will know where the INDI server is and use it as if it was all on the same PC….👍🏼

    Gotcha.  Currently my RPi is creating its own WiFi network, called "Astroberry."  I can connect to it from my laptop by joining that network, but I lose Internet connectivity.  Is there a way to have the RPi join my home network and allow me to connect through that?

  13. 5 hours ago, Stuart1971 said:

    No, you have this all wrong…

    You load Kstars / Ekos on your PC

You have an RPi with the INDI server, or easier, just put the free Astroberry on; it includes all you need

    Then open Ekos on your PC and create a user profile with all your kit

Make sure you have the RPi server address in the box, so that would be astroberry.local, and click “remote” so it knows to look on your network for the server

Then save and close and just use the Kstars / Ekos software on your PC ….. simples…. and takes 5 mins to set up…

    Thanks, very helpful!  I've just loaded Astroberry onto my RPi and was trying out VNC, but this is clearly a better way.  Do I need to do anything on the RPi to configure or start the INDI server?  Or do I just plug in all my gear and configure on my laptop from the client side?

  14. Thanks Martin for the response and explanations!  From my limited sample, I agree that color balance doesn’t need adjustment, at least not a high priority.  Re exporting the settings -  I only do EEA as well, on an alt-az mount, but it’s nice to share images with friends and family afterwards.  So a little of the AP-wow factor creeps in. 
     

    I’m excited about getting Jocular running real time, once I figure out Astroberry.  Definitely more work than the ASIAIR Pro I’m using now.  Thanks for all your work on the tool!

  15. Hi,

I've been experimenting with Jocular, just manually dropping FITS files into the watched folder for now.   I'm planning to use it with a watched folder on my laptop in tandem with an Astroberry RPi on the mount taking the images.  Some random thoughts for what they're worth:

    • Jocular does a great job with the auto blackpoint!  I really like being able to switch the stretching algorithm.
    • My platesolving doesn't work until I increase the match threshold to 50 arc-seconds.  Probably user error, still experimenting.  
    • The GIF feature is super cool.  Annotations, too. 
    • Any chance using the Bottleneck library would speed up stacking a tad?
    • I'm using the software binning to reduce file sizes.  It might be nice if the settings for the current stack could be exported and run separately on the original FITs to produce a full-size image to save for fame, fortune, etc.  
    • The LAB sliders (G<->R, B<->Y) are a bit small and fiddly.  

    Eventually, I might use a standalone Python script to retrieve the files from the Astroberry RPi and then preprocess them a bit, say setting a FWHM rejection level, before sending them to the watched folder.  
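On the Bottleneck bullet above: whether it would speed up Jocular's hot path is an open question, but the shape of the idea is simple, since Bottleneck's C-optimised reductions are drop-in replacements for NumPy's nan-aware ones. A hedged sketch of a median stack combine with a graceful fallback (the function and the NaN-rejection convention are my own, not Jocular's):

```python
import numpy as np

try:
    import bottleneck as bn
    nanmedian = bn.nanmedian          # C implementation
except ImportError:                   # fall back to plain NumPy
    nanmedian = np.nanmedian

def median_stack(subs):
    """Median-combine a list of 2D subs along the stack axis.
    Rejected pixels can be flagged as NaN beforehand so that
    satellite trails and hot pixels drop out of the combine."""
    cube = np.stack(subs, axis=0).astype(float)
    return nanmedian(cube, axis=0)

# Three tiny 'subs', one with a satellite-like pixel marked NaN:
subs = [np.full((2, 2), 1.0), np.full((2, 2), 2.0), np.full((2, 2), 3.0)]
subs[2][0, 0] = np.nan
stacked = median_stack(subs)          # the median ignores the NaN
```

Bottleneck shines mostly on nan-aware reductions like this; for plain means over a stack, NumPy is already close to optimal.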
