
Jocular: a tool for astronomical observing with a camera



The frame size around the image is for snapshots, so it won't affect this. I've just explored switching off the processing I'm doing, since it occasionally causes star-extraction artefacts that interfere with platesolving (*), so I need to find a better solution.

 

(*) The processing I'm doing attempts to remove edges that can appear when the subs shift between alignments. These edges can look like (bright) stars, which are then selected in preference to the real things, causing occasional platesolving failures. I'm filling things in with a local estimate of the image background, with variance controlled by an estimate of the noise distribution, but it clearly isn't doing the right thing for your images. I'll find a different solution which doesn't involve messing about with the edges. You might think it is just a case of not selecting stars that are too near an edge, but my images at least can move quite a bit, and I also need all the stars I can get for small FOVs 🙂
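For the curious, here's a minimal sketch of the kind of fill I'm describing, assuming NumPy arrays; the function name is made up and a single global median/MAD stands in for the local background and noise estimates:

import numpy as np

def fill_edges(sub, edge_mask, rng=None):
    # Illustrative sketch only: replace the blank strips left after alignment
    # shifts with a background estimate plus noise, so they no longer present
    # bright star-like edges to the star extractor.
    rng = np.random.default_rng() if rng is None else rng
    interior = sub[~edge_mask]
    bg = np.median(interior)                             # background level
    sigma = 1.4826 * np.median(np.abs(interior - bg))    # robust noise estimate (MAD)
    filled = sub.copy()
    filled[edge_mask] = rng.normal(bg, sigma, size=int(edge_mask.sum()))
    return filled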


The angular scale needs a bit more work so it changes to more sensible units for those with larger sensors... 

Glad you like the annotation. I assume you've downloaded all the deep catalogues?

Posting here is fine by me!

Martin


This is the result of dropping the OSC image you sent me into the watched folder. There is a user-configurable Bayer pattern which is used to split 'alien' (i.e. non-Jocular-created) subs into RGB. The LRGB module then creates a synthetic luminance automatically to deliver the veritable joys of LAB-colourspace manipulation.

Of course, real-time LAB work is rather painfully slow as you might imagine (move slider, wait 2 seconds, ...). But in principle it is all there now. I think the solution is to bin large images that result from debayering, as you suggest (or get the cores to do some work for a change). There are a few different debayering algorithms available of which I think I chose the slowest for this example, though that is just a once-only cost. I can make the choice of algorithm also user-configurable.
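To give a flavour of the idea (this is not Jocular's actual code, and the function name is made up), a superpixel-style split of an RGGB sub into R, G and B planes can be as simple as:

import numpy as np

def split_bayer(sub, pattern='RGGB'):
    # Illustrative sketch: take the pixels at each Bayer offset and average
    # the two green sites. Each 2x2 cell becomes one output pixel, so the
    # result is effectively binned 2x2 relative to the raw frame.
    offsets = [(c, (i // 2, i % 2)) for i, c in enumerate(pattern)]
    planes = []
    for colour in 'RGB':
        cells = [sub[r::2, c::2] for ch, (r, c) in offsets if ch == colour]
        planes.append(np.mean(cells, axis=0))
    return planes  # [R, G, B]

Full-resolution debayering algorithms interpolate the missing colours at every pixel instead, which is where the extra cost comes in.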

I hope this does not result in a sudden rush of OSC camera users 😉

My only OSC camera is the Lodestar C and that uses a CMYK arrangement....

[Attached image: 21 Mar 21, 20:33]


Again, annotations are brilliant.  I'm detecting all sorts of things that I hadn't previously recognised in images.

One small issue: when searching for the faint fuzzies, I often use the inverted image.  Unfortunately, I can't then read the pop-up info from the catalogues ...

 

[Attached screenshot: 21 Mar 2021, 22:09]


All I can say is that the GUI toolkit I use runs on Linux, Windows, OS X, Android, iOS, and Raspberry Pi. I have no experience at all with iOS so I've no idea if it will work. A priori I doubt it. I don't have an iOS device but the code is on GitHub if someone with iOS experience wants to give it a whirl... 

 

 


Just a note that the latest version (0.4.2) includes experimental support for OSC camera debayering and binning. Thanks to AKB for requesting and testing these features.

I will update the documentation in due course, but if you want to use it right now: to configure debayering, go to the Watcher panel in configuration and choose the Bayer matrix (one of GBRG, RGGB, BGGR or GRBG), or leave it on mono if you don't use a colour sensor. Note that I don't currently support non-RGB schemes (i.e. CMYK). Binning can be selected on the same panel.

It is important to understand what will be done to your FITs that your capture program drops into the watched directory:🙄

1. Debayering will produce 3 new FITs, one for each of RGB, and these will get incorporated into the Jocular ecosystem. This means they get placed in a directory under captures/session/object e.g. captures/21_03_23/Messier_42, so you can find them and Jocular can easily reload them in the future. Your original colour FITs will be copied to a subdirectory called 'originals' e.g. captures/21_03_23/Messier_42/originals, so you still have access to them in case you want to use them with other applications. 

2. Binning will similarly produce new FITs (just one, not three!), and they get assimilated in the same way, with the original being kept. It would have been possible to apply binning each and every time you reload (i.e. not produce new FITs), but I took this design decision to speed up subsequent reloads.
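A rough sketch of the filing scheme described in points 1 and 2 (the paths, the function name and the use of astropy here are purely illustrative, not Jocular's internals):

from pathlib import Path
import shutil
from astropy.io import fits

def file_alien_colour_sub(fits_path, session='21_03_23', obj='Messier_42'):
    # Illustrative sketch: keep the original OSC FIT in an 'originals'
    # subdirectory and write one new FIT per channel into the session/object
    # directory so they can easily be reloaded later.
    dest = Path('captures') / session / obj
    originals = dest / 'originals'
    originals.mkdir(parents=True, exist_ok=True)
    shutil.copy(fits_path, originals)

    data = fits.getdata(fits_path).astype(float)
    for channel, plane in zip('RGB', split_bayer(data)):   # split_bayer as sketched earlier
        fits.writeto(dest / f'{Path(fits_path).stem}_{channel}.fit',
                     plane, overwrite=True)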

On the binning point, I want to stress again that Jocular only really works in a comfortable way for relatively small pixel counts (*). I would say it is happy enough up to 1-2 Mpixels. That's why, if you have a large colour or indeed mono sensor, I would encourage you to use 2x2 or even 3x3 binning to get the most out of the tool. Of course, you could bin on camera (and lose the data for ever!), or bin in your capture program, or use ROI, but if you want to preserve the full sensor size and benefit from some of Jocular's functionality at the same time, one approach is to get Jocular to do the binning and store your unbinned originals at the same time.

Cheers

Martin

(*) Jocular occupies a different niche than other EEVA applications in that it supports mid-stack 'changes of mind' (forgot to use darks? Click darks and click reprocess. Want to see if median stack combination will get rid of that satellite trail? Click median and it's done -- that kind of thing). Above all, no wasted photons restarting the stack! This is compute and memory intensive for large pixel count sensors. The same goes for things like gradient removal or LAB colour combination, which are real time for small sensors, meaning you can interact directly with the image without going through the intermediate stage of histograms.
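For anyone wondering what a mid-stack 'change of mind' amounts to computationally, here's a toy sketch (not Jocular's code; the names are made up): the aligned subs are retained so the stack can simply be recomputed with different settings, which is cheap at ~1 Mpixel but adds up quickly for big sensors.

import numpy as np

def recombine(aligned_subs, method='mean', master_dark=None):
    # Toy sketch: rebuild the stack on demand, e.g. switch to median to
    # reject a satellite trail, or apply darks after the event. Holding all
    # the aligned subs is what gets expensive for large pixel counts.
    stack = np.stack(aligned_subs).astype(float)
    if master_dark is not None:
        stack -= master_dark
    if method == 'median':
        return np.median(stack, axis=0)
    return stack.mean(axis=0)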


So excited to have been able to test out the latest OSC / binning options. This means that I can use a big(-ish) CMOS OSC (or mono) camera really comfortably within Jocular, but still be able to go back and process the full resolution offline. I think this is a real game changer for outreach / online observing sessions, not to mention individual use! Whilst you may lose resolution through binning, you certainly gain in signal-to-noise ratio.

Here's an example from some old data taken on 26-Feb-2019, using a ZWO ASI 294MC camera, which has 4,144 × 2,822 pixels (so ~11.7 megapixels). This is just 5 x 60 seconds (although the annotation shows it as 15 x 60 seconds because it's counting R, G and B separately) and binned 3x3, giving very comfortably sized ~1.3 megapixel images for Jocular to play with. This is without any calibration frames (I think there's still a bit more work that Martin has to do there).
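For anyone checking the figures, the arithmetic is simply (nothing Jocular-specific here):

w, h = 4144, 2822
print(w * h / 1e6)                 # ~11.7 Mpixel unbinned
print((w // 3) * (h // 3) / 1e6)   # ~1.3 Mpixel after 3x3 binning
print(5 * 3)                       # 5 subs x 3 colour channels = 15 as annotated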

Hats off (once again) to Martin.

Tony

[Attached image: Sh2-277, 23 Mar 21]


I added an extra check a few days back which I'm pretty sure is in the latest PyPI release (0.4.2), so if you do an update without mentioning the version name it should be ok. I haven't had a lot of time to work through any other ramifications, but I think it will at least not try to debayer calibration frames!

[Edit] Just updated to v0.4.3, which fixes a bug that occurs if you try to load an OSC image in mono mode, and adds an option to choose binning by averaging (as on the camera) rather than by interpolation (which is more accurate, as it handles antialiasing).
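To illustrate the difference between the two binning options (this is just a sketch of the general techniques, not Jocular's implementation; the scikit-image call is an assumption on my part):

import numpy as np
from skimage.transform import rescale

def bin_by_average(sub, b=2):
    # Plain b x b block averaging, like binning on the camera.
    h, w = (sub.shape[0] // b) * b, (sub.shape[1] // b) * b
    return sub[:h, :w].reshape(h // b, b, w // b, b).mean(axis=(1, 3))

def bin_by_interpolation(sub, b=2):
    # Anti-aliased, interpolation-based downscale.
    return rescale(sub, 1.0 / b, anti_aliasing=True, preserve_range=True)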

Martin


Not sure it is exposed anywhere. I will add a version option to the command line for the next release. However, if you track down the location of the module on your system, the version is the only line of code in __init__.py.
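In the meantime, a quick check from Python (assuming __init__.py just defines __version__, as described):

import jocular
print(jocular.__version__)   # e.g. 0.4.3
print(jocular.__file__)      # where the module lives on your system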

The version will also appear in the title bar.

Martin



Not that anyone in their right mind would want to, but if you do want to use Jocular on the Raspberry Pi it is possible.

As I have recently got hold of an RPi4 and Astroberry, I tried it out. The first thing to note is that the default Python on the RPi4 is Python 2, so it is necessary to do the install using

python3 -m pip install jocular

Then, there are some additional steps to handle the GUI toolkit which you can either read about here https://kivy.org/doc/stable/installation/installation-rpi.html#install-source-rpi or just do the following:

sudo apt update

sudo apt install pkg-config libgl1-mesa-dev libgles2-mesa-dev \
   libgstreamer1.0-dev \
   gstreamer1.0-plugins-{bad,base,good,ugly} \
   gstreamer1.0-{omx,alsa} libmtdev-dev \
   xclip xsel libjpeg-dev

 

sudo apt install libsdl2-dev libsdl2-image-dev libsdl2-mixer-dev libsdl2-ttf-dev

Finally, you need to add this to your .profile file in your home directory:

PATH="/home/astroberry/.local/bin:$PATH"

(this last step shouldn't be necessary but apparently is...).

Martin

 



Martin,

Can confirm it works for me, although I had to edit a file to stipulate which version of Python, as mine kept coming back as ver 2.75.

Andy


Ah, yes you're right. I forgot I had to do that too. RPi now resting for the day... but I think this is what did the trick

update-alternatives --install /usr/bin/python python /usr/bin/python3.7 1

from here:

https://linuxconfig.org/change-default-python-version-on-raspbian-gnu-linux


Thanks for trying it out Andy. BTW I'm just working on native support for ASI cameras and ASCOM so hope to have some news on that soonish.

Martin



Martin, I love your cloaking of the text to test the brain, black on black. What I did was edit

nano ~/.bashrc

then added at the end

alias python='/usr/bin/python3'
alias pip=pip3

1 minute ago, Martin Meredith said:

Sorry! I use the white background theme so it looks great to me 🙂

Guessing this is just an issue with cut & paste of code...

BTW yours is now white-on-white for me 😉

As a non-Linux user I think I prefer your solution.

Thanks for alerting me!

Martin

Martin,

Whatever floats your boat. I flit between Windows and Linux, no harm done. Great job by the way. Kudos to you.

Andy



Here it is wall-to-wall sun 🙂 (plus moon soon) but all my friends in the UK are complaining about rain...

BTW the next version which will be out in a month or so will contain support for ASCOM cameras, filterwheels (and perhaps mounts), in case that is of any use to you. I recently acquired a Windows machine and a RPi to test this stuff on so now I am utterly confused by different OSs and keyboard layouts.... I'm also hoping to support ASI cameras natively but don't have one to test.

Martin

 

