
Jocular: a tool for astronomical observing with a camera



Annotation is AMAZING!

I particularly like the angular scale too.

(BTW, should I be posting elsewhere, so as not to clutter this thread?)


The frame size around the image is for snapshots, so it won't affect this. I've just explored switching off the processing I'm doing; it occasionally causes star extraction artefacts that interfere with platesolving (*), so I need to find a better solution.

 

(*) The processing I'm doing attempts to remove edges that can appear when the subs move between alignments. These edges can make features look like (bright) stars, which are then selected in preference to the real stars, causing occasional platesolving failures. I'm filling things in with a local estimate of the image background, with variance controlled by an estimate of the noise distribution, but it clearly isn't doing the right thing for your images. I'll find a different solution that doesn't involve messing about with the edges. You might think it is just a case of not selecting stars that are too near an edge, but my images at least can move quite a bit, and I also need all the stars I can get for small FOVs 🙂
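For the curious, here's a minimal sketch of the kind of fill I mean, assuming a fixed edge margin and a Gaussian background model (names and details are illustrative, not Jocular's actual code):

```python
import numpy as np

def fill_edges_with_background(img, margin=8):
    """Replace a border of `margin` pixels with draws from an estimate of
    the image background, so alignment edges don't mimic bright stars.
    A sketch only -- not Jocular's actual implementation."""
    interior = img[margin:-margin, margin:-margin]
    level = np.median(interior)                           # robust background level
    noise = 1.4826 * np.median(np.abs(interior - level))  # MAD-based sigma estimate
    filled = img.astype(np.float32, copy=True)
    border = np.ones(img.shape, dtype=bool)
    border[margin:-margin, margin:-margin] = False
    filled[border] = np.random.normal(level, noise, size=border.sum())
    return filled
```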


The angular scale needs a bit more work so it changes to more sensible units for those with larger sensors... 
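(As a sketch of the sort of unit switching I have in mind; the thresholds here are purely illustrative:)

```python
def format_scale(extent_arcsec):
    """Choose sensible units for a scale bar given its angular extent.
    Thresholds are illustrative, not Jocular's actual values."""
    if extent_arcsec >= 7200:        # 2 degrees or more
        return f"{extent_arcsec / 3600:.1f} deg"
    if extent_arcsec >= 120:         # 2 arcminutes or more
        return f"{extent_arcsec / 60:.1f}'"
    return f'{extent_arcsec:.0f}"'
```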

Glad you like the annotation. I assume you've downloaded all the deep catalogues?

Posting here is fine by me!

Martin


This is the result of dropping the OSC image you sent me into the watched folder. There is a user-configurable Bayer pattern which is used to split 'alien' (i.e. non-Jocular-created) subs into RGB. The LRGB module then creates a synthetic luminance automatically to deliver the veritable joys of LAB-colourspace manipulation.
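(For reference, a synthetic luminance is essentially just a combination of the three colour channels; a minimal sketch, with weights that are my assumption rather than Jocular's actual ones:)

```python
import numpy as np

def synthetic_luminance(r, g, b):
    # Equal-weight sum of the colour channels; Jocular's actual
    # weighting may well differ.
    return (r.astype(np.float32) + g + b) / 3
```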

Of course, real-time LAB work is rather painfully slow as you might imagine (move slider, wait 2 seconds, ...). But in principle it is all there now. I think the solution is to bin large images that result from debayering, as you suggest (or get the cores to do some work for a change). There are a few different debayering algorithms available of which I think I chose the slowest for this example, though that is just a once-only cost. I can make the choice of algorithm also user-configurable.
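For anyone curious what the simplest (and fastest) debayering scheme looks like, 'superpixel' debayering just collapses each 2x2 Bayer cell into one RGB pixel, which conveniently also halves the image in each dimension. A sketch assuming an RGGB pattern, not necessarily one of the algorithms on offer here:

```python
import numpy as np

def superpixel_debayer_rggb(raw):
    """Collapse each 2x2 RGGB cell into one RGB pixel. A sketch only;
    halving the image in each dimension also helps with speed."""
    h, w = (d // 2 * 2 for d in raw.shape)        # trim to even dimensions
    raw = raw[:h, :w].astype(np.float32)
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2   # average the two greens
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)
```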

I hope this does not result in a sudden rush of OSC camera users 😉

My only OSC camera is the Lodestar C and that uses a CMYG arrangement....

[Image: result of dropping the OSC sub into the watched folder, 21 Mar 2021]


Again, annotations are brilliant.  I'm detecting all sorts of things that I hadn't previously recognised in images.

One small issue: when searching for the faint fuzzies, I often use the inverted image.  Unfortunately, I can't then read the pop-up info from the catalogues ...

 

[Screenshot: catalogue pop-up hard to read against the inverted image, 21 Mar 2021]


All I can say is that the GUI toolkit I use runs on Linux, Windows, OS X, Android, iOS, and Raspberry Pi. I have no experience at all with iOS so I've no idea if it will work. A priori I doubt it. I don't have an iOS device but the code is on GitHub if someone with iOS experience wants to give it a whirl... 

 

 


Just a note that the latest version (0.4.2) includes experimental support for OSC camera debayering and binning. Thanks to AKB for requesting and testing these features.

I will update the documentation in due course, but if you want to use it right now: to configure debayering, go to the Watcher panel in configuration and choose the Bayer matrix (one of GBRG, RGGB, BGGR or GRBG), or leave it on mono if you don't use a colour sensor. Note that I don't currently support non-RGB schemes (e.g. CMYG). Binning can be selected on the same panel.
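(For anyone wondering what those four strings encode: each just names the colours of the top-left 2x2 cell, from which the position of every colour site follows. A sketch with hypothetical names, not Jocular's internals:)

```python
# Offsets (row, col) of the R, G, G, B sites within the 2x2 Bayer cell,
# keyed by the pattern string; illustrative only.
BAYER_OFFSETS = {
    'RGGB': {'R': (0, 0), 'G1': (0, 1), 'G2': (1, 0), 'B': (1, 1)},
    'BGGR': {'R': (1, 1), 'G1': (0, 1), 'G2': (1, 0), 'B': (0, 0)},
    'GBRG': {'R': (1, 0), 'G1': (0, 0), 'G2': (1, 1), 'B': (0, 1)},
    'GRBG': {'R': (0, 1), 'G1': (0, 0), 'G2': (1, 1), 'B': (1, 0)},
}

def channel(raw, pattern, name):
    """Extract one Bayer channel as a half-resolution plane."""
    dy, dx = BAYER_OFFSETS[pattern][name]
    return raw[dy::2, dx::2]
```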

It is important to understand what will be done to the FITs that your capture program drops into the watched directory: 🙄

1. Debayering will produce 3 new FITs, one each for R, G and B, and these will be incorporated into the Jocular ecosystem. This means they get placed in a directory under captures/session/object, e.g. captures/21_03_23/Messier_42, so you can find them and Jocular can easily reload them in the future. Your original colour FITs will be copied to a subdirectory called 'originals', e.g. captures/21_03_23/Messier_42/originals, so you still have access to them in case you want to use them with other applications.

2. Binning will similarly produce new FITs (just one, not three!), and they get assimilated in the same way, with the original being kept. It would have been possible to apply binning each and every time you reload, i.e. not produce new FITs, but I took this design decision to speed up subsequent reloads. (The bookkeeping for both cases is sketched below.)
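As a rough sketch of that bookkeeping (paths, function names and the astropy calls are my illustration, not Jocular's actual code):

```python
from pathlib import Path
import shutil
import numpy as np
from astropy.io import fits

def assimilate(fits_path, session, obj, factor=2):
    """Sketch of the scheme above: derived sub(s) -- debayered channels or,
    as here, a binned sub -- are written into the capture directory, while
    the original is preserved under 'originals'. Illustrative only."""
    capture_dir = Path('captures') / session / obj   # e.g. captures/21_03_23/Messier_42
    originals = capture_dir / 'originals'
    originals.mkdir(parents=True, exist_ok=True)

    src = Path(fits_path)
    data = fits.getdata(src).astype(np.float32)

    # simple block-average binning; trim so the dimensions divide evenly
    h, w = (d // factor * factor for d in data.shape)
    binned = data[:h, :w].reshape(h // factor, factor,
                                  w // factor, factor).mean(axis=(1, 3))

    fits.writeto(capture_dir / src.name, binned, overwrite=True)   # the new sub
    shutil.copy2(src, originals / src.name)                        # keep the original
```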

On this last point, I want to stress again that Jocular only really works in a comfortable way for relatively small pixel counts (*). I would say it is happy enough up to 1-2 Mpixels. That's why, if you have a large colour or indeed mono sensor, I would encourage you to use 2x2 or even 3x3 binning to get the most out of the tool. Of course, you could bin on camera (and lose the data forever!), or bin in your capture program, or use ROI, but if you want to preserve the full sensor size and benefit from some of Jocular's functionality at the same time, one approach is to get Jocular to do the binning and store your unbinned originals at the same time.

Cheers

Martin

(*) Jocular occupies a different niche than other EEVA applications in that it supports mid-stack 'changes of mind' (forgot to use darks? Click darks and click reprocess. Want to see if median stack combination will get rid of that satellite trail? Click median and it's done -- that kind of thing). Above all, no wasted photons restarting the stack! This is compute- and memory-intensive for large pixel count sensors. The same goes for things like gradient removal or LAB colour combination, which are real-time for small sensors, meaning you can interact directly with the image without going through the intermediate stage of histograms.
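(To make the 'change of mind' point concrete: because all the subs are retained, switching the stack combination is just a recompute over the kept frames. A numpy sketch, not Jocular's internals:)

```python
import numpy as np

def restack(subs, method='mean', dark=None):
    """Recombine the retained subs from scratch -- no photons wasted, but
    compute scales with pixel count. A sketch, not Jocular's internals."""
    frames = np.stack([s - dark if dark is not None else s for s in subs])
    if method == 'median':      # robust to satellite trails
        return np.median(frames, axis=0)
    return frames.mean(axis=0)
```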


So excited to have been able to test out the latest OSC / binning options. This means that I can use a big(-ish) CMOS OSC (or mono) camera really comfortably within Jocular, but still be able to go back and process the full resolution offline. I think this is a real game changer for outreach / online observing sessions, not to mention individual use! Whilst you may lose resolution through binning, you certainly gain in signal-to-noise ratio.

Here's an example from some old data taken on 26-Feb-2019, using a ZWO ASI 294MC camera which has 4,144 × 2,822 pixels (so ~11.7 megapixels). This is just 5 x 60 seconds (although the annotation shows it as 15 x 60 seconds because it's counting R, G and B separately), binned 3x3 to give very comfortably sized ~1.3 megapixel images for Jocular to play with. This is without any calibration frames (I think there's still a bit more work that Martin has to do there).
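(For anyone checking the numbers: 4,144 × 2,822 = 11,694,368 pixels, and binning 3x3 gives roughly 1,381 × 940 ≈ 1.3 million.)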

Hats off (once again) to Martin.

Tony

[Image: Sh2-277, 23 Mar 2021]

On 24/03/2021 at 13:06, AKB said:

This is without any calibration frames (I think there's still a bit more work that Martin has to do there.)

@Martin Meredith

Just checked on GitHub, and it looks like you've fixed the calibration issue now?


I added an extra check a few days back which I'm pretty sure is in the latest PyPI release (0.4.2), so if you do an update without specifying a version it should be OK. I haven't had a lot of time to work through any other ramifications, but I think it will at least not try to debayer calibration frames!

[Edit] Just updated to v0.4.3, which fixes a bug that occurs if you try to load an OSC image in mono mode, and adds an option to choose binning by averaging (as on the camera) rather than by interpolation (which is more accurate, as it handles anti-aliasing).
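(For the curious, the distinction is roughly: averaging means each binned pixel over its n x n cell, as in the earlier sketch, while interpolation-based downscaling applies an anti-aliasing low-pass filter first. A sketch using scikit-image, which is my choice for illustration rather than necessarily what's used here:)

```python
from skimage.transform import rescale

def bin_by_interpolation(img, n=2):
    """Downscale by factor n via interpolation with an anti-aliasing
    pre-filter; contrast with plain n x n block averaging."""
    return rescale(img, 1 / n, anti_aliasing=True)
```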

Martin

3 hours ago, Martin Meredith said:

Just updated to v0.4.3

Where, incidentally, can I find the version number of the installed system?


Not sure it is exposed anywhere. I will add a version option to the command line for the next release. However, if you track down the location of the module on your system, the version is the only line of code in __init__.py.
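(Assuming that single line is a standard __version__ assignment -- my assumption, so check your copy -- you can read both the location and the version from Python:)

```python
import jocular

print(jocular.__file__)     # where __init__.py lives on your system
print(jocular.__version__)  # assumes the one line defines __version__
```

Alternatively, pip show jocular will report the installed version.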

The version will also appear in the title bar.

Martin

