
Jocular: a tool for astronomical observing with a camera



14 hours ago, PatG said:

In preparation for downloading Jocular, I have checked my Windows 10 laptop and don't have Python installed. I see from the Jocular website that I need to download Python v3.6-3.9, but not v3.10.

On the Python website, I'm looking at the 3.9.12 download, which is the latest 3.9 release. I'm not sure which batch of files to download, as there seem to be many, and I want to make sure I am downloading the correct pack. I'm looking at this webpage.

Python Releases for Windows | Python.org

I'm also unsure whether I need to download a 32-bit or 64-bit version.

Advice/guidance much appreciated.

Thanks 

Pat

Hi Pat

Feel free to PM me if you come across any specific issues. Once you get Jocular running you'll find the logs in the joculardata directory, and these are most useful for me in spotting any problems.

Good luck

Martin


 Martin, I plan to start experimenting with the narrowband features soon.  A couple questions:

  • How is the palette determined?  Is there something to configure in the JSON file?
  • Can one do, for example, an L + Ha stack, pause the stack, switch to L + OIII, and resume stacking (perhaps just adding OIII subs)?

1 hour ago, Steve in Boulder said:

 Martin, I plan to start experimenting with the narrowband features soon.  A couple questions:

  • How is the palette determined?  Is there something to configure in the JSON file?
  • Can one do, for example, an L + Ha stack, pause the stack, switch to L + OIII, and resume stacking (perhaps just adding OIII subs)?

Forgot one!  Does Jocular expect a naming convention for the narrowband subs, à la the RGB subs?


1 hour ago, Steve in Boulder said:

 Martin, I plan to start experimenting with the narrowband features soon.  A couple questions:

  • How is the palette determined?  Is there something to configure in the JSON file?
  • Can one do, for example, an L + Ha stack, pause the stack, switch to L + OIII, and resume stacking (perhaps just adding OIII subs)?

There is currently no palette, i.e. no HSO/SHO etc.; it isn't implemented yet. I need to think about the best way to do this.

The only thing that is implemented is L + narrowband, but that is a single narrowband channel, not multiple, i.e. no L+Ha+OIII.

Something to add to the to do list...


18 minutes ago, Steve in Boulder said:

Forgot one!  Does Jocular expect a naming convention for the narrowband subs, à la the RGB subs?

At present, unless defined in the FITS header, the only narrowband supported in the watched folder is Ha, and the name has to start with 'ha' or end with '_ha' (in any mix of upper/lower case). I can (extremely easily) add support for the other channels; it's just that nobody has done anything with narrowband via the watched folder yet, and I didn't notice their absence since I use direct camera control myself....
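To make the convention concrete, here's a minimal sketch of a filename test along those lines (illustrative only, not Jocular's actual code); the channels beyond 'ha' are the hypothetical extension under discussion:

import re

# Return the channel whose name the file stem starts with, or ends with
# after an underscore, in any mix of case, per the convention above.
# Channels other than 'ha' are hypothetical additions, not current behaviour.
def detect_channel(stem, channels=('ha', 'oiii', 'sii')):
    for ch in channels:
        if re.match(rf'(?i)^{ch}|.*_{ch}$', stem):
            return ch
    return None

print(detect_channel('M51_OIII'))  # -> 'oiii'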

 


2 minutes ago, Steve in Boulder said:

Ah, I was presuming there would be a non-grayscale color assigned to the narrowband.  Otherwise one could just stack L and Ha subs as mono.  

There is (red). What I mean is that there is no 'palette' for multiple narrowband channels (i.e. no HOS/SHO/HOO etc.). You end up with something like this -- great if you like pink, but I think it needs more consideration...

[attached image: an L + Ha combination rendered in red/pink]


This may be, and likely is, a stupid idea, but could Jocular simply read a palette color from its configuration file for each narrowband?  Then use combinations of narrowband filters as I suggested above.  That is, stack L + Ha subs, then switch to L + OIII and continue stacking, with each narrowband sub assigned its configured palette color.  That could implement a multi-narrowband feature with minimal effort.  Unless it’s a stupid idea. 
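For what it's worth, a minimal sketch of the configuration side of that idea; the layout and channel colours here are entirely hypothetical, not Jocular's actual schema:

import json

# hypothetical palette config: each narrowband channel gets an RGB triple
PALETTE_JSON = '''
{
    "ha":   [1.0, 0.0, 0.0],
    "oiii": [0.0, 1.0, 0.5],
    "sii":  [1.0, 0.5, 0.0]
}
'''

palette = json.loads(PALETTE_JSON)
r, g, b = palette['oiii']  # colour to assign to incoming OIII subs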


12 minutes ago, Steve in Boulder said:

This may be, and likely is, a stupid idea, but could Jocular simply read a palette color from its configuration file for each narrowband?  Then use combinations of narrowband filters as I suggested above.  That is, stack L + Ha subs, then switch to L + OIII and continue stacking, with each narrowband sub assigned its configured palette color.  That could implement a multi-narrowband feature with minimal effort.  Unless it’s a stupid idea. 

What starlightlive did was map 3 narrowband channels to RGB in a flexible way. What I'm not sure about is what happens in LRGB when you have, say, L+Ha+OIII. It would indeed be easy to map to RGB -- I'm just not sure what the right thing to do is when one only has one or two narrowband channels...


2 hours ago, Martin Meredith said:

What starlightlive did was map 3 narrowband channels to RGB in a flexible way. What I'm not sure about is what happens in LRGB when you have, say, L+Ha+OIII. It would indeed be easy to map to RGB -- I'm just not sure what the right thing to do is when one only has one or two narrowband channels...

I don't know how LAB color space affects this, or the addition of the L subs, but as a user I think I'd want to be able to specify what color (in RGB terms) each narrowband should appear as. So if I wanted to see Ha as red and OIII as green, to easily tell the signals apart, that's what I'd get.
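A sketch of that user-specified mapping, assuming each stacked narrowband channel arrives as a normalised 2D array and setting aside the L/LAB question (again illustrative, not Jocular's code):

import numpy as np

def compose(channels, colours):
    # accumulate each narrowband channel into RGB using its chosen colour
    h, w = next(iter(channels.values())).shape
    rgb = np.zeros((h, w, 3))
    for name, im in channels.items():
        rgb += im[..., None] * np.asarray(colours[name])
    return np.clip(rgb, 0.0, 1.0)

# e.g. Ha as red, OIII as green, per the preference above:
# rgb = compose({'ha': ha, 'oiii': oiii}, {'ha': (1, 0, 0), 'oiii': (0, 1, 0)})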


I did a little experiment to see if it's reasonable to reduce the total time spent on RGB subs when using LRGB mode, the theory being that L gives the most detail and RGB is there for flavoring.  Here are three views of M51 (please bear with my choices for stretching, sharpening, noise reduction, and saturation, even if not to your taste!).  They seem pretty close to me.  Maybe this wouldn't work as well for other sorts of targets.

 

 

Messier 51 13Apr22_11_08_43.jpg

Messier 51 13Apr22_11_17_26.jpg

Messier 51 13Apr22_11_21_25.jpg


Interesting sequence of images. I see some differences, but that might be due more to the stretch, NR etc. than to the estimated RGB values. I find gamma or asinh works best for colour.

For bright objects (and bright regions of objects), the SNR is going to be reasonable to start with, so the RGB values estimated from even a single short sub in each of R, G, and B might be close enough to their 'true' values to produce decent colour, with the detail provided by the luminosity data. For fainter regions the RGBs will be noisier, leading to less veridical colour rendition. I've noticed that for low surface brightness PNs the colour noise can be quite bad (and for many galaxies too).

You can probably observe this effect in open clusters, where the bright members need very little RGB while the fainter ones need more. 

By default, Jocular bins the RGB channels 2x2 to increase SNR -- perhaps I should introduce more extensive binning?
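For reference, a minimal sketch of the kind of mean-binning operation involved (not Jocular's actual implementation); averaging an n x n block reduces the standard deviation of uncorrelated noise by a factor of n, at the cost of colour resolution:

import numpy as np

def binned(im, n=2):
    # mean-bin a 2D image n x n, cropping any ragged edge first
    h, w = (im.shape[0] // n) * n, (im.shape[1] // n) * n
    return im[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))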

 

 


Yes, the noise reduction from color binning occurred to me when thinking about this approach.  I tend to like a punchy (high-contrast) image, thus the hyperstretch (for brightness) and bg slider at maximum (i.e. the - mark).  Here's the first one with asinh stretch but otherwise similar settings.  

Messier 101 14Apr22_07_34_26.jpg
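For anyone following along, the asinh stretch in its usual form, assuming data normalised to [0, 1]; the softening parameter a is illustrative:

import numpy as np

def asinh_stretch(im, a=0.1):
    # smaller a stretches the faint end harder
    return np.arcsinh(im / a) / np.arcsinh(1.0 / a)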


I've just uploaded a new version (0.5.6) to the Python package repository. Main changes/improvements:

  • [speedup] star extraction is now significantly faster; this will be noticeable particularly for those using larger pixel count sensors, and anyone who is loading previous captures will benefit; the degree of speedup can be controlled using the binning option in the conf/aligner settings screen
  • [speedup] denoising (formerly known as TNR) is significantly faster; the degree of speedup is selected by choosing the degree of binning on the conf/Monochrome settings panel
  • [enhancement] sliders (e.g. white point) previously triggered continuous display updates; for larger sensors this resulted in laggy behaviour. Thanks to Steve in Boulder, there is now the option to update only when the slider is released. This is achieved by double-clicking the relevant slider, and the behaviour persists until the slider is double-clicked again. There is also a configuration option on conf/View that allows all sliders (except zoom and annotate) to update on touch-up by default (restarting Jocular is required for this to take effect); see the sketch after this list
  • [GUI change] rather than toggling the display of information panels via the eyepiece ring, there is now a separate 'info' screen that is accessed via its own 'info' icon; this frees up space on the eyepiece ring and allows for future expansion of information panels (e.g. FITS info)
  • [GUI change] the sliders that control sharpness, denoising, gradient removal and background have now been regrouped and renamed; the light touch noise reduction slider has also been moved and continues to be called 'nr'
  • [enhancement] in addition to automatic blackpoint detection, there is now automatic whitepoint detection. This is less useful but may find some uses. Both black and whitepoint estimation are toggled using the 'est' buttons at the extremes of the white/black slider (note that the button for blackpoint detection was previously labelled 'auto')
  • [enhancement] colour filters can now be binned 3x3 and 4x4 as well as 2x2 
  • [enhancement] filter information is now externalised in the file filter_properties.json in jocular/resources. This is to allow users with nonstandard filters to have them recognised by Jocular. FITS handling of filters has also been updated to include Oiii and Sii (thanks to Steve in Boulder).
  • [GUI change] a few font sizes have been reduced to support those using smaller screens (note that most font sizes can be changed dynamically via the conf/Appearance settings panel).
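Regarding the slider change above, here is a minimal sketch of the two update policies using stock Kivy events; it is illustrative only, not Jocular's actual implementation:

from kivy.app import App
from kivy.uix.slider import Slider

class DebouncedSlider(Slider):
    update_on_release = False  # imagine this toggled by a double-click

    def on_value(self, instance, value):
        # continuous mode: redisplay on every value change (laggy on big sensors)
        if not self.update_on_release:
            print(f'continuous update: {value:.2f}')

    def on_touch_up(self, touch):
        # release mode: redisplay once, when the touch is lifted
        if self.update_on_release and touch.grab_current is self:
            print(f'update on release: {self.value:.2f}')
        return super().on_touch_up(touch)

class Demo(App):
    def build(self):
        return DebouncedSlider(min=0.0, max=1.0, value=0.5)

if __name__ == '__main__':
    Demo().run()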

You can upgrade to the new version using

pip install --upgrade jocular==0.5.6

Please note that the documentation available on the official website has not yet been updated to reflect these changes.


I notice in retrospect that Berry & Burnell (p. 488) mention the use of longer integration periods for L than for RGB.  As they discuss (p. 478), this works because "the human eye and brain readily perceive noise and errors in luminance, but are relatively insensitive to noise and errors in chrominance."  


Apropos of the luminosity/chromaticity distinction, I spotted in Bracken's Deep Sky Imaging Primer a mention of adding synthetic luminance (formed from a weighted sum of RGB) to the actual luminance, to decrease luminance noise even further. I ought to do this (I currently use synL when there is no real L). It sounds like using the same info twice but I guess it is ok.
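A sketch of the synthetic-luminance idea, using illustrative Rec. 601 luma weights (the weighting and blend are design choices, not Bracken's or Jocular's specific numbers):

import numpy as np

def synthetic_lum(r, g, b, weights=(0.299, 0.587, 0.114)):
    # weighted sum of the colour channels, usable alone or blended with real L
    return weights[0] * r + weights[1] * g + weights[2] * b

# e.g. an SNR-motivated blend with a real luminance stack:
# lum = 0.7 * real_l + 0.3 * synthetic_lum(r, g, b)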

For the EEA context, to avoid going down the slope towards AP, I think it is necessary to do quite a bit of chromaticity manipulation automatically. For instance, colour gradients are (optionally) subtracted from each channel, but there is no option to control the degree of subtraction as there is with the luminosity control. Similarly I only provide a single colour stretch family (corrected gamma) rather than the various options available for luminosity.

Still, I sometimes feel that there is room for more chromaticity control... I removed the blue-yellow and green-magenta oppositions that corrected the background as they didn't seem necessary -- the automated gradient removal seems to do the trick. It's all a bit lonely for chromaticity at present with just colour stretch and saturation 🙂

BTW I've started work on automatic detection of G2V stars and now need to compute the colour ratio corrections robustly. One difficulty is not having a good record (in all cases) of the altitude of the detected G2V star, which is necessary to build a decent map in order to allow G2V correction in cases where there is no G2V in the image (the majority of cases are like this). 

 


By coincidence, I'd just been looking at the color correction material in Berry & Burnell.  Instead of reference stars, perhaps there's an alternative, though cruder, approach: determine the altitude from the plate solver and find the relative ratios of atmospheric transmittance for R, G, and B.  The QE for each R, G, and B passband can be read off the camera's published graphs, and the transmittance of the color filters likewise.

I intend to start down this path next session by simply adjusting exposure times for each image based on the relative areas under the QE graph for R, G, and B.  Just from eyeballing the QE graphs and B&B's chart of atmospheric transmittance (Table 17.3), the QE adjustment will dominate above, say, 45 degrees in altitude.
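The arithmetic behind that plan is simple; a sketch with made-up sensitivity numbers (the real ones would come from the QE and filter curves):

# hypothetical relative sensitivities: area under QE x filter transmission
sens = {'R': 1.0, 'G': 1.0, 'B': 0.8}
base = 10.0  # seconds for the most sensitive channel

exposures = {f: base * max(sens.values()) / s for f, s in sens.items()}
# -> {'R': 10.0, 'G': 10.0, 'B': 12.5}, in line with the 10s/10s/12-13s split reported below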


It will be interesting to see how you get on. It sounds like it could get quite complicated, since the QE varies with wavelength and the RGB filters are reasonably wide. I imagine you'd need to know the spectrum -- approximately, or at least the colour temperature -- of each star in order to refine the QE estimate, or perhaps the mean QE over the passband will suffice.  There are also alternative approaches to G2V that make some assumptions about the average colour temps of stars in a typical FOV.

I'm currently working on FWHM estimation so that I can use aperture photometry to better estimate the RGB responses for G2V stars. Berry and Burnell is a goldmine for this type of stuff. I'm following their approach (fig 8.5, p 257) but fitting a Moffat function to the star profile. At the very least this will provide an estimate of the seeing...
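The Moffat profile and its FWHM conversion are standard; here's a sketch using scipy (Jocular's own fitting code may well differ):

import numpy as np
from scipy.optimize import curve_fit

def moffat(r, amp, alpha, beta, bg):
    # circular Moffat profile as a function of radius from the star centre
    return amp * (1.0 + (r / alpha) ** 2) ** (-beta) + bg

def fwhm(r, flux):
    # fit the radial profile, then convert the fitted parameters to FWHM
    p0 = [flux.max() - flux.min(), 2.0, 2.5, flux.min()]
    (amp, alpha, beta, bg), _ = curve_fit(moffat, r, flux, p0=p0)
    return 2.0 * alpha * np.sqrt(2.0 ** (1.0 / beta) - 1.0)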


I made my first attempt last night.  I used a very crude approach -- comparing the midpoints of the curve within each passband -- and wound up with 12 or 13 seconds for B, 10 for R and G.  Here's an example.  Not sure it was really worth the (rather lame) effort.  Despite the additional integration time, I was having some alignment issues with the B subs.  Some thin clouds had moved in and the moon was coming up, not sure if that could be the reason.

 

 

Messier 82 19Apr22_00_05_22.jpg


24 minutes ago, Martin Meredith said:

 

I'm currently working on FWHM estimation so that I can use aperture photometry to better estimate the RGB responses for G2V stars. Berry and Burnell is a goldmine for this type of stuff. I'm following their approach (fig 8.5, p 257) but fitting a Moffat function to the star profile. At the very least this will provide an estimate of the seeing...

You could also use FWHM for automatic sub rejection, perhaps.  In lieu of manual sub rejection, that might serve as a "Jocular Lite" for memory-challenged users.  


9 minutes ago, Steve in Boulder said:

You could also use FWHM for automatic sub rejection, perhaps.  In lieu of manual sub rejection, that might serve as a "Jocular Lite" for memory-challenged users.  

I think we can still support manual sub rejection/reselection even when individual subs are not stored in memory. The key thing is to be able to add or remove a sub from the stack, and for a linear combination this can be done without access to the individual subs that made up the stack, so long as we're talking about a mean stack. What would be more difficult to support is median or percentile clipping, so satellite removal etc. might have to be sacrificed. (Even that could be done by reading in fractions of each sub multiple times -- slow, but it produces a similar end result so long as alignment info is also stored.)
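A sketch of why a mean stack permits this: adding or removing a sub needs only the current mean, the sub count, and that one sub (re-read from disk in the removal case):

def add_sub(mean, n, sub):
    # fold a new sub into an n-sub mean stack
    return (mean * n + sub) / (n + 1), n + 1

def remove_sub(mean, n, sub):
    # deselect a sub from an n-sub mean stack (requires n > 1)
    return (mean * n - sub) / (n - 1), n - 1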

I'm actually interested in combining aperture photometry with known Gaia G magnitudes for the stars I use for plate-solving. They would provide the necessary reference magnitudes, so in principle it ought to be possible to deliver magnitudes for other stars in the FOV.
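The magnitude transfer itself is just Pogson's relation; a sketch, where flux is a background-subtracted aperture sum:

import math

def magnitude(flux, ref_flux, ref_mag):
    # differential photometry against a star of known (e.g. Gaia G) magnitude
    return ref_mag - 2.5 * math.log10(flux / ref_flux)

# e.g. a star with half the flux of a G = 12.0 reference:
# magnitude(500.0, 1000.0, 12.0) -> ~12.75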


Did you try the 'reshuffle' realign process to handle any alignment issues with B subs? As you may know, this works in part by reordering subs so that the first one comes from the lowest-transmissibility filter. This is done because I estimate the star detection parameters on the first sub and then apply these throughout, so if they are set to e.g. extract N stars on the first (B) sub, that is going to be plenty for the other (more transmissible) filters. I find it almost always resolves any alignment issues that are due to low numbers of stars -- except perhaps in the case of cloud-induced loss!
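A sketch of that reordering step, with an illustrative transmissibility ranking (the values are made up; only the resulting sort order matters):

# lower rank = less transmissible filter, so B-like filters sort first
RANK = {'B': 0, 'Oiii': 0, 'Sii': 1, 'Ha': 1, 'G': 2, 'R': 2, 'L': 3}

def reshuffle(subs):
    # subs: list of dicts with a 'filter' key; least transmissible first
    return sorted(subs, key=lambda s: RANK.get(s['filter'], 99))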


Just now, Martin Meredith said:

Did you try the 'reshuffle' realign process to handle any alignment issues with B subs? As you may know, this works in part by reordering subs so that the first one comes from the lowest-transmissibility filter. This is done because I estimate the star detection parameters on the first sub and then apply these throughout, so if they are set to e.g. extract N stars on the first (B) sub, that is going to be plenty for the other (more transmissible) filters. I find it almost always resolves any alignment issues that are due to low numbers of stars -- except perhaps in the case of cloud-induced loss!

Yes, I needed to reshuffle to have any success at all, and I also turned down the number of stars for the first sub.  Even then, I was losing some B subs.  

