
A quick midsummer session


Martin Meredith


Hard to believe that the nights are drawing in again starting next week, after what has been a lousy start to the year here... Just time for a quick session last night, the first for about 5 weeks. Here's a series of screenshots from the live session. Object details are on the screenshots apart from the first, which is M104. The entire session used StarlightLive as the capture engine running non-stop, with my software picking up the FITS files from a monitored folder. Quattro 8" f4, Lodestar X2 mono, alt-az mount, no filters. No darks or flats applied, just bad pixel removal.
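For anyone curious about the mechanics, the monitored-folder pickup is conceptually very simple. Here's a stripped-down sketch (not my actual code: the watch path is a placeholder, astropy is assumed for FITS reading, and real code would also wait for each file to finish writing):

```python
import time
from pathlib import Path

from astropy.io import fits

WATCH_DIR = Path('/path/to/starlightlive/output')  # placeholder path

def watch_folder(poll_interval=1.0):
    """Poll the folder StarlightLive writes to, yielding each new
    FITS frame exactly once, in arrival order."""
    seen = set()
    while True:
        for f in sorted(WATCH_DIR.glob('*.fit*')):
            if f not in seen:
                seen.add(f)
                with fits.open(f) as hdul:
                    yield f.name, hdul[0].data, hdul[0].header
        time.sleep(poll_interval)

for name, data, header in watch_folder():
    print(f'new sub: {name}, shape {data.shape}')
    # hand the frame to alignment/stacking here
```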

[Image: M104]

Exposure length is calculated automatically but I'm still working on making this robust (hence the '??' on the screenshot). This M12 was based on 5s subs.
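One simple way to estimate the exposure, for anyone curious (not necessarily what my code does): trust the FITS header when the capture engine writes one, otherwise fall back on the median gap between file arrivals, which is robust to the occasional delayed write.

```python
import numpy as np

def estimate_sub_exposure(headers, arrival_times):
    """Guess the per-sub exposure length. Prefer the EXPTIME card if
    any header carries it; otherwise use the median inter-arrival gap.
    Returns None (i.e. display '??') until enough subs have arrived."""
    for h in headers:
        if 'EXPTIME' in h:
            return float(h['EXPTIME'])
    if len(arrival_times) < 3:
        return None  # not enough data yet
    gaps = np.diff(sorted(arrival_times))
    return float(np.median(gaps))
```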

[Image: M12]

Here's the Black Eye Galaxy (M64) shown inverted. Exposure is actually 7 x 15s.

[Image: M64, inverted]

And non-inverted, 11 x 15s.

[Image: M64]

This is a galaxy pairing I'd not come across before: NGC 5846 and 5850 in Virgo, near the border with Serpens. The distance estimates are quite different (88 vs 125 million light years). This is quite a long exposure (37 x 15s) as I wanted to see the faint structure around the barred spiral emerge (I gave this one a really strong stretch, which is much easier to tolerate when the image is inverted). The elliptical is actually two ellipticals, NGC 5846A being the smaller of the two; the distance estimates for this pair also suggest no clear relationship (88 vs 113 million light years).

[Image: NGC 5846/5850 pair]

Finally, a few of Arp's peculiar galaxies:

Arp 49 in Bootes (21 x 15s)

[Image: Arp 49]

Arp 72 in Serpens

[Image: Arp 72]

and Arp 336 in Ursa Major (10 x 30s), the so-called Helix Galaxy. This is a polar ring galaxy, where the galaxy is surrounded by a ring of gas nearly at right angles to its disc. You can just about make out the ring in this shot. High clouds were coming and going throughout the session and this one seems to have been affected more than most.

[Image: Arp 336]

 

Thanks for looking

Martin

Hi Martin

Very nice variety of objects. I really like the inverted view of M64 and the Arp galaxies. Good to see how your new software is working along with SLL. It still amazes me how good the views can be with no darks, flats, or filters, just bad pixel removal.

I know what you mean about the weather. I have been out only once in the past 3 months, and there's more bad weather on the way.

All the best with continued success using your new software.

Bruce

Martin, many thanks for sharing, these are quite excellent. As a visual observer I am continually impressed by the capabilities of video astronomy. I really must look into this more and give it a go. I agree about the weather; this year has been poor, and I am only averaging 10-12 hours a month at the eyepiece. I am also wondering how video might perform on nights that are clear but where the moon prevents visual observing. Might video still reveal some detail in DSOs?


Thanks Bruce. I did take some flats, but that part of the tool is in the middle of a rewrite so I didn't use them. I'll construct a master and see how much difference it makes. The sensor is certainly very dusty at the moment, but it doesn't seem to show on these images. There is also (automatic, but tweakable) gradient removal applied to these images, which makes quite a difference when the stretch is extreme.
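For reference, constructing and applying a master flat is conceptually simple. This is the standard median-combine approach (my rewrite may differ in the details):

```python
import numpy as np

def make_master_flat(flat_frames):
    """Median-combine individual flats and normalise to unit mean, so
    dividing a light frame by the master removes dust shadows and
    vignetting without changing the overall signal level."""
    master = np.median(np.stack(flat_frames), axis=0)
    return master / master.mean()

def apply_flat(light, master_flat, floor=0.2):
    # Clip tiny flat values so dark corners don't blow up the result
    return light / np.clip(master_flat, floor, None)
```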

Thank you too, JAO. Video works pretty well in most conditions (compared with visual, where the moon stops play), although the quality will vary. You will be able to see a lot of detail in most DSOs even with the moon out; most of the shots above were in fact taken in the same quarter of the sky as the moon. Inverting the image really helps when there is light pollution or moonlight, as the noise becomes less noticeable and you can push things further to pull out detail. You can always do narrowband when the moon is out too.

Martin


Some very impressive captures of some lovely objects. Your software is coming along nicely. Interesting to hear that inverting makes the image more tolerant of extreme stretching; that's something I look forward to being able to do on the fly. Also good to hear that your tool has automatic gradient removal. I find gradients far more of a pain when I am doing multispectral captures, for some reason; not sure why. Do you think the gradient removal in your tool will work for multispectral?


Thanks DP, Roel & Rob. I find the inversion so useful that I've promoted it to a top-level command (double-clicking the image). And I've developed a new stretch that is mainly used with inversion.
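I won't go into the maths of the new stretch here, but the general stretch-then-invert idea looks like this (a generic asinh stretch is used purely for illustration; it is not the actual stretch):

```python
import numpy as np

def asinh_stretch(img, a=0.05):
    """Generic asinh stretch: strong lift of the faint end, gentle on
    the bright end. Smaller `a` means a stronger stretch."""
    x = (img - img.min()) / (np.ptp(img) + 1e-9)  # normalise to [0, 1]
    return np.arcsinh(x / a) / np.arcsinh(1.0 / a)

def inverted_view(img, a=0.05):
    # White background, dark stars: noise reads as faint grey speckling
    # rather than bright dots, so harder stretches remain tolerable
    return 1.0 - asinh_stretch(img, a)
```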

Gradient removal has a nice side-effect of background estimation, so it is possible to set the black point automatically, at least as a starting guess. The other indispensable control is the short line sticking out at 9 o'clock, which works like a ratchet and makes very fine adjustments to the black point up and down. That's where I notice the gradient removal most: setting the correct black point is a nightmare with a strong stretch and a gradient. I've not started on the multispectral side yet, but I'd hope the same principles behind the gradient removal approach will work there too. v1 will be mono only, as I want to do a thorough job on the colour side later.
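In essence, gradient removal is a background surface fit, and the fitted surface doubles as the black-point estimate. A much-simplified sketch (the real code would reject stars more carefully and could fit a smoother surface than a plane):

```python
import numpy as np

def remove_gradient(img, step=64):
    """Fit a plane z = ax + by + c to coarse background samples (the
    median of each step x step cell, a crude star-rejecter) and
    subtract it. The plane's mean serves as a starting guess for the
    black point."""
    xs, ys, zs = [], [], []
    for y in range(0, img.shape[0] - step, step):
        for x in range(0, img.shape[1] - step, step):
            xs.append(x + step / 2)
            ys.append(y + step / 2)
            zs.append(np.median(img[y:y+step, x:x+step]))
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    coeffs, *_ = np.linalg.lstsq(A, np.array(zs), rcond=None)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    plane = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]
    return img - plane, float(plane.mean())  # flat image, black point
```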

Actually, nearly all of the mono processing side is complete. It is possible to run an entire session using just the interface you can see in the screenshots, along with the occasional (hideable) widget. The aim is a distraction-free interface that allows concentration on the DSOs at hand.

It is mainly the offline mode that is taking time at present (organising previous captures, reloading/processing, etc.), but it is getting there slowly. Then I have to figure out how to make a distributable application...

Martin

[Screenshot: the tool's interface]

@Martin Meredith This is looking all very impressive and exciting.

Sorry to say (and it seems I might be in a minority here...), but I'm not going to lie: I find the circular image a little skeuomorphic for my taste, because you seem to be throwing out quite a lot of the sensor information to have it. What if you're imaging a galaxy pair on opposite sides of the frame? Or a large nebula?

Having said that, it is true that at my focal lengths most things are quite small in the centre of the frame most of the time.

Either way - very excited to try this out when you manage to figure out your distribution!

I also love the inverted option. I have that set up as a quick-access shortcut on my Mac screen too, for use with SharpCap (iPad and Parallels, in case you're confused).

What is your main intention with the software, and how does it differ? That is, why are you creating it rather than just using SLL or SharpCap? Is it the UI, adding features, automating things...?

Software development is a lot of work, so you must be highly dissatisfied with something!!!


Hi David

I had to look up skeuomorphic (as you can see, I am not a GUI designer...). I read it is making a comeback, though, and I think what I'm doing fits more with the modern approach (lack of shadows, excessive borders, etc.). At least I hope so, as I prefer minimalism myself. Regarding the eyepiece/camera-lens style view, it came about from a desire to allow non-ugly free rotation of the image: I do a lot of star-hopping using charts, and it is a lot less disruptive to orient the display than the camera. But having implemented it, many more advantages became apparent, e.g. reducing the visual impact of certain aberrations (tilt, coma, vignetting), and providing an unobtrusive yet extensive canvas for display controls. And I find that the circular aperture has a strong connection with the idea of observing, while a rectangular canvas does not. Having said that, you can easily see the entire rectangular image by unzooming, and you can pan with touch or with the mouse to get access to the entire image -- no pixels are lost.
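For the technically curious, Kivy makes the rotate/zoom/pan part almost free via its Scatter widget, and the circular aperture can be done with a stencil. A bare-bones sketch (not my actual code; the image file name is just a placeholder):

```python
from kivy.app import App
from kivy.graphics import (Ellipse, StencilPop, StencilPush,
                           StencilUnUse, StencilUse)
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.image import Image
from kivy.uix.scatter import Scatter


class EyepieceView(FloatLayout):
    """Clips children to a circle. The full rectangular frame is still
    there underneath: zooming out or panning the Scatter brings any
    part of it into view, so no pixels are lost."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        with self.canvas.before:
            StencilPush()
            self._mask = Ellipse()
            StencilUse()
        with self.canvas.after:
            StencilUnUse()
            self._unmask = Ellipse()  # the mask shape must be redrawn
            StencilPop()
        self.bind(pos=self._sync, size=self._sync)

    def _sync(self, *_):
        side = min(self.size)
        pos = (self.center_x - side / 2, self.center_y - side / 2)
        for ellipse in (self._mask, self._unmask):
            ellipse.pos, ellipse.size = pos, (side, side)


class DemoApp(App):
    def build(self):
        root = EyepieceView()
        # Scatter gives free rotation, zoom and pan with touch or mouse
        scatter = Scatter(do_rotation=True, do_scale=True)
        scatter.add_widget(Image(source='M64.png'))  # placeholder file
        root.add_widget(scatter)
        return root


if __name__ == '__main__':
    DemoApp().run()
```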

Re the motivations for a new tool and new features... Programming being the art of the possible, I guess I'm mainly interested in seeing how far I can streamline my own workflow while incorporating new functionality. I firmly believe that nearly all AP processes can be incorporated into a live system, and that current EAA software is at an early stage of doing so. One piece of design philosophy that won't be apparent from the screenshots is that no decision in the tool is irreversible. Should I have used darks? Would I have been better off with median stack combination? Why did I allow that blurred sub to be added to the stack? What would have happened if I'd used different alignment parameters? Is it failing to stack because the first frame had poor reference stars? The tool maintains the complete stack internally and allows the user to view and edit it, or realign from any given sub (this is done via the part of the ring at the base of the screen); see the sketch below. In the near future it will allow a full reprocess when anything is changed, even in the middle of a stacking session. The challenge is to do this in the simplest possible way without interfering with what should primarily be an observing experience.
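In skeleton form, the idea is that the displayed image is always a pure function of the retained subs plus the current settings, so any decision can be undone by editing state and recomputing (hugely simplified, with alignment omitted):

```python
import numpy as np

class LiveStack:
    """Keeps every sub ever received. Rejecting a blurred sub or
    switching the combination method is just a state edit followed
    by recompute(), so nothing is irreversible."""

    def __init__(self, combine='mean'):
        self.subs = []       # all frames received so far
        self.included = []   # per-sub flags, editable after the fact
        self.combine = combine

    def add(self, frame):
        self.subs.append(frame)
        self.included.append(True)
        return self.recompute()

    def recompute(self):
        chosen = [s for s, ok in zip(self.subs, self.included) if ok]
        if not chosen:
            return None
        cube = np.stack(chosen)
        if self.combine == 'median':
            return np.median(cube, axis=0)
        return cube.mean(axis=0)
```

So dropping sub 3 after the fact is just `stack.included[3] = False` followed by `stack.recompute()`.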

As for other functionality, it currently supports organisation and reprocessing of previous observations, observing lists, session and object logging, automated dark and flat library acquisition and application, gradient removal, hot pixel removal, 7 or 8 parameterised stretch families, and various stack combination options including percentile-based approaches (sketched below). At some point it will support one-button-press RGB and LRGB captures, a range of colour models, G2V and other colour calibration schemes, plus image annotation. I've only ever used SLL so I don't know how much of this is novel. My criterion is a selfish one: is this something I would find useful? The whole thing will be open source (Python + Kivy). It is good to have options (esp. on a Mac)!
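As an aside, percentile-based combination is essentially a one-liner with numpy (q = 50 is the median; lower values reject bright outliers such as satellite trails more aggressively, at the cost of some faint signal):

```python
import numpy as np

def percentile_combine(subs, q=50):
    """Combine aligned subs by taking the q-th percentile through the
    stack at each pixel."""
    return np.percentile(np.stack(subs), q, axis=0)
```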

Martin

Well I have to say that all sounds very impressive. I can’t wait to try it!

The circular tool makes more sense to me now, especially if pixels aren't lost and you can zoom out. Also, I would imagine throwing the image full screen onto a second display is a relatively easy thing to do; then you have one tool for viewing and one for manipulation.

To be fair, your circular feature layout doesn't look that skeuomorphic; it's quite flat. It's more the concept of circle = eyepiece. But if it is that way for functional reasons, not just to look like something physical, then my accusations of skeuomorphism are probably unfair!

I'm very interested in the automation you are implementing. I did AI at university along with some machine vision stuff, a long time ago now, but I have wondered about some kind of AI solution to stacking and stretching, since it seems like a mathematical puzzle: maximise the representation of signal in the image while minimising noise. Assuming we're not mainly in the business of making pretty pictures...
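Something like this toy example is what I have in mind: grid-search a stretch parameter against a crude signal-to-noise objective (the objective and the corner-patch noise estimate are stand-ins for something much smarter):

```python
import numpy as np

def snr_objective(stretched, bg_box=64):
    """Crude objective: overall spread of the stretched image divided
    by the noise in a corner patch assumed to be empty background."""
    noise = stretched[:bg_box, :bg_box].std() + 1e-9
    return stretched.std() / noise

def best_stretch(img, params=(0.01, 0.02, 0.05, 0.1, 0.2)):
    """Pick the asinh strength that scores best; a real system might
    use a learned model or gradient-based optimisation instead."""
    def stretch(x, a):
        x = (x - x.min()) / (np.ptp(x) + 1e-9)
        return np.arcsinh(x / a) / np.arcsinh(1.0 / a)
    return max(params, key=lambda a: snr_objective(stretch(img, a)))
```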

The feature list is pretty impressive. The completely reversible functionality sounds like a radical departure from, and improvement on, any other EAA software. That is also very exciting.

An annotation engine is also good news. I've struggled with a lot of different bits of software and never had a good experience with annotation; PixInsight is a possible exception, barring the price.

Have you tried Observatory on the Mac? I tried to like it but found it cumbersome, with a strange paradigm I couldn't quite click with. I have been tempted to retry it though. Lightroom is good, but it could do with astronomy tools.

As for what is novel... I mainly use SharpCap and FireCapture, but from my experience, observing lists are novel in EAA capture software. However, SharpCap can control the mount and plate-solve for you. Dark and flat application is in SharpCap, but you have to select them from your library; it's not automatic. Hot pixel removal is automatic in SharpCap. There is definitely some overlap, but I think you are building an EAA tool from the ground up, whereas SharpCap was originally an imaging tool that now does EAA.

And Mac software is very welcome! I caved and have been running things on my Win10 compute stick and through Parallels, but a native Mac app would be great.


23 hours ago, Martin Meredith said:

Having said that, you can easily see the entire rectangular image by unzooming, and you can pan with touch or with the mouse to get access to the entire image -- no pixels are lost.

Good questions and answers from David and Martin. It seems there is more to this software than meets the eye; I guess a full understanding can only be gained by having a play.

At the risk of starting a list of change requests before you have even released it, it seems like it would be useful to be able to hide the controls/eyepiece view so that the image underneath is displayed in full, i.e. toggle it on and off.

Are you planning to integrate it with Pretty Deep Maps? :) 


Hi Rob

No problem with feature requests at this stage; they're probably a lot easier to implement while things are still flexible... Switching to display the full frame ought to be feasible.

I have no plans to integrate with the maps (9G is just too much to do a lot with), but I probably will at some point include an auto-annotate option which will use some subset of this data. I should also say that I don't plan to implement native capture at present, nor polar alignment, mount control or plate-solving. Nor do I see the software being useful for sensors with huge numbers of pixels, or at very fast frame rates, in the first instance (due to the processing/memory load). The tool will occupy a different feature space from others, at least to start with.
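Auto-annotation without plate-solving is feasible whenever the field centre and image scale are already known (e.g. from the observing list and the optics). The standard tangent-plane (gnomonic) projection maps catalogue positions to pixels; a sketch, assuming north up, east left and no field rotation:

```python
import numpy as np

def radec_to_pixel(ra, dec, ra0, dec0, scale_arcsec_px, width, height):
    """Gnomonic projection of a catalogue position (ra, dec) onto the
    sensor, given the field centre (ra0, dec0) and the image scale.
    All angles in degrees; returns pixel coordinates."""
    ra, dec, ra0, dec0 = map(np.radians, (ra, dec, ra0, dec0))
    cos_c = (np.sin(dec0) * np.sin(dec) +
             np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0))
    xi = np.cos(dec) * np.sin(ra - ra0) / cos_c
    eta = (np.cos(dec0) * np.sin(dec) -
           np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / cos_c
    px_per_rad = 206265.0 / scale_arcsec_px  # arcsec/radian / scale
    return (width / 2 - xi * px_per_rad,     # east to the left
            height / 2 + eta * px_per_rad)
```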

Martin 


On 21/06/2018 at 13:24, Martin Meredith said:

But having implemented it, many more advantages became apparent, e.g. reducing the visual impact of certain aberrations (tilt, coma, vignetting),

That was my immediate reaction too. It also suits EAA as opposed to imaging, where you are focusing on an object rather than producing a finished image.

I do like the interface and can see it appealing to an audience, especially as it 'looks' like the live view from a scope.

That said, the f4 scope gets good data fast, and surely it's worth squirrelling it away for traditional imaging on the cloudy evenings?

