

SharpCap - free Astro Webcam Capture Software


rwg

Recommended Posts

"Anyway, when you say 'fell over' do you mean crashed? If so did you get a chance to submit a crash report?"

No - I had to do Ctrl-Alt-Del.

I only use RAW16 to get the spikes when using my mask, otherwise I can't get the spikes at all. No, I was trying to get some long exposures of Uranus to see if I could spot the moons, but SharpCap just crashed each time I tried, i.e. it just froze.

Peter


  • 2 weeks later...

I discovered yesterday, when using the PoleMaster for long-exposure imaging, that there's an issue when using it in SharpCap. Whenever I set the exposure to longer than 60 sec, the image never updates. Is it possible to fix this somehow?

Btw, awesome work on making the PoleMaster function as a normal camera! :)


@Jannis,

Actually, I can't claim the credit for making the PoleMaster work as a camera - it's all down to the QHY SDK, which now works with pretty much all their models as far as I know.

Unfortunately, that same SDK is also going to be the reason you get stuck beyond 60s exposures - no idea if it is a bug or just a limitation in the camera though. It might be worth reporting the issue to QHY on their forums.

cheers,

Robin


Hi 

I have downloaded SharpCap, and I have tried to use my Starlight Xpress Lodestar CCD with this software, particularly the polar alignment tool.

Unfortunately, this camera is not on the list in SharpCap. I have tried ASCOM drivers and another piece of software which was supposed to work, but was unsuccessful. Does anybody know if and when the Lodestar will be compatible with SharpCap? Otherwise I may have to purchase another camera just for the alignment.

 

Regards..... hyp


What I hope is a quick question: I was looking at the Python scripting. I'd like to use the temperature reading from an ASI120MM to control an active cooling system. Since SharpCap already decodes that temperature from the camera, can a script simply pull that value and then send a string over a serial port to an attached Arduino to switch the cooling system on and off?


Hi John,

it's easy enough to get the temperature from the camera via the python scripting

print SharpCap.SelectedCamera.Controls.FindById(CommonPropertyIDs.Temperature).Value

You should be able to write to a serial port too, but in SharpCap 2.9 there isn't a full python library installed, so you have to use the .Net SerialPort class rather than the python serial module. Start like this

import clr
clr.AddReference("System")
from System.IO.Ports import SerialPort
sp = SerialPort()
sp.PortName = "COMX"
# set baud rate, parity, etc.
sp.Open()
sp.Write("hello arduino")

All you need to do then is to write a loop round getting the value from the camera and sending it to the serial port - don't forget to add a delay into the loop otherwise you might lock SharpCap up.
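Putting Robin's two snippets together, the loop might look something like this. A minimal sketch: the 5 degree C set point, the hysteresis band and the "ON"/"OFF" strings are all made-up values that an Arduino sketch on the other end would have to understand - only the SharpCap temperature call and the SerialPort usage come from the snippets above.

```python
import time

SET_POINT = 5.0    # hypothetical target sensor temperature in degrees C
HYSTERESIS = 1.0   # dead band so the cooler doesn't chatter on/off

def cooler_command(temp_c, currently_on):
    # Pure decision logic: ON above the band, OFF below it, otherwise
    # keep the current state (hysteresis).
    if temp_c > SET_POINT + HYSTERESIS:
        return True
    if temp_c < SET_POINT - HYSTERESIS:
        return False
    return currently_on

def control_loop(sp):
    # Wiring per Robin's snippets above: run inside SharpCap's scripting
    # console, where 'SharpCap' and 'CommonPropertyIDs' are globals and
    # 'sp' is an already-opened SerialPort. "ON"/"OFF" is a made-up
    # protocol for the Arduino side.
    cooler_on = False
    while True:
        temp = SharpCap.SelectedCamera.Controls.FindById(
            CommonPropertyIDs.Temperature).Value
        cooler_on = cooler_command(temp, cooler_on)
        sp.Write("ON\n" if cooler_on else "OFF\n")
        time.sleep(5)  # keep the delay or you might lock SharpCap up
```

The hysteresis keeps the relay from switching rapidly when the temperature hovers right at the set point.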

cheers,

Robin


5 minutes ago, rwg said:


Perfect, thank you!


The real-time plate solve polar alignment feature is INCREDIBLE. I have a Soligor 200mm F3.5 M42 lens threaded directly to my ASI120MM. The lens has a tripod collar to bolt onto a Vixen dovetail, allowing me to get the best polar alignment on my CGE that I have ever achieved. Thank you for creating this excellent software!

 

I was wondering: because it is possible to plate solve in real time around Polaris, could a further implementation of goto assist be created? If I could plate solve all of the major bright stars in real time, on screen, with the same recommended 200mm focal length, wow, goto alignment would happen very quickly and accurately. This would allow use of the hand controller or NexRemote during the telescope's built-in star alignment routine. I typically avoid normal plate solving because I have never gotten it to work correctly and end up wasting a lot of time; however, a real-time option would be excellent.


@Calypsob

good to hear that the polar alignment is working well for you :)

I'm afraid I don't think I can extend the polar plate solving to the whole sky easily. SharpCap only includes star data for stars within 5 degrees of each pole - that comes to a total of about 150 square degrees covered. The whole sky is 41,253-odd square degrees, or about 275 times bigger, which means I'd need to ship about 220 megabytes of star data, and the processing time to set up the index would be several minutes for the first solution (goodness knows how much memory it would use!). The other problem is that in such a big area there is a much greater chance of false positives - i.e. incorrect matches.
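The scaling above can be sanity-checked with a few lines (the ~0.8 MB current data size is inferred from the 220 MB and 275x figures, not stated directly):

```python
import math

CAP_RADIUS_DEG = 5.0
# Two small spherical caps, one per pole; for small radii each is ~ pi*r^2.
polar_area = 2 * math.pi * CAP_RADIUS_DEG ** 2      # ~157 sq deg, ~150 rounded
WHOLE_SKY_SQ_DEG = 41253.0

ratio = WHOLE_SKY_SQ_DEG / 150.0                    # how much more sky to cover
current_data_mb = 0.8                               # inferred: 220 MB / 275
print(round(ratio))                                 # -> 275
print(round(ratio * current_data_mb))               # -> 220 (MB, whole sky)
```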

I definitely want to do something with plate solving in a future - maybe even the next - version of SharpCap, but probably by sending the image to AstroTortilla or astrometry.net rather than having it built in.

cheers,

Robin


6 hours ago, rwg said:


Well I am certainly excited to see what you come up with! Thanks for the reply.  


The next holy grail for mere mortal astronomers is polar alignment where Polaris is not visible. If it were possible to add one of the local/internet plate solvers and select, say, M42 stars to check polar alignment, that would be a great option.

Of course, that depends on the calculation algorithm being able to cope with stars other than Polaris.

 

Steve


@StevieDvd

Alignment without a view of the pole requires a whole different approach, sadly. The calculation SharpCap does is based on working out how the two frames relate to each other - in particular, which point they appear to rotate around. If you were a long way from the pole, this rotation point would be a long way out of frame, and the error in calculating where it was would become very large - meaning the polar alignment calculation would be useless.

There are approaches that avoid this pitfall, but they have to deal with things like potential cone error and poor pointing accuracy after gotos.
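The rotation-point calculation can be illustrated with a toy version. This is not SharpCap's actual algorithm, just a minimal sketch: given two matched stars in two frames related by a pure rotation (no drift, nonzero angle), the centre comes out in closed form.

```python
import math

def rotation_center(p1, q1, p2, q2):
    """Recover the rotation point from two stars seen in two frames.

    Stars at p1, q1 in frame 1 appear at p2, q2 in frame 2 after a pure
    rotation about an unknown centre c, i.e. x2 = R(x1 - c) + c.
    Assumes a nonzero rotation angle and no translation drift.
    """
    # Rotation angle from how the inter-star vector has turned.
    v1 = (q1[0] - p1[0], q1[1] - p1[1])
    v2 = (q2[0] - p2[0], q2[1] - p2[1])
    th = math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0])
    c, s = math.cos(th), math.sin(th)
    # x2 = R x1 + t  with  t = (I - R) c, so solve (I - R) c = t for c.
    t = (p2[0] - (c * p1[0] - s * p1[1]),
         p2[1] - (s * p1[0] + c * p1[1]))
    a, b = 1.0 - c, s                  # (I - R) = [[a, b], [-b, a]]
    det = a * a + b * b                # zero only for a zero rotation angle
    return ((a * t[0] - b * t[1]) / det,
            (b * t[0] + a * t[1]) / det)
```

Because the solution divides by a quantity that shrinks as the rotation gets small, a pixel of measurement error in the star positions is amplified more and more as the centre moves further outside the frame - exactly the error growth described above for fields far from the pole.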

If you can get AstroTortilla to work, it has a drift-align-like technique where you rotate the mount rather than waiting for the drift - that always seemed quite a good idea to me.

cheers,

Robin


I wonder if anyone knows what I'm doing wrong. I downloaded SharpCap (mainly for the polar alignment tool), and although it sees my QHY camera, it doesn't see - or rather doesn't connect to - my Starlight Xpress (main camera). I have EQMOD, so I presume I could connect it via ASCOM, but that's greyed out, so no idea.


Just wanted to say that, using version 2.9.3005, dark frame subtraction is still not working for the QHY5L-II mono camera, whether using Mono8 or Mono16. It seems not to be saving a valid master dark frame (there are hardly any hot pixels in it at all, so subtracting it does nothing).

Instead of getting something like the top image, I get the lower one...

 


ChrisH

 

DarkFrameComp.jpg


@ChrisLX200

I can make this work correctly with a range of cameras (Altair GPCAM Mono, QHY5IIC, QHY174M, etc)

Test procedure:

1. Cover the camera
2. Choose colour space
3. Set exposure to 1s
4. Set gain to max
5. Capture dark (10 frames)
6. Show histogram
7. Select dark - check hot pixels vanish & histogram improves

I do however notice two things:

1) If I save darks as PNG then sometimes they cannot be re-loaded (depends on the colour space). Saving as FITS seems fine.

2) With the QHY5II camera, every time you change colour space the gain increases by 1. This can push the gain above the 100 maximum, which seems to stop the gain from working until you reduce it and re-set it to maximum manually.

I definitely get real hot pixels in the darks - see attached...

Maybe you are getting affected by issue 2 above - this seems likely to be a bug in the QHY SDK which I can pass on to them.

cheers,

Robin

dark_10_frames_∞C_2016-11-28T20_34_39.png (QHY5IIC, 1.0s, 100 gain)



6 minutes ago, rwg said:


My procedure is fairly simple. I only use Mono8 usually; only recently did I mess with Mono16 when things weren't behaving. Anyway, that was a fail, so I'm just staying with Mono8 all the time now.

I set my exposure to 8 sec and gain to 30, cover, and capture 10 frames. The capture completes. Without uncovering the camera, I select the created dark frame. No change!! The hot pixel pattern is still there.

So I open the created dark frame and compare it with one created back in July this year - the difference you see in the images I posted above. If instead I select that old dark frame, then dark frame subtraction works OK. This was just a routine procedure back then: I would create a range of darks (4, 8, 15, 20, 30 sec) and simply choose the one matching the exposure time I'm using. I never changed the gain. The system was bullet-proof, never failed. Now, for the life of me, I just can't get it to work no matter what.

 


Hi Chris,

well, I just tried those steps with my colour QHY5II (in RAW8 rather than MONO) and everything seems fine. Very, very odd. Please can you share a single captured 8s frame from your camera and the master dark frame that gets created (if they are too big to PM directly, you could share via Dropbox/Google Drive/etc. and PM the link). It might be a good idea to try that in both PNG and FITS to see if that makes any difference (change the preferred still file format in the settings menu).

I will then have a look at the files in detail and see if that reveals any clues.

cheers

Robin


6 minutes ago, rwg said:


Thanks for taking the time Robin, I will do that a little later - I'll generate a new set.

Cheers,

Chris


Richard,

While there is only really the Moon to look at as far as planetary imaging is concerned, I've turned my attention to learning a bit more about imaging DSOs - using a ZWO ASI224 with SharpCap's Live Stacking feature.

Not sure if this has already been requested (and scanning through 50+ pages here is a bit of a trial), but:

When using live stacking, would it be possible to add a feature to automatically save the stacked image every 'x' frames?

 

Thanks

Neil


5 hours ago, ngwillym said:


Sorry, the above request may seem a bit vague - what I'm looking to do is save a sequence of live-stacked images, i.e.:

Set the number of frames to stack = x and the sequence count = y, then:

i) stack x frames

ii) clear the stack

iii) repeat y times.

 

hope this helps

Neil


Hi Neil,

this is just the sort of thing I added the scripting language to SharpCap for - specialist features like this don't have the wide appeal that would push them to the top of my to-do list, but you should be able to write a Python script in SharpCap to do what you need.

For instance

SharpCap.Transforms.SelectTransform("Live Stacking")

will enable live stacking

print SharpCap.Transforms.SelectedTransform.FrameTransform.TotalFrames

shows the number of frames that have been stacked

SharpCap.Transforms.SelectedTransform.FrameTransform.SaveFrame(16,False)

will save the current stack as a 16-bit-depth FITS file (use 32 for the first parameter to save as a 32-bit file)

SharpCap.Transforms.SelectedTransform.FrameTransform.Reset()

clears the current stack and starts over

You can also play with most of the live stacking options in the same way. If you go into the scripting window, it will auto-suggest the actions you can take with each object; for instance, if you have live stacking running and you type in 'SharpCap.Transforms.SelectedTransform.FrameTransform.', as you type the final '.' the program will show about 20 different actions or values that can be carried out or changed on the live stacking.
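Putting those calls together, Neil's stack-x/clear/repeat-y sequence might look like this in the scripting console. A sketch, not a tested script: FRAMES_PER_STACK, STACK_COUNT and the 2-second poll delay are illustrative values, and `run_sequence()` assumes it is running inside SharpCap, where `SharpCap` is a global.

```python
import time

FRAMES_PER_STACK = 50   # x: frames to accumulate per saved stack
STACK_COUNT = 10        # y: number of stacks to save
POLL_SECONDS = 2        # delay between checks so the UI stays responsive

def stack_is_full(total_frames, target=FRAMES_PER_STACK):
    # Pure helper: True once the live stack has accumulated enough frames.
    return total_frames >= target

def run_sequence():
    # Requires SharpCap's scripting console, where 'SharpCap' is a global.
    SharpCap.Transforms.SelectTransform("Live Stacking")
    stacker = SharpCap.Transforms.SelectedTransform.FrameTransform
    for i in range(STACK_COUNT):
        stacker.Reset()                      # start a fresh stack
        while not stack_is_full(stacker.TotalFrames):
            time.sleep(POLL_SECONDS)         # don't lock SharpCap up
        stacker.SaveFrame(16, False)         # save the stack as 16-bit FITS
```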

Hope that helps a bit.

Robin


Richard - thanks for the above - time to learn Python methinks.

 

On another tack:-

What are the advantages/disadvantages of capturing FITS files in RAW8, RAW16 or RGB24 when I'm using a ZWO ASI224?

From tests I've done, images captured as RAW8 & RAW16 need to be debayered in something like PIPP before I can put them into DSS and get colour images - otherwise, putting them straight into DeepSkyStacker (or AutoStakkert, or even RegiStax), they are reported as 8/16-bit grey files and only generate monochrome stacks.

Whereas DSS will quite happily process RGB24 directly into colour stacks without the need for PIPP.

Or am I doing something wrong somewhere?

 

thanks in advance

Neil

 

 


On 28/11/2016 at 21:38, rwg said:


That old QHY5L-II in my ASC just got 'retired' in favour of a new QHY5III178 :)

IMG_1028_zpsyudtzwvy.jpg

And that seemed to resolve the dark frame issues (almost...), only to be replaced by a couple of other, less intrusive bugs! I went for this camera instead of the ASI178 because this one will fit inside the case without mods - but of course QHY drivers are not exactly bug-free, are they? I'm finding that sometimes when I change exposure times (say, from 8 sec to 15 sec), SharpCap will carry on taking a frame of infinite length. You can see it counting upwards in the bottom-right corner; I let it go to over 5 minutes before closing the program and starting again. Also, dark frame subtraction in Mono16 mode sometimes results in lots of short streaks showing instead of removing the hot pixels. Again, restarting the program stops that.

ChrisH


@ngwillym

There is no more information in RGB24 than in RAW8 - it's just been unpacked from the RAW format for you already, which helps make the post-processing easier, as you've noticed. But you are also relying on whatever debayer algorithm the camera manufacturer has decided to use (unknown, but probably a simple, quick and lowish-quality one).

If you are capturing long exposures then RAW16 might be worth trying, as you get the extra bit depth (this is not worth doing if you have the gain turned up so high that the noise is visibly different from frame to frame). Another advantage of taking 16-bit FITS files is that you can set up DSS to auto-debayer them (look in the FITS tab of the 'RAW/FITS DDP settings').
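The trade-off is mostly bytes per pixel. A rough per-frame size comparison - the 1304x976 resolution is assumed from the ASI224's published spec, not stated in this thread:

```python
# Uncompressed per-frame sizes for the colour spaces discussed above.
# RGB24 is 3x the data of RAW8 yet carries no extra information - it is
# just RAW8 debayered by the driver before it reaches you.
W, H = 1304, 976
BYTES_PER_PIXEL = {"RAW8": 1, "RAW16": 2, "RGB24": 3}

def frame_megabytes(fmt, width=W, height=H):
    # Size of one uncompressed frame in megabytes for the given colour space.
    return width * height * BYTES_PER_PIXEL[fmt] / 1e6

for fmt in ("RAW8", "RAW16", "RGB24"):
    print(fmt, round(frame_megabytes(fmt), 2), "MB/frame")
```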

cheers,

Robin


Archived

This topic is now archived and is closed to further replies.

