CraigT82

Members
  • Posts: 4,185
  • Joined
  • Last visited
  • Days Won: 4
Everything posted by CraigT82

  1. Thread over on BAA here where someone mentions they were told by Celestron not to use them as mirror locking screws… https://britastro.org/forums/topic/c14-mirror-flop
  2. I think you mean the Altair 290C? The main difference between those two is that the ASI224 is a USB3 camera and so you’ll get faster frame rates than the GPCAM2, which is a USB2 camera. For planetary imaging high frame rates are key. Altair do a GPCAM3 290C which is USB3, so that would be a good choice. I used to have the mono version of this one and it was a great camera. Also had the Altair 224C and that was a good ’un too.
  3. John the best way to enlarge an image is to do it right at the end of the processing, once you have aligned, stacked and sharpened the image. I don’t know what you use for image finishing (Photoshop?) but in GIMP you can enlarge or shrink the image by any factor using the ‘Scale Image’ tool under the ‘Image’ tab (shown below). You just enter the new image size in pixels that you want, so for a 2x enlargement of this particular image I would enter 3,292 into the circled box (1646*2) and then hit the ‘Scale’ button. Edit: Should probably add a note on the interpolation method for the scaling. Choosing no interpolation method will result in a blocky ‘pixelated’ appearance. In GIMP the Cubic method gives a nice smooth interpolation. Here is an 800% image of the two methods, ‘cubic’ top and ‘none’ bottom:
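For anyone scripting this step instead of using GIMP's Scale Image dialog, here is a minimal sketch of the same 2x enlargement using Pillow. The filename and image height are assumptions; a tiny synthetic image stands in for the real 1646 px wide stack (with a real file you'd use `Image.open("stacked.tif")`):

```python
from PIL import Image

# Stand-in for the stacked capture (1646 px wide, as in the post; height assumed)
img = Image.new("L", (1646, 1232), 0)

factor = 2
new_size = (img.width * factor, img.height * factor)

# NEAREST matches GIMP's 'None' interpolation: blocky, pixelated result.
blocky = img.resize(new_size, Image.NEAREST)

# BICUBIC matches GIMP's 'Cubic' method: a nice smooth interpolation.
smooth = img.resize(new_size, Image.BICUBIC)

print(smooth.size)  # (3292, 2464) - note 3292 = 1646 * 2, as in the post
```

The only real choice here is the resampling filter; everything else is just arithmetic on the pixel dimensions.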
  4. Well, it depends really. It’s a good idea to filter the incoming light rather than capturing the full spectrum, but which wavelengths to let through will depend on the seeing. When seeing is poor then the longer wavelengths are better as they are less affected by the poor seeing: red or even infrared pass filters are good. When seeing is good then it can pay off to use shorter wavelengths as they will allow the scope to resolve finer detail. I’ve got good results using green filters in good seeing and even blue filters in excellent seeing. This is a comparison I did in good seeing a couple of years ago with my old 8.75” F/7.5 Newtonian, all the captures are sharp but the green filtered capture clearly shows the finest detail: Here is another one, shot through a green filter in good seeing with the same scope:
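The "shorter wavelengths resolve finer detail" point follows directly from the diffraction limit. A quick sketch using the Rayleigh criterion (the 222 mm aperture is an approximation of the 8.75” Newtonian mentioned above; wavelengths are illustrative):

```python
import math

APERTURE_MM = 222  # ~8.75" Newtonian, approximated

def rayleigh_limit_arcsec(wavelength_nm: float, aperture_mm: float) -> float:
    """Rayleigh diffraction limit theta = 1.22 * lambda / D, in arcseconds."""
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return theta_rad * 206265  # radians -> arcseconds

for name, wl in [("blue", 450), ("green", 550), ("red", 656), ("IR", 850)]:
    print(f'{name:>5} ({wl} nm): {rayleigh_limit_arcsec(wl, APERTURE_MM):.2f}"')
```

The numbers show why the trade-off exists: blue roughly halves the resolvable scale compared with near-IR, but only if the seeing lets the scope actually reach its diffraction limit.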
  5. John I promise you will get better results by stacking in AutoStakkert 3 than you will from Registax! It’s well worth learning. As sorrimen says, it’s not that involved. You can take the stacked TIFFs from AS3 and load them into Registax to apply wavelets.
  6. Some fabulous NIRCam images of Jupiter from JWST released today, with different NIR wavelengths mapped to RGB. https://esawebb.org/images/
  7. You don't have to use either, I don't. My workflow is FireCapture (capture) > AutoStakkert 3 (registration and stacking) > AstroSurface (sharpening) > Affinity Photo or GIMP (finishing). PIPP is still a very handy bit of software though and I do use it for some things, usually to automatically reject frames without a planet, or with an incomplete planet due to windy conditions, as AS3 can struggle with those frames.
  8. The advantage of using RAW mode is smaller capture files and higher frame rates. The camera will always capture colour information, and when set in RGB24 mode it will debayer the images itself and write a 24-bit colour file to the hard drive (8 bits red, 8 green and 8 blue = 24 bits). Because these files are 3x larger in size the capture speed will slow and you'll get a much reduced FPS. When set in RAW8 mode the camera doesn't do any debayering, it just sends the raw data to the hard drive as a B&W file, but it does contain colour information locked up in the Bayer pattern. SharpCap will see only a B&W image too because it doesn't do the debayering either (although you can set it to debayer the preview image if you want to - some find it easier to focus with a colour image). This B&W 8-bit capture then needs to be debayered to produce a colour image; you can do this in PIPP but AS3 does it automatically as part of the stacking process.
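To make the "colour locked up in the Bayer pattern" idea concrete, here is a minimal NumPy sketch of a crude 'superpixel' debayer of an RGGB 8-bit frame (the function name and RGGB layout are assumptions, and this halves the resolution; real tools like PIPP and AS3 use far better interpolating algorithms that keep full resolution):

```python
import numpy as np

def debayer_superpixel(raw: np.ndarray) -> np.ndarray:
    """Crude debayer of an RGGB 8-bit mono frame: each 2x2 Bayer cell
    becomes one RGB pixel, with the two greens averaged."""
    rgb = np.empty((raw.shape[0] // 2, raw.shape[1] // 2, 3), dtype=np.uint8)
    rgb[..., 0] = raw[0::2, 0::2]                               # R sites
    rgb[..., 1] = raw[0::2, 1::2] // 2 + raw[1::2, 0::2] // 2   # two G sites
    rgb[..., 2] = raw[1::2, 1::2]                               # B sites
    return rgb

# A 4x4 "raw" frame: B&W on disk, but the colour is recoverable
raw = np.zeros((4, 4), dtype=np.uint8)
raw[0, 0] = 200   # a red-filtered photosite
colour = debayer_superpixel(raw)
print(colour.shape)  # (2, 2, 3) - half resolution, but now RGB
```

This is exactly why the RAW8 file looks like a grey checkerboard in SharpCap: the colour is all there, it just hasn't been pulled apart into channels yet.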
  9. What he said 👆🏻 Did the OAG prism move at all when you were swapping them over?
  10. Regarding exposure and gain settings for capture, there is a great feature in SharpCap (Pro - well worth £12 imo) called Smart Histogram. If you do a sensor analysis prior to using it, it will use that analysis to give you a live readout of your SNR when you mouse over the histogram peak. By altering your exposure and gain you can see what gives you the best SNR. (Hint: the best SNR values come from longer exposure and lower gain). The gotcha here though is that you have an upper limit on your exposure length, which is the seeing. In good seeing you can use longer exposures and in poor seeing you will be forced to use shorter ones. The idea really is to use the longest exposures the seeing will let you get away with. And don’t worry about then increasing the gain to get the histogram up to 75% or whatever, that doesn’t matter. Keep the gain low, and if the image is dim then that’s fine, you can brighten it later in post processing (e.g. by using the ‘normalise’ function in AutoStakkert). You don’t want to go too low with the gain though, as if the noise in the image is very low that can cause issues further down the line after stacking - the stack may retain a lower bit depth which can lead to artefacts after using wavelets. Edit: Just thought I should add - at the risk of overcomplicating things - that it’s the SNR of the final stack that we are concerned about, as that is what we will be sharpening. It is generally good to aim for higher SNR in your sub-exposures, but it must be considered that although shorter exposures have a lower SNR, you may be able to stack a lot more of them into the final image, due to the faster frame rates they bring, thereby overcoming the initially lower SNR of the individual subs. What really matters is the total exposure time of the final image, the longer the better, and that’s where WinJUPOS comes in.
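The trade-off in that edit, between sub-exposure SNR and frame count, can be sketched numerically. This assumes shot-noise-limited subs (where per-sub SNR scales with the square root of exposure time) and uncorrelated frames; all the numbers are illustrative, not from any real capture:

```python
import math

def stack_snr(sub_snr: float, n_frames: int) -> float:
    """SNR of a stack of n uncorrelated subs grows as sqrt(n)."""
    return sub_snr * math.sqrt(n_frames)

# The same 60 s total capture, two ways:
# 10 ms subs at SNR 4, versus 5 ms subs at SNR 4/sqrt(2) but twice as many frames.
long_subs = stack_snr(4.0, 6000)                    # 6000 x 10 ms subs
short_subs = stack_snr(4.0 / math.sqrt(2), 12000)   # 12000 x 5 ms subs

print(long_subs, short_subs)  # identical: total exposure time is what matters
```

Under these idealised assumptions the two stacks come out with exactly the same SNR, which is the point of the edit: it is the total integrated exposure that counts, and tools like WinJUPOS let you push that total further by de-rotating longer runs.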
  11. Looks fine to me, though if the 1.25” holder on the flip mirror is a bit poor you might want to upgrade to something a bit more substantial to hold the Powermate and camera more securely (I’m assuming that the holder screws off to reveal a T thread?). The important thing to remember with imaging trains is to collimate through them as much as possible, so any droop is taken into account in the collimation.
  12. If the weight of a larger battery pack on the mount is a problem could you go for a smaller and lighter weight battery pack, and buy another one spare to swap in when the first one runs out?
  13. Well that’s quite an ambiguous label! The Edge HD mirror locks look like more than just a simple bolt pushed against the plate. I think they’re transit bolts, but it’s definitely worth getting confirmation from Celestron about what they actually are and what to do with them. Sorry I can’t be any more help! There are a few C14 owners on here so maybe someone can chime in with any further info.
  14. That’s interesting I thought the Edge HD versions only had mirror locking screws. Are they definitely not transit screws? What does that tag say in the pic?
  15. As in it won't launch due to weather or some other cancellation, or it will launch but will fail?
  16. Would any 585MC owners be so kind as to share a SharpCap sensor analysis report? Cheers
  17. Firecapture lists Linux support for QHY cameras. If you get stuck you could email Torsten for advice
  18. Yes, I think there are differences here between lucky imaging and long-exposure imaging. In high resolution lucky imaging the shorter the wavelength the worse the dispersion, and the worse it's affected by seeing. Hence why we use red channel or IR captures when seeing is poor, as they remain relatively sharp. For DSOs and long exposures I think the resolution is low enough that dispersion isn't an issue and the seeing affects all channels pretty much equally due to the long exposures. Therefore it comes down to the scope's optics as to which channels come out sharpest? That's my guess anyway!
  19. Here is your saturn composed in the order I outlined above. Either your filters are in the wrong holes or you got your channels mixed up at capture
  20. Primary end down. Heaviest end so you don’t want it up or it’ll topple easily.
  21. As you’re using a Mak, a flip mirror is the way to go. I’d also invest in a crosshair eyepiece to use in it, and also to use when zeroing your finder. I use a Newt so a flip mirror is not an option due to lack of in-focus travel. My technique is just to use a crosshair eyepiece to set up the finder in the daytime and usually that is fine to get the camera on the chip. If it fails for some reason then the spiral search function in EQMOD comes in handy.
  22. Looking at your raw stacks I think you mixed up the filters during capture. I think your R is actually a G, your G is actually a B, and your B is actually an R. You can tell as red is normally sharpest, blue is worst and green is somewhere in the middle
  23. Software! I find it funny when people spend thousands on the hardware and then struggle on with freeware to process their images because they flat out refuse to spend on decent software. Have a look around the various acquisition/calibration/editing suites available. One thing you’ll probably want to get going on fairly soon is automating everything, so make sure your chosen software is compatible with your hardware. The ASIAIR is a nifty bit of kit but you’re tied to ZWO products with it. Use your skills to build a mini PC to control everything instead.