Posts posted by Oddsocks

  1. On 23/04/2024 at 00:11, DaveG64 said:

    I have read some of both the forums it seems to me that users seem to have more problems with the paramount than the 10u and the sky x is harder to learn ,also you’ve got to read the manuals thoroughly.

    Hi Dave.

    It's difficult to make a direct comparison between Paramount and 10 Micron reliability because Software Bisque's service model provides support only via the user forum; it's virtually impossible to get support for Paramount problems away from the public gaze.

    For 10 Micron mounts the user forum is primarily a talking space for users and a repository for information and software etc. Support for hardware problems is direct via email to Baader, or eventually 10u themselves if Baader can't resolve the problem, so you will not be able to compare reliability as we don't know the failure rate for 10u products.

    There is also a volume bias. There must be thousands of Paramounts sold and in use around the world, while the large price differential that previously existed between Paramounts and 10u means the number of 10 Micron mounts in use worldwide will be much smaller. 10u has only recently gained a foothold in the US market, which has been dominated by Software Bisque and Astro-Physics for premium mounts for many years.

    Now that Software Bisque products have seen a large price hike over the last year we might expect to see many more 10u mounts sold worldwide.

    Without going into boring detail, over the ten-plus years I owned my Paramount MX Classic I spent almost half as much again on spares and repairs as the mount cost me when I bought it new in 2012.

    HTH

    William.

  2. Hi Dave.

    You can create a mount model in NINA, either via a plug-in or a feature that is now built in; I'm not up to speed on which, as I don't use NINA myself, but I have read about it on the 10u forum.

    Other options for modelling the 10u include Model Creator, which is provided by 10u, and Mount Wizzard4, which is freeware and the one I use.

    I would recommend joining the 10u and Software Bisque forums before committing to either mount and reading some of the current topics and questions, to get a feel for the different ways these products are managed and supported.

    I replaced a 10 yr old Paramount MX Classic with a 10 Micron 2000HPS Combi last year and so far I’m very impressed with the unguided performance of the 10u, something I could never achieve consistently with the Paramount.

    It’s true that the 10u operating software is very basic and end users have stepped in to fill the gap. Per Frejvall, who was a contributor here on SGL until his untimely and sudden death, wrote the Model Creator software many years ago while Michael (Michel) Worion is currently authoring and actively updating Mount Wizzard4.

    Mount Wizzard4 is a much more advanced modelling application than either Model Creator or the NINA 10u plug-in, but you'll find users of all three on the 10u forum who are happy and comfortable using any of them.

    You'll find some instruction videos on Michael's Mount Wizzard4 YouTube channel. They are a bit disjointed and could do with a refresh, but if you watch all the MW4 videos a few times before using it for real, you'll be building your first model after just a couple of hours' practice.

    Somebody once commented on one of the astronomy forums that if Software Bisque and 10 Micron built cars then a Paramount would be a NASCAR racer and a 10u would be a Porsche; I thought that was quite an apt analogy. Either product will do what it's designed for, but with quite a different approach. Software Bisque is a software house that decided to make a mount, while 10 Micron mounts are built by a division of an Italian precision engineering company.

    The user manual for TheSky is over 850 pages, plus another 230 just for the Paramount manual and a further 100 for T-Point. The user manual for the 10 Micron is just 100 pages.

    The Paramount can only be controlled through TheSky and must always be connected to it; third-party applications such as NINA connect to TheSky, which then relays mount commands to the Paramount. This means you always need TheSky loaded and running, even when using NINA or other observatory or sequencing software.

    The 10 Micron has its own software built into the control unit and is just a "black box" to the end user, with control of the mount via ASCOM, the supplied hand controller, or any third-party software that has built its own plug-in, such as NINA.

    It's worth pointing out that the Paramount price includes TheSky software plus the licence plug-ins for T-Point and Cameras; bought standalone those would cost ~$700. If you have a dome and require dome integration with TheSky, that is another extra licence to buy.

    The Paramounts use brass worms and aluminium gears. As a result of these material choices, and the consequently limited choice of brass-compatible greases, the drives need cleaning and re-greasing annually, as instructed by Software Bisque in the user manual. Aluminium oxide continually forms on the gears, even while the mount is unused, and accumulates in the grease as soon as the mount moves; since aluminium oxide is an abrasive, if you skimp on the regular maintenance the worm wear rate will be high and guide performance degraded.

    The Paramount has no clutches and if you knock the mount accidentally when walking past and the drives are engaged you can deform the brass worm. At best this will require the PEC table to be rebuilt, at worst a new worm drive assembly fitted.

    The Paramount is designed so that the end user can replace virtually all the major components themselves without having to ship the mount back to the factory.

    In contrast, the 10 Micron uses bronze gears and hardened steel worms and requires no servicing or re-greasing (according to them) for at least ten years, after which it can either be returned to the factory for an overhaul, or you can contact them for advice on re-greasing the drives yourself, although I would expect that after ten years of moderate use the worms might be worn out and require replacing anyway.

    The 10u mount has clutches, so a gentle knock with the clutches engaged will normally just result in a slip, and even if the worm were slightly deformed you might never notice, because the on-axis encoders and closed-loop control system are always re-positioning the axes accurately no matter how imprecisely the motor drives and worms are turning.

    Besides the separate control box, hand controller and saddle plate there are no major user replaceable parts on the 10u and should a drive or other mount internal fault occur you would need to ship the mount back to Italy (or possibly Baader in Germany) for repair.

    So far as mount modelling goes, either using T-Point with the Paramount or Mount Wizzard4 with the 10u, both are mostly automated and just need a few mouse clicks to initiate a model build and then apply it.

    Given the sophistication of TheSky and the total integration of the Paramount within TheSky, the complete package is quite a big learning curve, and mount modelling is actually a very small part of that.

    With the 10u mount having a much simpler software environment, the learning curve is much smaller: sequencing, plate solving, guiding and so on are not part of the 10u package, and if you have been using some other application for those, such as NINA, SGP or ACP, then you'll already know most of it.

    HTH.

  3. 1 hour ago, Rob_Jn said:

    ….BUT on a bit of a whim I tried different USB ports on the Mele 3 and guess what the camera connected straight away!! The port I connected to is the single one on the side with HDMI and not the side with 3 USB ports on. I couldn't believe it, so what is different about that port?

    Probably nothing different in the hardware. Failed and incomplete driver installs are logged in Windows against a specific port, so moving the camera to a different port appears to Windows as a new device and the driver installation (for Windows driver registration) begins again from the start.

    Once the camera driver is successfully installed on the new port, the driver registration is updated for all the previously failed ports (but not added to as-yet unused ports).

    Good to hear that QHY responded promptly and that the problem is solved.

     

    I spoke to a colleague last night who had similar issues a few years ago; in his case it turned out to be AV software silently deleting the QHY driver installation files at run time, something only discovered by looking at the AV logs.

    If you have third-party AV installed, or only Windows Defender, try temporarily disabling it and then running the installer again.

    I'm currently running the QHY All-in-one 20230509 beta on Windows 11 Pro with a 268M and did not experience any issue installing the original stable version or the 20230509 beta, which I upgraded to only for the new extended 2CMS acquisition curves. I have had no problems at all with the QHY camera running under the ASCOM driver for MaxIm DL and ACP, or Mount Wizard 4, or with my previous mount and TheSkyX Pro or PemPro.

    If you want to verify that your Windows installation is OK before going to the bother of reinstalling Windows, run the built-in Windows test and repair commands, which will either diagnose and automatically fix any Windows or file-system corruption, or report that the OS is damaged, unrepairable and should be reinstalled.

    If you don't know how to access and run the built-in Windows test and repair tools see below.

    -------------------------------------

    Test and repair Windows using Windows built-in tools SFC and DISM.

    Open an Administrator Command Prompt window.

    At the search box on the Windows task bar type CMD, then from the pop-up menu right-click the Command Prompt app shortcut and select "Run as administrator".

    At the command prompt C:\Windows\System32> type sfc /scannow, then hit Enter and don't interact with the PC at all until the scan has completed.

    (   that is,  sfc<space>/scannow   )

    For a new PC with virtually no installed drivers or apps, the file-system scan and repair will take about 10 minutes; for an old PC with lots of drivers and apps installed it can take over an hour. After the scan completes, a status message will appear in the Command Prompt window: problems found and successfully repaired, no problems found, or problems found that could not be repaired. In the latter case, restore the computer from the original backup partition or media.

    If the scan status message was problems found and successfully repaired, or no problems found, then next run the DISM tool to verify that the Windows OS itself is ok.

    While still in the Command Prompt window.....

    At the command prompt C:\Windows\System32> type: dism /online /cleanup-image /restorehealth and hit enter.

    (  that is, dism<space>/online<space>/cleanup-image<space>/restorehealth  )

    The DISM tool will inspect all the Windows OS files and if any are found to be damaged it will replace those with undamaged copies from a built-in backup.

    After the DISM tool reports completion, at the command prompt C:\Windows\System32>  type exit and hit enter to close the command prompt window.

    ----------------------------------------

    Assuming that both the SFC and DISM tools completed successfully and no unresolvable errors were found then reboot the computer, temporarily disable any AV software, then try running your preferred choice of QHY All-in-one package again.

    HTH.

  5. From memory, the ZWO camera driver has a USB bandwidth setting (USB Traffic or USB Limit?) and ZWO used to forewarn users that if the bandwidth setting was set too aggressively then horizontal banding may occur at high frame rates.

    I don’t use SharpCap or ZWO cameras myself but I suspect that USB bandwidth setting is still buried in the ZWO driver setup somewhere and may need to be reduced a little at a time until the banding is no longer seen.

    Horizontal image banding can also be caused by unintended ground loops, where the different floating power supplies used to power the computer, mount and camera etc. are commonly connected to ground only through the camera's USB cable.

  6. Just as a sanity check try the camera again on your Win 10 computer.

    A quick web search turned up that the WestBridge device name seen in Windows Device Manager is a Cypress Semiconductor boot-loader chip that is either waiting to upload firmware, or whose firmware is already loaded but corrupted.

    If the camera still works OK on your old Win 10 computer you'll know that the camera is fine and the problem lies with the new Win 11 computer, its OS or the QHY driver package.

    Win 11 no longer includes the Visual C/C++ shared runtime libraries and assumes the device driver will install them if required, but not all do, or they install versions that are incompatible with Win 11. Because the libraries are shared, you only need one installed device driver that does include the correct C/C++ libraries for all drivers that need them to run properly, which is sometimes why a device works OK on one machine and not on another that is in all other respects identical.

    It won't do any harm to download and install the two Microsoft Visual Studio C/C++ Redistributable packages linked below; you need to install both the x86 and x64 packages on a 64-bit Windows OS. The packages are just regular installers and you only need to reboot after running them for the libraries to become available to any driver that requires them.

    It probably won't be the answer, but it's another possible cause crossed off the list.

    https://aka.ms/vs/17/release/vc_redist.x64.exe
     

    https://aka.ms/vs/17/release/vc_redist.x86.exe
     

     

    For a correctly installed QHY268M the camera appears in Device Manager (tree view) under a branch called "AstroImaging Equipment" > QHY5IIISeries_IO, and the driver file is located at C:\Windows\System32\drivers\QHY5IIISeries_IO.sys, provider Cypress Semiconductor.

    Driver version on my Win 11 system is 23.1.12.0 dated 12/01/2023.

    I don't see a "WestBridge" device listed anywhere on my system.

    Under the Events header for the driver, looking at the console, I see that the initial camera configuration at connection was carried out by a driver named "oem65.inf", which appears to be loaded under Windows with the name QHYCCD_2ND.NTamd64, but I don't have a location for that.

    I assume that when you unplug the camera and refresh the Device Manager view the WestBridge device disappears? If it doesn't, then the WestBridge device is not the QHY camera, it is something else on your system.

    HTH

  8. Hi Olly.

    I’ve not read this book yet although it looks interesting.

    A quick look at the Kindle preview for "Vera Rubin, A Life" shows several monochrome plates, and there could be some colour plates in there too. Neither displays well on a Kindle Reader's monochrome e-Ink screen, which was never designed for high dynamic range, even though the Kindle book's source file contains the graphics in a high-quality format.

    I'll mention this in case you don't already know: you can install the free Kindle app on a tablet, laptop, PC or Mac, log in to your Kindle account from any of those devices, and download/share your Kindle library across all of them simultaneously at no extra cost, which is how I view my own Kindle library of technical engineering and scientific literature.

    For my engineering and science books, I'm often using them as a reference source while working in a different application on one of the Macs, with the Kindle app open on the same device at the same time, where the book's graphics can be viewed in full colour and at high quality.

    I often begin reading a general-interest book with graphics content on the Kindle Reader and concentrate on the text; if I need to look at the plates in greater detail I can return to them later in the Kindle app on my iPad, MacBook or Mac Studio.

    William.

  9. Hi David.

    I dug out my old observatory computer, looked at the SX Lodestar settings and post the screenshots below; I no longer use the Lodestar for guiding since upgrading the mount.

    The first screen grab is the one that *might* help assuming you're running the latest SX ASCOM driver, and should be applicable for your setup with PHD. It is reachable from the "Advanced" button of the ASCOM driver setup.

    The second screen grab is not going to help with PHD but I'll post that here anyway for users of MaxIm DL and the specific SX Universal camera plug-in for MaxIm.

    HTH

    William.

    SX Lodestar interlace settings using ASCOM driver.

    [attached screenshot]

     

    SX Lodestar settings for MaxIm DL specific plug-in.

    [attached screenshot]

  10. That looks like the “Venetian Blind” effect that is/was a feature of the sensor used in the Lodestar.

    The Lodestar uses (I think) an interlaced-readout CCD designed for video cameras; each image frame is read in two passes, the even-numbered rows sequentially first and the odd-numbered rows sequentially after.

    The acquisition software has to reassemble the two fields in the right order, otherwise the frame is split vertically as though you were looking through a Venetian blind: one row holds image data that captures a star while the next row down is blank, which makes it appear as though you have multiple closely split stars.

    I'm afraid I have no experience of using PHD for guiding with a Lodestar, but when using MaxIm DL there was a bunch of configuration options in the camera setup, among which were "Swap odd and even rows" and "Remove Venetian Blind effect". Used together, these assemble the image's odd/even rows in the correct order and correct for any brightness variation between the two fields caused by the readout timing: in a camera with no shutter, the odd rows continue to collect photons while the even rows are being read out.
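
    For anyone curious what those options are doing under the hood, below is a minimal numpy sketch of the idea. It assumes the driver delivers the two fields as one array with the first field stacked above the second, which is an assumption for illustration only, not the actual SX driver behaviour.

        import numpy as np

        def weave_fields(raw, even_first=True):
            # Reassemble an interlaced frame delivered as two stacked half-frames.
            # raw: 2-D array, top half = first field, bottom half = second field.
            half = raw.shape[0] // 2
            first, second = raw[:half], raw[half:]
            full = np.empty((half * 2, raw.shape[1]), dtype=raw.dtype)
            rows_a, rows_b = (0, 1) if even_first else (1, 0)
            full[rows_a::2] = first   # even-numbered rows read first
            full[rows_b::2] = second  # odd-numbered rows read second
            return full

        def equalise_fields(frame):
            # Crude "Venetian blind" correction: scale the odd rows so their mean
            # matches the even rows, compensating for the extra photon collection
            # the second field receives in a shutterless camera.
            out = frame.astype(np.float32)
            even_mean, odd_mean = out[0::2].mean(), out[1::2].mean()
            if odd_mean > 0:
                out[1::2] *= even_mean / odd_mean
            return out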

    In PHD look for a camera configuration setting that might be called “de-interlace”, or “progressive readout” or similar, as that will be needed when reading out an interlaced CCD camera.

    Sorry I couldn’t give you a PHD specific solution.

    William.

  11. 9 hours ago, Xiga said:

    I've double-checked my power connections and am happy that they are all sound (i use AC power) but i guess something must have come loose somewhere (although i can't figure out how!). I'll just keep a closer eye on the subs over the next few sessions to see if everything is ok. If not, i'll be back on here no doubt!

    Ciarán.

    Couple of things to check.

    The camera power input socket on the QHY268M/C is 5.5/2.1mm. If you are using an extension cable on the QHY power lead, make sure it really is 5.5/2.1mm female to 5.5/2.1mm male and not 5.5/2.5mm female to 5.5/2.5mm male, or some other combination. I've had a few power extension cables, even from reputable suppliers, that were marked 5.5/2.1mm but were in fact 5.5/2.5mm, and they caused intermittent power dropouts or low-voltage issues with the TEC.

    According to the QHY manual for the 268M/C there is a UVLO protection device inside the camera. If it is triggered by low input voltage (below 11V DC at the camera), the TEC is set to a low-power mode (max 70% TEC power) that is not reset by switching the camera off and on; it is permanent until you connect to the camera using the QHY EZCAP_QT software and reset the TEC protection mode back to normal (off).

    There have been a couple of web reports of the heatsink fan inside the QHY268M/C failing, either through electromechanical faults or insect invasion clogging the fan and stopping it spinning. With the camera cooling running, shine a light inside the vent holes at the rear of the camera so that you can see the fan (deep inside the body) and check that it is spinning.

    Lastly, if you watch the subs arriving and they appear noisy, look at the reported TEC cooler power in your capture program. Most capture programs will indicate the TEC cooler power, and if you increase or decrease the requested TEC temperature you should see the power indication ramp up or down. If you change the requested temperature and the indicated power does not change, that is another clue. The QHY268M/C is not the fastest to respond to changes in requested TEC temperature; you have to wait a minute or two before any change you make is reflected in the reported TEC power level.

    HTH

    William.

    I've only looked at a random selection of the images, but my impression of the noise pattern is that, despite the FITS header reporting a sensor temperature of -4.9C, the cooling is not actually running.

    I suspect that cooling control has "locked up" due to a software or hardware glitch and the -4.9C reported in the FITS header is bogus; the noise pattern has "structure" very similar to what you see when the sensor is uncooled and imaging at ambient temperature.

    The sensor temperature recorded in the FITS headers from my own QHY268M varies by ~ +/- 0.2C during a series, but the temperature recorded in a random selection of yours is the same -4.9C in every frame, which doesn't fit with my experience.

    Possibly the cooling shut down for some reason (low supply voltage protection?) but the camera firmware/driver never reported that back to the acquisition/capture program which continued to report and record the last good temperature reading it received?

    There could be a multitude of other causes; the above is just a guess based on the appearance of the images and a quick evaluation of a random selection of them using the Image Statistics module in PI, which showed that the minimum pixel value in each frame was increasing with each successive image, as you would expect as dark current rises with a warming sensor.
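
    If anyone wants to reproduce that check outside PI, a quick sketch along these lines will print the pixel floor of each sub in file-name order; a steadily rising minimum suggests a warming sensor. The folder name and the CCD-TEMP keyword are assumptions, adjust to suit your files.

        from pathlib import Path
        import numpy as np
        from astropy.io import fits

        # Print the minimum and 1st-percentile pixel value of each sub, together
        # with the reported sensor temperature (if present in the header).
        for path in sorted(Path("lights").glob("*.fits")):   # hypothetical folder
            with fits.open(path) as hdul:
                data = hdul[0].data.astype(np.float32)
                temp = hdul[0].header.get("CCD-TEMP", "n/a")
                print(f"{path.name}  CCD-TEMP={temp}  "
                      f"min={data.min():.0f}  p1={np.percentile(data, 1):.0f}")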

    Interesting problem....

  13. 2 hours ago, DeCosta said:

    The rig is covered with a big weatherproof cover during the day so the camera is getting quite warm.  Then late afternoon I cool to -15° and remove the cover .  So the camera is going from very warm to very cold in about 20 mins.  Looks like I'll be taking the camera back to the UK to try and change the desiccant.

    Keith.

    If you are still in Fuerteventura you can recharge the existing desiccant tablets locally; there is no need to replace them unless they are worn out or you simply want the speed of a straight swap.

    This ZWO document details the steps required to recharge the desiccant tablets using a microwave oven:

    https://astronomy-imaging-camera.com/manuals/How_to_clean_ASI_camera_and_redry_the_desiccants_EN_V1.2.pdf

    All cooled cameras are subject to "breathing". As the sensor is cooled, the air pressure inside the sensor chamber drops below the external air pressure and damp air is drawn into the chamber through any tiny gaps in the seals. When the camera warms again at the end of the session, the pressure inside the chamber rises above the external pressure and air is forced out. The long-established camera manufacturers design their camera bodies with better-specified sealing systems that are more able to withstand these pressure changes (but at a higher cost per camera).

    You can reduce the pressure differential by not cooling so low; the difference in image noise between -15C and -5C is so small with these new CMOS sensors that there is no real need to cool to -15C, and at -5C the chamber pressure difference is reduced. Also, the two-stage TEC cooling in these cameras is so efficient that you can begin cooling much closer to the time you intend to start imaging, when the outside air temperature is lower and the pressure differential across the sensor chamber seals is smaller.

    HTH

    William.

    That looks a bit like condensation/ice on the sensor. Has the camera desiccant been recharged recently?

    Load the images in PixInsight's Blink module and step through the series in acquisition-time order; if this is condensation or ice on the sensor you will see the artefacts appear very small at first and gradually increase in size as the series progresses.

     Noise would be randomly distributed across successive frames but ice artefacts will appear to be static and not shift position as you view the images in time sequence. Condensation droplets will appear static initially until they grow large enough to roll across the sensor under gravity.

  15. Hi Paul.

    Based on the published documentation (that I can find):

    The 11" HD has a coma corrector built into the baffle tube, and Celestron's white paper for the Edge HD series shows that the distance from the top surface of the visual back thread to the sensor should be 146.05 +/- 0.5mm.

    https://celestron-site-support-files.s3.amazonaws.com/support_files/91030-XLT_91040-XLT_91050-XLT_91060-XLT_EdgeHD_OpticalTube_Manual_English_web.pdf

    Here is an extract from the paper linked above showing the BF distance for the Edge HD series with the 11" HD highlighted:

    [attached table extract]

     

    A "rough" back focus (BF) calculation for your current set up gives:

    2" Essato BF= 67mm 

    PL3600212 large SC adaptor BF = 2mm

    PL3600218 M56 (Essato) to T2 (camera) with stop-ring BF (minimum) = 4mm.

    Atik Horizon sensor to T2 distance BF = 13mm

    Total BF used = 86mm

    Additional BF required with the Essato at its minimum (all the way in) position: 146.05mm - 86mm = 60.05mm

    But...

    You would not have the Essato set to the minimum (all the way in) position when calculating the BF distance; it should be at approximately half travel. Likewise, the PL3600218 M56 (Essato) to T2 (camera) stop-ring adaptor would not be at its minimum position either, otherwise you could not use it to rotate/orientate the camera.

    PrimaLuceLab don't state the maximum length the PL3600218 M56/T2 stop-ring adaptor will extend to; from memory I think it is about 10mm, and the thread pitch for T2 is 0.75mm. So, recalculating the distances with the Essato at 50% extension and the M56/T2 stop-ring adaptor at 1.5 turns out (giving you one full turn of the camera plus a little spare):

    2" Essato at 50% of full range (67mm minimum BF +  1/2 range extension of 7.5mm) = 74.5mm.

    PL3600212 large SC adaptor BF = 2mm.

    PL3600218 M56 (Essato) to T2 (camera) with stop-ring, 4mm minimum BF + 1.5 turns out (1.5 turns x 0.75mm = 1.125 mm) BF = 5.125 mm.

    Atik Horizon sensor to T2 distance BF = 13mm

    Total BF used = 94.625mm

    Additional BF extension required = 146.05mm - 94.625 = 51.425mm (see diagram below). 

    Going by the pictures you posted it appears you have too much additional BF distance added.

    If you adjust the Essato position to 50% extension and the PL3600218 M56/T2 adapter to 1.5 turns out then the nominal T2 distance spacer required between the camera and the PL3600218 M56/T2 with stop ring adaptor is 51.425mm.

    Given that the range of focus travel for the 2" Essato is 15mm and the nominal position of the focuser for calculating the BF is 50% extension, you have a small tolerance on the length of extension tube you can use while still being able to adjust focus with the Essato and remain within the Celestron-specified BF for the built-in coma corrector of 146.05 +/- 0.5mm.

    This allows you a reasonable distance of 51.425mm +/- 6mm for your additional T2 spacer (allowing 1.5mm tolerance either side of fully out or fully in of the Essato focuser).

    A T2 spacer of 45.425mm min length to 57.425mm max length between the PL3600218 M56/T2 adapter and the ATIK Horizon is required according to my rough calculation.
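
    If it helps to check or adjust the numbers, here is a quick sketch reproducing the arithmetic above; the figures are simply the ones quoted in this post and should still be verified against the manufacturers' data sheets.

        # Back-focus budget: Edge HD 11" with Essato, PL3600212, PL3600218 and Atik Horizon.
        EDGE_HD11_BF = 146.05                 # mm, Celestron specified distance
        essato_mid   = 67.0 + 15.0 / 2        # Essato minimum BF + half of its 15mm travel
        sc_adaptor   = 2.0                    # PL3600212 large SC adaptor
        stop_ring    = 4.0 + 1.5 * 0.75       # PL3600218 at 1.5 turns out (T2 pitch 0.75mm)
        sensor_to_t2 = 13.0                   # Atik Horizon sensor-to-T2 distance

        used   = essato_mid + sc_adaptor + stop_ring + sensor_to_t2
        spacer = EDGE_HD11_BF - used          # nominal extra T2 spacer required
        # +/- 6mm keeps ~1.5mm of Essato travel in reserve either side of 50% extension
        print(f"nominal spacer {spacer:.3f} mm, usable range "
              f"{spacer - 6:.3f} to {spacer + 6:.3f} mm")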

    You have a little more tolerance on the minimum length of T2 BF spacer if you extend the PL3600218 M56/T2 stop-ring adaptor by a few more turns, but there is no safety stop on that adaptor and if you're not careful you can unscrew it fully and drop the camera.

    The above calculations were done quickly, so please double-check them against the manufacturers' data sheets and manuals.

    IMO the Edge HD is a difficult OTA to fit an external focuser to because of the BF requirement dictated by the internal coma corrector fitted inside the baffle tube. To achieve the best performance the sensor plane must be at the Celestron-specified distance of 146.05mm +/- 0.5mm from the rear surface of the visual back, which means that using an external focuser to move the camera takes you away from the specified BF distance and image quality will be degraded.

    I have no idea how much image performance degrades once you move away from that 146.05mm sensor distance; perhaps somebody who has an Edge HD 11" will add to the discussion. For practical purposes I think you will need to set up the camera sensor as close as possible to that 146.05mm distance with the Essato at 50% extension, use the OTA's main mirror focuser to bring the image to focus, and then only adjust the Essato in/out by a fraction of a mm either way to fine-tune the focus, bearing in mind that doing so moves the sensor away from the ideal BF distance of 146.05mm stipulated in the Celestron documents.

    [Diagram: back-focus spacing for the Essato, adaptors and Atik Horizon]

    HTH

    William.

     

  16. On 28/01/2024 at 08:57, Ouroboros said:

    1: After doing a crop we still need to run Image Solver before using a process like SPCC that needs the WCS*?

    2: The WCS data is stored somewhere in the XISF file.  Can I see that WCS info somehow?

     

     

    Hi @Ouroboros

    Re Q1: Yes, you do need to run the Image Solver script again if you crop a previously solved image and then want to run another process that requires WCS coordinates.

    For example, if you run the Image Solver script on an image and then run DynamicCrop afterwards, you will see a pop-up message "Warning: DynamicCrop: Existing astrometric solution will be deleted", telling you that the stored image coordinates will be deleted if you proceed. If you accept that warning, crop the image and then try to run SPCC, it will fail to start with a message in the Process Console: "Error: The image has no valid astrometric solution: <imagename>".

    Re Q2: There is no user tool or third-party script that I could find that allows you to explore the metadata stored within an XISF file. Maybe there is something in the developer's toolkit, but I have not explored that part of PI for a while, since platform development is now too rapid for me to keep up with and I tend to use other applications for science imaging in preference to PI.

    The only option I am aware of, if you want to read the WCS coordinates in PI, is to run the Image Solver script and then take a desktop snapshot, or manually note the astrometric solution displayed in the Process Console when the script completes.
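
    If you are comfortable with a little scripting, you can at least dump the stored metadata yourself. Below is a minimal Python sketch that reads the XML header of a monolithic .xisf file, assuming it follows the published XISF 1.0 layout (8-byte "XISF0100" signature, 4-byte little-endian header length, 4 reserved bytes, then the UTF-8 XML). The file name is a placeholder and I make no promises about which property names PixInsight uses for its astrometric solution; this just lists whatever is stored.

        import struct
        import xml.etree.ElementTree as ET

        def read_xisf_header(path):
            # Return the XML header of a monolithic XISF file as a string.
            with open(path, "rb") as f:
                if f.read(8) != b"XISF0100":
                    raise ValueError("not a monolithic XISF file")
                (length,) = struct.unpack("<I", f.read(4))
                f.read(4)                                   # reserved bytes
                return f.read(length).decode("utf-8").rstrip("\0 \n")

        root = ET.fromstring(read_xisf_header("solved_image.xisf"))  # placeholder name
        for elem in root.iter():
            tag = elem.tag.split("}")[-1]                   # strip XML namespace
            if tag in ("Property", "FITSKeyword"):
                print(tag, elem.attrib)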

    HTH

  17. 11 hours ago, Ouroboros said:

    There's a weird  little problem I have noticed using Image Solver script with my new MacBook. 

    Image Solver chunks on through and produces a solution in the process console.

    However, what it is not doing is writing the solution into the FiTS header of the file.  

    Weirder still is that SPCC seems quite happy to provide a solution - graphs and everything.  

    How is this possible? It shouldn't work should it?

    From PixInsight build 1.8.9-2, released August 14 2023, PI no longer calculates a WCS solution when plate solving but uses its own spline-based, XISF-compatible solution instead. That is why PI-internal processes that require a plate-solved image work as expected, yet when you export a solved image in FITS format there is no WCS solution in the FITS header, since PI no longer uses the WCS standard for any of its processes.

    Unfortunately this change is another result of PixInsight's well-publicised aim of diverging from the FITS standard for anything other than very basic import/export of FITS image data, and of no longer writing extended metadata to FITS files.

    For full details see the release notes for build 1.8.9-2, section “New Astrometric Solutions - Image Solver Script version 6.0”

    https://pixinsight.com/forum/index.php?threads/pixinsight-1-8-9-2-released.21228/

    For those of us still needing to interact with other astronomy applications outside the PixInsight environment, the output FITS file from PixInsight has to be processed through a different vendor's application if a WCS solution is required in the FITS header. There is some validity, though, in the argument that since you can't be certain the WCS solution written in any FITS image received from a third party is 100% reliable, you should always ignore it and re-calculate a WCS solution anyway.
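
    A quick way to confirm whether an exported FITS actually carries a celestial WCS before handing it on to other software is a few lines of astropy; this is only a sketch and the file name is a placeholder.

        from astropy.io import fits
        from astropy.wcs import WCS

        with fits.open("exported_from_pi.fits") as hdul:    # placeholder file name
            wcs = WCS(hdul[0].header)
        if wcs.has_celestial:
            print("Celestial WCS present:", wcs.wcs.ctype, "centre:", wcs.wcs.crval)
        else:
            print("No celestial WCS in this header - re-solve before using it elsewhere.")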

  18. Hi Gordon.

    Sounds as though you are sorted now but if you need extra rods, couplers, tapes (or other electrical parts) etc then try this French supplier:

    https://www.123elec.com/gamme-materiel-electrique/mise-a-la-terre.html
     

    You can search the web using the term “Piquet de terre cuivré” (Copper earth stake) which should find other France based suppliers.

    I never thought of using an SDS hammer drill to push the rods in myself; being rather old school I used a 14lb sledge and a steel bolt screwed into the rod coupler threads as a load spreader.
    The terrain at my UK observatory was stony river delta and it took a good hour of swinging the sledge to drive 2m rods into the ground in 1m sections, with frequent pauses to re-tighten the coupling between the first and second rod sections, as they tend to unscrew themselves under the shock of being hammered in.

    William.

  19. 25 minutes ago, aleixandrus said:

    Could you please share some tips to check for flat linearity? I stick with the same parameters as my usual light subs (gain 111, offset 8, temperature -5º) while keeping the histogram close to recommended ADU value by ZWO (adjusting brightness for 4-10s exposures depending on the filter).

    The camera parameters for flats should be the same as for lights (gain, offset and temperature), and the target ADU should be the same for sky flats and panel flats so that you can directly compare the two. You can't adjust the sky brightness when taking sky flats, though; you can only adjust the exposure time (or add neutral-density absorbers to the beam path).

    If you open a sky flat, un-stretched but bias-calibrated, and use the cursor readout mode of your chosen image processing application to read the ADU value at selected points across the image, say the four corners (but inside any cut-off caused by undersized filters etc.) and one point in the centre, you can calculate the approximate gradient across the image in percentage terms.

    Carry out the same procedure for a flat taken with your LED panel.

    If the measured ADU values for the sky flat show an even illumination across the frame but the panel flat shows a distinct gradient in ADU terms then you'll know that the panel is to blame but as mentioned above, if you rotate and move the panel between flat subs then the unevenness in illumination will be averaged out in the stacked master.
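
    If you prefer to script the measurement rather than reading points off with the cursor, a rough sketch like the one below does the same job on a bias-calibrated, un-stretched mono flat saved as FITS (the file name, patch size and corner inset are all just illustrative assumptions).

        import numpy as np
        from astropy.io import fits

        def patch_median(data, y, x, size=50):
            # Median of a small patch so a single hot pixel doesn't skew the reading.
            h = size // 2
            return float(np.median(data[max(0, y - h):y + h, max(0, x - h):x + h]))

        def flat_gradient(path, inset=100):
            data = fits.getdata(path).astype(np.float32)
            rows, cols = data.shape
            points = {"centre":       (rows // 2, cols // 2),
                      "top-left":     (inset, inset),
                      "top-right":    (inset, cols - inset),
                      "bottom-left":  (rows - inset, inset),
                      "bottom-right": (rows - inset, cols - inset)}
            adu = {name: patch_median(data, y, x) for name, (y, x) in points.items()}
            spread = 100.0 * (max(adu.values()) - min(adu.values())) / adu["centre"]
            return adu, spread

        adu, spread = flat_gradient("panel_flat.fits")      # placeholder file name
        print(adu, f"spread ~{spread:.1f}% of centre ADU")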

    Depending on your choice of post-processing software you may already have the tools included to allow you to directly evaluate a flat frame without having to manually measure the ADU at specific points across sample panel flats and sky flats.

    Below is an example flat frame (un-stretched, from a 100mm f5/6 refractor equipped with a rotator, image distributor, photometer and spectrometer) and its corresponding flat profile as measured in PixInsight, showing a collimation issue where the heavy (~5kg) image distributor and spectrometer/photometer mounted on this system is pulling the rotator out of alignment. The important things to note are that the flat shows a variation of ~8.82% in ADU from the central beam out to the edges, that each contour line represents 0.5% of the total ADU range in this image, and that overall there is minimal non-linearity across the whole frame. The light source for this flat was an electroluminescent cover/calibrator panel, not LED.

    Raw flat, un-stretched:

    [attached image]

    Flat profile (measured in PixInsight):

    [attached image]

  20. 1 hour ago, Elp said:

    What's the issue exactly? You shouldn't be moving the light source when taking flats they all have to be taken under exactly the same condition as each other.

    This is not correct, it would only apply if the light source is perfectly homogenous across the full width of the illuminated surface, which the great majority of LED panels used in amateur class astrophotography are not.

    The recommended method for creating a master flat with an uneven illumination source is to move/rotate the panel randomly between each sub exposure, or every x number of subs, so that when combined small differences in single subs due to uneven illumination are averaged out in the master flat.

    One issue that crops up with old TFT tablets used as a flats source is that the output light is strongly polarised, which can in itself cause gradients in flats and random movement/rotation of the tablet between flats subs is therefore necessary.

    In Aleixandrus case he states that he sees the same gradient in flats created with his new led panel, and an old tablet, but we don't know if he is calibrating the flats before examining them for linearity, which is important since any fixed bias gradient in the sensor will show in the flats when stretched.

    The gold standard for resolving flats issues is to compare artificial panel flats to pre-dawn or post-sunset sky flats taken with a stationary mount (tracking switched off) pointing approximately 30 degrees above the horizon of the anti-solar point, and no other diffusers in the path. If the calibrated sky flats also show the same gradient then you can rule out the panel as being entirely to blame.

  21. On 25/11/2023 at 21:27, Tomservo said:

     

    I had it very roughly focussed on the moon by holding the Eye piece out from the mount by about 10mm.

    13 hours ago, Tomservo said:

     

    Then Moon was almost in focus but then its a lot closer and bigger than the stars. I'm beginning to get concerns about the mirror distances now but I don't know enough about that aspect yet,

    Having to hold the eyepiece out from the mount, as you described, is expected.
    To use the telescope visually you would normally attach a diagonal and insert the eyepiece into that, which brings the eyepiece out by roughly another 20mm, and you would then have to rack the focuser inwards a little to reach the focal plane.

    The distance you needed to pull the eyepiece out of the focuser to reach the focal plane tells you where the camera sensor needs to be, as both the eyepiece and the camera reach focus at the same distance. When setting up the back-focus spacers for an imaging setup, a quick way to find the additional spacing needed is to rack the focuser to the half-out position and, starting with the eyepiece fully inserted in the eyepiece holder, gradually slide the eyepiece out into free air until you reach visual focus. Then (easiest with the help of a second person) measure the distance between the field stop of the eyepiece and the back of the eyepiece holder on the telescope; that distance is (roughly) the amount of additional spacing you need to add so that the camera reaches focus approximately in the middle of the focuser's travel range.

    Picking up on your comment about the Moon being closer than the stars: so far as focussing a telescope is concerned they are both at infinity and will both reach focus at the same point.

    As mentioned in an earlier reply, with the RC telescope design the distance of the focal plane from the back of the telescope body depends strongly on the separation between the primary and secondary mirrors, and a very small change in that separation has a big effect on where the focal plane sits behind the OTA. Although the distance on your particular telescope appears excessive, there could be some variation in the manufacturing tolerances of the mirrors that has resulted in an unusually long focal-plane distance for your particular instrument.

    On the other hand, it could be that your instrument left the factory assembly line mis-collimated, or had a particularly rough journey, and that has pushed the collimation and back-focus out of tolerance.
    Whatever the reason, it is quite easy to put right with no specialised tools needed, although it's rather disappointing to hear that your supplier has provided no helpful advice or pathway to resolve the problem.

    I don't know your particular model of telescope, but for several of the bigger RCs the standard back-focus spacers (of a few years ago) were calculated to allow an OAG, filter wheel and flattener, as well as the camera, to reach prime focus. When I briefly owned an RC8 around twelve to fifteen years ago I needed an extra 25mm spacer in addition to the three spacers supplied with the OTA, and that was using a camera, OAG, filter wheel and flattener (although my camera had an integrated filter wheel and OAG and so had a smaller back-focus requirement than if all those components had been individual elements bolted together).

    A final tip: many beginners to imaging struggle with focussing a camera because they forget that as the camera sensor gets closer to the focal plane the photons from the object being focussed (Moon, stars, whatever) fall on fewer and fewer pixels, which saturates them, and this is particularly noticeable with big, bright objects such as the Moon and planets. As you move the focuser and the object appears to shrink on-screen and grow brighter, you must also reduce the camera exposure time to prevent the pixels saturating, otherwise you'll never be able to tell whether you have really reached focus.


    When the camera exposure time is too long, or the camera gain is set too high and the pixels are saturated, you could move the camera all the way from an intra-focal position, through prime focus, and out to an extra-focal position without ever seeing a significant change on the monitor. You have to continually reduce the exposure time, and if necessary the camera gain, as you bring the camera into focus, trying to keep the image brightness constant.
     

    You will find that when visually focussing a camera the image has to appear quite dim on the monitor to have any chance of detecting the prime-focus position. That is not easy to judge if the monitor has other bright graphic elements in the same field of view, because your eyes naturally adjust to those bright objects and the tendency is to keep the object you are focussing on at roughly the same brightness as those other elements, which will almost certainly lead to pixel saturation, and then you won't be able to tell when the object is really in focus.

    It’s a bit of a juggling act at first, adjust focus to reduce the apparent size of the object on-screen while at the same time reducing the exposure time, or camera gain setting, to keep the object brightness under control so that the camera pixels are not saturated.

    Once you have the manual focus technique mastered, and the prime-focus distance accurately determined, you'll be able to use autofocus instead of focussing manually, which is a whole lot easier. It is important to master manual focussing first, though, to understand how things should work, so you'll know how to resolve any issues that may arise with autofocus.

    HTH.

    William.

  22. 2 hours ago, Elp said:

    If you haven't overwritten the data on the SD card with new data you can normally recover it with Recuva.

    Another recovery tip...

    If you haven't rebooted the host computer yet, the original files may still be on the clipboard: press CTRL + V in a folder window to paste the clipboard contents there, or Windows Key + V to view the clipboard history in Windows 10/11 (note: only if Clipboard History is enabled under System settings > Clipboard; it now defaults to disabled for security reasons in Win 10/11).

    If the original files are there you can rewrite them to another location, or restore them to the original location. If the Clipboard is empty the original files may still be recoverable from the hard drive, provided that the space they were occupying has not been overwritten.

    I've not used Recuva so I can't comment on whether it can recover deleted files from the OS hard drive, but there are several other apps that can, some free, some paid-for, and the paid-for apps often come with a limited-time free trial.

    Lastly, some post-processing apps don't care about certain types of FITS file corruption so it might be worth seeing if you can open the file(s) in something else and then re-save with a new name to a different location.

    If you post one of the faulty FITS files here, or on a cloud drive somewhere and provide a public share link, maybe other forum users will test it to see if their post-processing app can open the file.

    William.

    That you have both corrupted files written to the SD card and a file path to the SD card that can't be permanently set in the APT config suggests there is possibly something wrong with the card or its formatting.

    A first step would be to upload a specific file from the capture PC to the NASA on-line fits file checker site >link< and verify that the source file is ok, then repeat with the same file uploaded from the SD card.

    If the source file from the mini PC passes the verifier test but the same file from the SD card fails, then you'll know that the problem lies with the SD card, or with the way files are being transferred from the mini PC to the SD card.
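
    If Python happens to be available, astropy can run a similar structural check locally instead of the NASA web page; a minimal sketch (both file names are placeholders):

        from astropy.io import fits

        for name in ("capture_on_minipc.fits", "same_file_from_sd_card.fits"):
            try:
                with fits.open(name) as hdul:
                    hdul.verify("exception")       # raise on any standard violation
                print(name, "passes the structural check")
            except Exception as err:
                print(name, "FAILED:", err)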

    If the SD card does appear suspect, try re-formatting it on a different PC from the one you used last time, and remember to eject the card in the OS before pulling it out of the slot; not doing so is usually the reason SD cards become corrupted.

    (As a Mac user myself I have to use the exFAT format on SD cards, or disks, when transferring files from a Windows PC to a Mac, as Macs don't support writing to NTFS.)

    Lastly, if both the source file on the mini PC and the copy on the SD card fail NASA's online FITS tester, then depending on the type of corruption the source file has experienced, you might be able to repair it via scripting, as in this specific example from the PixInsight forum >Link<.

    HTH.

    William.
