Everything posted by Oddsocks

  1. Ciarán, a couple of things to check. The camera power input socket on the QHY268M/C is 5.5/2.1mm. If you are using an extension cable on the QHY power lead, make sure it really is 5.5/2.1mm female to 5.5/2.1mm male and not 5.5/2.5mm to 5.5/2.5mm, or some other combination. I've had a few power extension cables, even from reputable suppliers, that were marked 5.5/2.1mm but were in fact 5.5/2.5mm, and they caused intermittent power-supply dropout or low-voltage issues with the TEC. According to the QHY manual for the 268M/C there is a UVLO protection device inside the camera: if it is triggered by low input voltage (below 11V DC at the camera), the TEC is set to a low-power mode (max 70% TEC power) which is not reset by powering off and on. The low-power mode is permanent until you connect to the camera with the QHY EZCAP_QT software and reset the TEC protection mode back to normal (off). There have been a couple of web reports of the heatsink fan inside the QHY268M/C failing, either through electromechanical faults or insect invasion clogging the fan and stopping it spinning. With the camera cooling running, shine a light inside the vent holes at the rear of the camera so that you can see the fan (deep inside the body) and check that it is spinning. Lastly, if you watch the subs arriving and they appear noisy, look at the reported TEC cooler power in your capture program. Most capture programs will indicate the TEC cooler power, and if you increase or decrease the requested TEC temperature you should see the power indication ramp up or down. If you change the requested temperature and the indicated power does not change, that is another clue. The QHY268M/C is not the fastest to respond to changes in requested TEC temperature; you have to wait a minute or two before any change you make to the requested temperature is reflected in the reported TEC power level. HTH, William.
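A quick way to sanity-check the TEC response described above is to compare the reported cooler power before and after a set-point change. The sketch below is purely illustrative (the function name and the 5% threshold are my own choices, not from any QHY SDK or capture program); feed it the percent readings you note down from your capture program:

```python
def tec_responds(power_before, power_after, min_delta=5.0):
    """Return True if the reported TEC power (in percent) changed
    meaningfully after the requested set-point was raised or lowered.
    power_before / power_after: lists of readings taken a minute or
    two apart, since the QHY268 responds slowly to set-point changes."""
    avg = lambda xs: sum(xs) / len(xs)
    return abs(avg(power_after) - avg(power_before)) >= min_delta
```

If the averaged power barely moves after a deliberate set-point change, that points at the stuck low-power protection mode described above.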
  2. I only looked at a random selection of the images, but my impression of the noise pattern is that, despite the FITS header reporting a sensor temperature of -4.9c, the cooling is not actually running. I suspect that cooling control has "locked up" due to a software or hardware glitch and the -4.9c reported in the FITS header is bogus; the noise pattern has "structure" very similar to what you see when the sensor is uncooled and imaging at ambient temperature. The sensor temperature recorded in the FITS headers from my own QHY268M varies by ~ +/- 0.2c during a series, but a random selection of yours all show exactly the same -4.9c, which doesn't fit with my experience. Possibly the cooling shut down for some reason (low supply voltage protection?) but the camera firmware/driver never reported that back to the acquisition/capture program, which continued to report and record the last good temperature reading it received. There could be a multitude of other causes; the above is just a guess based on the appearance of the images and a quick evaluation of a random selection of them using the Image Statistics module in PI, which showed that the minimum pixel value in each frame was increasing with each successive image, as you would expect when dark current rises with a warming sensor. Interesting problem....
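The two symptoms described above (a frozen header temperature plus a per-frame minimum pixel value that climbs image after image) can be checked mechanically. This is only an illustrative heuristic with made-up names and a made-up tolerance, operating on values you would read out of the FITS headers and an image-statistics tool:

```python
def looks_uncooled(header_temps, frame_min_adu, tol=0.05):
    """Heuristic: header temperature is frozen (spread ~0 across the
    series) while the per-frame minimum ADU rises steadily ->
    the cooling is probably not actually running."""
    temp_frozen = max(header_temps) - min(header_temps) <= tol
    rising = all(b > a for a, b in zip(frame_min_adu, frame_min_adu[1:]))
    return temp_frozen and rising
```

A healthy camera shows small temperature jitter (~ +/- 0.2c) and a roughly stable frame minimum, so the heuristic returns False.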
  3. Keith, if you are still in Fuerteventura you can recharge the existing desiccant tablets locally; there is no need to replace them unless they are worn out, or for speed of replacement. This ZWO document details the steps required to recharge the desiccant tablets using a microwave oven: https://astronomy-imaging-camera.com/manuals/How_to_clean_ASI_camera_and_redry_the_desiccants_EN_V1.2.pdf All cooled cameras are subject to "breathing" pressure. As the sensor is cooled, the air pressure inside the sensor chamber drops below the external air pressure, and damp air is drawn into the chamber through any tiny gaps in the seals. When the camera warms again at the end of the session, the pressure inside the chamber rises above the external pressure and air is forced out. The long-established camera manufacturers design their camera bodies with better-specified sealing systems that are more able to withstand the changes in pressure (but at a higher cost per camera). You can reduce the pressure differential by not cooling so low: the difference in image noise between -15c and -5c is so small with these new CMOS sensors that there is no real need to cool to -15c, and if you cool to only -5c the chamber pressure difference is reduced. Also, the 2-stage TEC cooling on these cameras is very efficient, so you can begin cooling much closer to the time you intend to begin imaging, when the outside air temperature is lower and the pressure differential across the sensor chamber seals is reduced. HTH, William.
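The "breathing" pressure differential can be estimated with the ideal gas law for a fixed-volume chamber: P2 = P1 x T2/T1, temperatures in kelvin. A minimal sketch, assuming the chamber seals at ambient temperature and that the air inside cools roughly to the set-point (a simplification; the real internal air temperature sits somewhere between sensor and body temperature):

```python
def chamber_pressure(p_ambient_hpa, t_seal_c, t_cold_c):
    """Ideal-gas estimate of sealed-chamber pressure after cooling:
    P2 = P1 * T2/T1 at fixed volume, temperatures in kelvin."""
    return p_ambient_hpa * (t_cold_c + 273.15) / (t_seal_c + 273.15)
```

For a chamber sealed at 20c and cooled to -15c at 1013hPa ambient, the internal pressure drops to roughly 892hPa, versus about 927hPa when cooling only to -5c, which is why the shallower set-point stresses the seals less.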
  4. Looks a bit like condensation/ice on the sensor. Has the camera desiccant been recharged recently? Load the images in PixInsight's Blink module and step through the series in acquisition-time order: if this is condensation or ice on the sensor, you will see the artefacts initially appear very small and gradually increase in size as the series progresses. Noise would be randomly distributed across successive frames, but ice artefacts will appear static and will not shift position as you view the images in time sequence. Condensation droplets will appear static at first, until they grow large enough to roll across the sensor under gravity.
  5. Hi Paul. On the published documentation (that I can find): the 11" HD has a corrector built into the baffle tube, and Celestron's white paper for the Edge HD series shows that the distance from the top surface of the visual back thread to the sensor should be 146.05 +/- 0.5mm. https://celestron-site-support-files.s3.amazonaws.com/support_files/91030-XLT_91040-XLT_91050-XLT_91060-XLT_EdgeHD_OpticalTube_Manual_English_web.pdf Here is an extract from the paper linked above showing the BF distance for the Edge HD series, with the 11" HD highlighted:

A "rough" back focus (BF) calculation for your current set-up gives:

2" Esatto BF = 67mm
PL3600212 large SC adaptor BF = 2mm
PL3600218 M56 (Esatto) to T2 (camera) with stop-ring, BF (minimum) = 4mm
Atik Horizon sensor to T2 distance BF = 13mm
Total BF used = 86mm
Additional BF required with the Esatto at the minimum (all the way in) position: 146.05mm - 86mm = 60.05mm

But... you would not have the Esatto set to the minimum (all the way in) position when calculating the BF distance; it should be approximately half way. Likewise, the PL3600218 M56 (Esatto) to T2 (camera) stop-ring adaptor would not be at its minimum position either, otherwise you could not use it to rotate/orientate the camera. PrimaLuceLab don't state the maximum length that the PL3600218 M56/T2 stop-ring adaptor will adjust to; from memory I think it is about 10mm, but the thread pitch for T2 is 0.75mm. So, recalculating the distances with the Esatto at 50% extension and the M56/T2 stop-ring adaptor at 1.5 turns out (to give you one full turn of camera rotation plus a little spare) gives:

2" Esatto at 50% of full range (67mm minimum BF + half-range extension of 7.5mm) = 74.5mm
PL3600212 large SC adaptor BF = 2mm
PL3600218 M56 (Esatto) to T2 (camera) with stop-ring, 4mm minimum BF + 1.5 turns out (1.5 turns x 0.75mm = 1.125mm) = 5.125mm
Atik Horizon sensor to T2 distance BF = 13mm
Total BF used = 94.625mm
Additional BF extension required = 146.05mm - 94.625mm = 51.425mm (see diagram below)

Going by the pictures you posted, it appears you have too much additional BF distance added. If you set the Esatto to 50% extension and the PL3600218 M56/T2 adaptor to 1.5 turns out, then the nominal T2 spacer required between the camera and the PL3600218 M56/T2 stop-ring adaptor is 51.425mm. Given that the focus travel of the 2" Esatto is 15mm and the nominal position of the focuser for calculating the BF is at 50% extension, you have a small tolerance on the length of extension tube you can use while still being able to reach focus with the Esatto and staying within the Celestron-specified BF of 146.05 +/- 0.5mm for the built-in corrector. This allows a reasonable range of 51.425mm +/- 6mm for your additional T2 spacer (allowing 1.5mm tolerance either side of fully out or fully in on the Esatto focuser). So a T2 spacer between 45.425mm minimum and 57.425mm maximum length is required between the PL3600218 M56/T2 adaptor and the Atik Horizon, according to my rough calculation. You have a little more tolerance on the minimum spacer length if you extend the PL3600218 M56/T2 stop-ring adaptor by a few more turns, but there is no safety stop on that adaptor and, if you are not careful, you can unscrew it fully and drop the camera. The above calculations are quickly done and you should double-check them against the manufacturers' data sheets and manuals. IMO the Edge HD is a difficult OTA to fit an external focuser to because of the BF requirement dictated by the corrector fitted inside the baffle tube.

To achieve the best performance the sensor plane must be at the Celestron-specified distance of 146.05mm +/- 0.5mm from the rear surface of the visual back, which means that using an external focuser to move the camera moves you away from the specified BF distance and image quality will be degraded. I have no idea how much performance degrades as you move away from that 146.05mm distance; perhaps somebody who has an Edge HD 11" will add to the discussion. For practical purposes, I think you will need to set up the camera sensor as close as possible to that 146.05mm distance with the Esatto at 50% extension, then use the OTA's main mirror focuser to bring the image to focus, and only adjust the Esatto in/out by a fraction of a mm either way to fine-tune the focus, bearing in mind that doing so moves the sensor away from the ideal BF distance of 146.05mm stipulated in the Celestron documents. HTH, William.
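The back-focus arithmetic above is easy to re-check if you wrap it in a small function. This is just the post's own figures restated in code (the function and parameter names are mine); every constant must still be verified against the Celestron, PrimaLuceLab and Atik documentation before cutting any spacers:

```python
def required_t2_spacer(esatto_extension_mm=7.5, stopring_turns=1.5):
    """Rough Edge HD 11 back-focus budget: returns the additional T2
    spacer length (mm) needed between the M56/T2 stop-ring adaptor and
    the camera. All constants are taken from the worked example above."""
    EDGE_HD_BF = 146.05                      # Celestron spec: visual back to sensor (mm)
    esatto = 67.0 + esatto_extension_mm      # 2" Esatto: 67mm minimum BF plus drawtube extension
    sc_adaptor = 2.0                         # PL3600212 large SC adaptor
    stop_ring = 4.0 + stopring_turns * 0.75  # M56/T2 stop-ring: 4mm minimum, T2 pitch 0.75mm/turn
    camera = 13.0                            # Atik Horizon sensor-to-flange distance
    return EDGE_HD_BF - (esatto + sc_adaptor + stop_ring + camera)
```

With the defaults (Esatto at 50% extension, stop-ring 1.5 turns out) it returns the 51.425mm spacer figure; with both at minimum it returns the 60.05mm figure.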
  6. Hi @Ouroboros. Re Q1: yes, you do need to run the Image Solver script again if you crop a previously solved image and then run another process that requires WCS coordinates. For example, if you run the Image Solver script on an image and then run DynamicCrop afterwards, you will see a pop-up message "Warning: DynamicCrop: Existing astrometric solution will be deleted", telling you that the stored image coordinates will be deleted if you proceed to crop the image. If you accept that warning, crop the image, and then try to run SPCC, it will fail to start with a message in the Process Console: "Error: The image has no valid astrometric solution: <imagename>". Re Q2: there is no user tool or third-party script that I could find that allows you to explore the metadata stored within an XISF file. Maybe there is something in the developer's toolkit, but I have not explored that part of PI for a while, since platform development is now too rapid for me to keep up with and I tend to use other applications for science imaging in preference to PI. The only option I am aware of, if you want to read the WCS coordinates in PI, is to run the Image Solver script and then take a screenshot, or note down the astrometric solution displayed in the Process Console when the Image Solver script completes. HTH
  7. From PixInsight build 1.8.9-2, released August 14 2023, PI no longer calculates a WCS solution when plate solving but uses its own spline-based, XISF-compatible solution instead. This is why PI's internal processes that require a plate-solved solution work as expected, but when you export a solved image in FITS format there is no WCS solution in the FITS header: PI no longer uses the WCS standard for any of its processes. Unfortunately this change is another result of PixInsight's well-publicised aim of diverging from the FITS standard for anything other than very basic import/export of FITS image data, and of no longer supporting writing extended metadata to FITS files. For full details see the release notes for build 1.8.9-2, section "New Astrometric Solutions - Image Solver Script version 6.0": https://pixinsight.com/forum/index.php?threads/pixinsight-1-8-9-2-released.21228/ For those of us still needing to interact with other astronomy applications outside the PixInsight environment, the output FITS file from PixInsight has to be processed through a different vendor's application if a WCS solution is required in the FITS header. That said, there is some validity in the argument that since you can't be certain the WCS solution written in any FITS image received from a third party is 100% reliable, you should always ignore it and re-calculate a WCS solution anyway.
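If you need to check whether a FITS file exported from any application actually carries a standard WCS solution, testing for the core WCS keywords is enough for a quick yes/no. A minimal sketch over any dict-like header mapping (for real files you would first load the header with a FITS library; the function name is mine):

```python
def has_wcs_solution(header):
    """True if a FITS header (any dict-like mapping of keyword -> value)
    carries the minimum keywords of a standard WCS plate solution."""
    required = ("CTYPE1", "CTYPE2", "CRVAL1", "CRVAL2", "CRPIX1", "CRPIX2")
    return all(k in header for k in required)
```

A file exported from a tool that dropped the WCS keywords will fail this check even though the image itself solved fine inside that tool.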
  8. Hi Gordon. Sounds as though you are sorted now, but if you need extra rods, couplers, tapes (or other electrical parts) then try this French supplier: https://www.123elec.com/gamme-materiel-electrique/mise-a-la-terre.html You can search the web using the term "Piquet de terre cuivré" (copper earth stake), which should find other France-based suppliers. I never thought of using an SDS hammer drill to push the rods in myself; being rather old-school, I used a 14lb sledge and a steel bolt screwed into the rod coupler threads as a load spreader. The terrain at my UK observatory was stony river delta and it took a good hour of swinging the sledge to drive 2m rods into the ground in 1m sections, with frequent pauses to re-tighten the coupling between the first and second rod sections, as they tend to unscrew themselves under the shock of being hammered into the ground. William.
  9. The camera parameters for flats should be just the same as for lights: gain, offset and temperature. The target ADU for the flats should be the same for sky flats and panel flats so that you can directly compare the two, although you can't adjust the sky brightness when taking sky flats, only the exposure time (or add neutral-density absorbers to the beam path). If you open a sky flat, un-stretched but bias-calibrated, and use the cursor read-out mode of your chosen image-processing application to read the ADU value at selected points across the image, say the four corners (but inside any cut-off caused by undersized filters etc.) and one point in the centre, then you can calculate the approximate gradient across the image in percentage terms. Carry out the same procedure for your LED panel flat and the sky flat. If the measured ADU values for the sky flat show even illumination across the frame but the panel flat shows a distinct gradient, then you'll know that the panel is to blame; but, as mentioned above, if you rotate and move the panel between flat subs, the unevenness in illumination will be averaged out in the stacked master. Depending on your choice of post-processing software, you may already have tools included that let you directly evaluate a flat frame without having to manually measure the ADU at specific points across sample panel flats and sky flats. Below is an example flat frame (un-stretched, from a 100mm f/5.6 refractor equipped with a rotator, image distributor, photometer and spectrometer) and its corresponding flat profile, as measured in PixInsight, showing a collimation issue where the heavy (~5kg) image distributor and spectrometer/photometer mounted on this system is pulling the rotator out of alignment.
The important thing to note is that the flat shows a variation of ~8.82% in ADU from the central beam out to the edges, and each contour line represents 0.5% of the total ADU range in this image, but overall there is minimal non-linearity across the whole frame. The light source for this flat was an electroluminescent cover/calibrator panel, not LED. Raw flat, un-stretched: Flat profile (measured in PixInsight):
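The corner-plus-centre measurement described above is easy to script. An illustrative sketch (the names and the inset margin are my own choices) over a flat held as a list of pixel rows:

```python
def flat_gradient_percent(img, margin=10):
    """Sample the four corners (inset by `margin` pixels, to stay inside
    any filter cut-off) and the centre of a 2-D flat (list of rows of ADU
    values) and return the spread as a percentage of the centre ADU."""
    h, w = len(img), len(img[0])
    pts = [img[margin][margin], img[margin][w - 1 - margin],
           img[h - 1 - margin][margin], img[h - 1 - margin][w - 1 - margin]]
    pts.append(img[h // 2][w // 2])
    return 100.0 * (max(pts) - min(pts)) / img[h // 2][w // 2]
```

Run it on a panel flat and a sky flat at the same target ADU: a perfectly even flat returns 0, and a noticeably larger figure for the panel flat than the sky flat points at the panel.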
  10. This is not correct; it would only apply if the light source were perfectly homogeneous across the full width of the illuminated surface, which the great majority of LED panels used in amateur-class astrophotography are not. The recommended method for creating a master flat with an uneven illumination source is to move/rotate the panel randomly between each sub exposure, or every x number of subs, so that when combined, the small differences in individual subs due to uneven illumination are averaged out in the master flat. One issue that crops up with old TFT tablets used as a flats source is that the output light is strongly polarised, which can itself cause gradients in flats, so random movement/rotation of the tablet between flat subs is also necessary. In Aleixandru's case, he states that he sees the same gradient in flats created with his new LED panel and with an old tablet, but we don't know if he is calibrating the flats before examining them for linearity, which is important since any fixed bias gradient in the sensor will show in the flats when stretched. The gold standard for resolving flats issues is to compare artificial panel flats to pre-dawn or post-sunset sky flats taken with a stationary mount (tracking switched off) pointing approximately 30 degrees above the horizon at the anti-solar point, with no other diffusers in the path. If the calibrated sky flats show the same gradient, then you can rule out the panel as being entirely to blame.
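The averaging-out effect of rotating the panel between subs can be demonstrated with a toy one-dimensional flat: a 180-degree rotation is just a reversal of the profile, and averaging as-is and reversed copies cancels a linear gradient exactly. A purely illustrative sketch:

```python
def average_with_rotation(flat_profile, n_subs=10):
    """Simulate stacking flat subs while rotating the panel 180 degrees
    between exposures: even-numbered subs use the 1-D illumination
    profile as-is, odd-numbered subs use it reversed, then all the subs
    are averaged position by position."""
    subs = [flat_profile if i % 2 == 0 else flat_profile[::-1]
            for i in range(n_subs)]
    return [sum(col) / n_subs for col in zip(*subs)]
```

Averaging ten subs of the gradient profile [0.9, 0.95, 1.0, 1.05, 1.1], half of them rotated, returns 1.0 at every position; a real panel gradient is two-dimensional and not perfectly linear, but the principle is the same.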
  11. Having to hold the eyepiece out from the mount, as you described, is expected. To use the telescope visually you would normally attach a diagonal and insert the eyepiece into that, which brings the eyepiece out by around another 20mm or so, and you would have to rack the focuser inwards a little to reach the focal plane from there. The distance you needed to pull the eyepiece out of the focuser to reach the focal plane tells you where the camera sensor needs to be, as both the eyepiece and camera will reach focus at the same distance. When setting up the back-focus spacers for an imaging set-up, a quick method to find the additional spacers needed is this: rack the focuser to the half-out position and, starting with the eyepiece fully inserted in the eyepiece holder, gradually slide the eyepiece out, into free air, until you reach visual focus. Then (easiest with the help of a second person) measure the distance between the field stop of the eyepiece and the back of the eyepiece holder on the telescope; that distance is (roughly) the amount of additional spacing you need to add so that the camera reaches focus approximately in the middle of the focuser's travel range. Picking up on your comment about the Moon being closer than the stars: so far as focusing a telescope is concerned, they are both at infinity and will both reach focus at the same point. As mentioned in an earlier reply, with the RC telescope design the distance of the focal plane from the back of the telescope body depends greatly on the separation between the primary and secondary mirrors, and a very small change in primary-secondary distance has a big effect on the distance of the focal plane from the back of the OTA. Although the distance on your particular telescope appears excessive, there could be some variation in the manufacturing tolerances of the mirrors that has resulted in an unusually long focal-plane distance for your particular instrument.
On the other hand, it could be that your instrument left the factory assembly line mis-collimated, or had a particularly rough journey, and that has pushed the collimation and back-focus out of tolerance. Whatever the reason, it is quite easy to put right with no specialised tools needed, although it's rather disappointing to hear that your supplier has provided no helpful advice or pathway to resolve the problem. I don't know your particular model of telescope, but the standard back-focus spacers of several of the bigger RCs (of a few years ago) were calculated to allow for the inclusion of an OAG, filter wheel and flattener as well as the camera at prime focus. When I briefly owned an RC8, around twelve to fifteen years ago, I needed an extra 25mm spacer in addition to the three spacers supplied with the OTA, and that was using a camera, OAG, filter wheel and flattener (although my camera had an integrated filter wheel and OAG, so it had a smaller back-focus requirement than if all those components were individual elements bolted together). A final tip: many beginners to imaging struggle with focusing a camera because they forget that as the camera sensor gets closer to the focal plane, the photons from the object being focused, Moon, stars, whatever, fall on fewer and fewer pixels, which saturates them, and this is particularly noticeable with big bright objects such as the Moon and planets. As you move the focuser position and the focused object appears to shrink on-screen and grow brighter, you must also reduce the camera exposure time to prevent the pixels saturating, otherwise you'll never be able to tell whether you have really reached focus. When the camera exposure time is too long, or the camera gain is set too high, and the pixels are saturated, you could move the camera all the way from an intra-focal position, through prime focus, and out to an extra-focal position and never see a significant change on the monitor.
You have to continually reduce the exposure time, and if necessary the camera gain, as you bring the camera into focus, trying to keep the image brightness constant. You will find that when visually focusing a camera the image has to appear quite dim on the monitor to have any chance of detecting the prime-focus position. This is not easy to judge if the monitor has other bright graphic elements in the same field of view, because your eyes naturally adjust to those bright objects and the tendency is to keep the object you are focusing on at roughly the same brightness as those other elements, which will almost certainly lead to pixel saturation, and then you won't be able to tell when the object is really in focus. It's a bit of a juggling act at first: adjust focus to reduce the apparent size of the object on-screen while at the same time reducing the exposure time, or camera gain setting, to keep the object's brightness under control so that the camera pixels are not saturated. Once you have the manual focus technique mastered, and the prime-focus distance accurately determined, you'll be able to use autofocus instead of focusing manually, which is a whole lot easier, but it is important to master manual focusing first to understand how things should work, so you'll know how to resolve any issues that may arise with autofocus. HTH. William.
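The exposure juggling follows from simple geometry: the same light spread over a blur disk lands on an area proportional to the diameter squared, so halving the blur diameter quadruples the per-pixel brightness. A sketch of that scaling (my own function, not taken from any capture program):

```python
def rescaled_exposure(exposure_s, blur_diameter_before, blur_diameter_after):
    """As focus improves and the blur disk shrinks, per-pixel flux rises
    with 1/area, so scale the exposure by (d_after/d_before)**2 to keep
    the peak on-screen brightness roughly constant."""
    return exposure_s * (blur_diameter_after / blur_diameter_before) ** 2
```

For example, if the out-of-focus Moon shrinks from a 20-pixel blur to a 10-pixel blur, a 0.1s exposure should drop to about 0.025s to avoid saturating the pixels.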
  12. Another recovery tip... If you haven't rebooted the host computer yet, the original files may still be on the Clipboard. Open a folder and press CTRL + V to paste the clipboard contents (or press Windows Key + V to see the clipboard history in Windows 10/11; note this only works if Clipboard History is enabled under System settings - Clipboard, as this function now defaults to disabled for security reasons in Win 10/11). If the original files are there, you can write them to another location, or restore them to the original location. If the Clipboard is empty, the original files may still be recoverable from the hard drive, provided that the space they occupied has not been overwritten. I've not used Recuva so I can't comment on whether it can recover deleted files from the OS hard drive, but there are several other apps that can, some free, some paid for, and the paid-for apps often come with a limited-time free trial. Lastly, some post-processing apps don't care about certain types of FITS file corruption, so it might be worth seeing if you can open the file(s) in something else and then re-save with a new name to a different location. If you post one of the faulty FITS files here, or on a cloud drive somewhere with a public share link, maybe other forum users will test whether their post-processing app can open the file. William.
  13. That you have both corrupted files written to the SD card and a file path to the SD card that can't be permanently set in the APT config suggests there is possibly something wrong with the card or its formatting. A first step would be to upload a specific file from the capture PC to the NASA online FITS file checker site >link< and verify that the source file is OK, then repeat with the same file uploaded from the SD card. If the source file from the mini PC passes the verifier test but the same file from the SD card fails, then you'll know the problem lies with the SD card, or with the way files are being transferred from the mini PC to the SD card. If the SD card does appear suspect, try re-formatting it on a different PC than you used last time, and remember to eject the card in the OS before pulling it out of the slot; not doing so is usually the reason SD cards become corrupted. (As a Mac user myself, I have to use the exFAT format on SD cards, or disks, when transferring files from a Windows PC to a Mac, as Macs don't support writing to the NTFS format.) Lastly, if both the source file on the mini PC and the copy on the SD card fail NASA's online FITS tester, then depending on the type of corruption the source file has experienced, you might be able to repair it via scripting, as in this specific example from the PixInsight forum >Link<. HTH. William.
  14. I don’t have any first-hand experience of this particular model, and maybe it’s normal, or just an artefact of the way the photos were taken, but I’d be a little suspicious of the (apparent) missing section of “O”-ring/gasket seal surrounding the corrector plate on the right-hand side, from the letter “U” in the “Multi-Coated” text extending down almost to the bottom of the plate. There is a smaller gap in the “O”-ring gasket on the left-hand side, by the “mm” letters of the focal-length specification, which may or may not be normal. Without being able to compare with an untouched model, I’d be a little suspicious that the corrector plate had been removed for cleaning and the “O”-ring/gasket had broken or crumbled away, leaving some large gaps. Maybe someone here has owned one, or still has one, and can comment further?
  15. Hi Nicolàs. Here are five sample 0.5s darks from a QHY268M, all at -15c: https://www.dropbox.com/scl/fo/ju1ai9wtjm0ljwmr8za2n/h?rlkey=shytpkndpaluyqik6rq9ztgyy&dl=0 These are nothing like your test images; the hot pixels in your images being mostly a uniform ~24,844 ADU is not normal, and is not replicated in the images from my own camera. The file headers in my samples contain all the information you need for comparison. The images were captured with MaxIm DL v6.40, with only the FITS headers sanitised for personally identifiable information in PixInsight before saving and uploading to Dropbox. As I only have the 2GB basic Dropbox allowance I'll only host these for seven days before deleting them; I don't have the storage space to keep them longer. The five frames were taken with the following settings:

0.5s, Dark (covered OTA wrapped with darkroom blind material and closed Flip-Flat), ExFullWell_2CMS curve, Gain:26, Offset:16, -15c, Fast Readout mode.
0.5s, Dark (covered OTA wrapped with darkroom blind material and closed Flip-Flat), ExFullWell_2CMS curve, Gain:26, Offset:16, -15c, Normal Readout mode.
0.5s, Dark (covered OTA wrapped with darkroom blind material and closed Flip-Flat), ExFullWell curve, Gain:26, Offset:16, -15c, Fast Readout mode.
0.5s, Dark (covered OTA wrapped with darkroom blind material and closed Flip-Flat), ExFullWell curve, Gain:26, Offset:16, -15c, Normal Readout mode.

And a comparison file with the settings I normally use:

0.5s, Dark (covered OTA wrapped with darkroom blind material and closed Flip-Flat), PhotoDSO_2CMS-0 curve, Gain:26, Offset:10, -15c, Normal Readout mode.

Before condemning the camera, try acquisition with a different capture program, just on the off-chance that the capture program you used to test the camera has not correctly set the requested capture curve and gain/offset values, and be sure that the QHY drivers/capture software are up to date. HTH, William.
  16. If I understand PI WBPP and DSS correctly, the principal difference between them is that PI performs a deep analysis of each individual frame and gives it a score, or “weight”, for several image parameters, as indicated by the PI tool’s name: Weighted Batch PreProcessing. That deep analysis is partly the reason for the higher computer-processing overheads and longer time-to-complete when stacking subs in PI compared to DSS. Juan Conejero, author of PI, has stated many times that the single combined image output from a run of WBPP (and from BPP, the previous non-weighted version of WBPP) is only a preliminary image, and that going back and recombining the calibrated .xisf subs with small changes to the combination parameters may yield superior results. Knowing which parameters to change is where the “learning curve” with PI starts to bite. If all your subs are of uniformly high quality, with low noise levels, good star shapes etc., then I would not expect a huge difference between the preliminary output image from a first run through WBPP in PI and a run through DSS. But if the source subs are highly variable in quality, especially with regard to noise, then I would expect more apparent variation between a DSS-processed image and a PI WBPP-processed image, with the WBPP image having a lower background noise level in the dark, low-contrast areas of the image, as the weighting process kicks in on a pixel-by-pixel rather than whole-frame basis during the combination stage.
Lastly, since the weighting scores calculated during the WBPP process are saved with the calibrated .xisf subs that are automatically output from a WBPP run, those scores can speed up other PI tools during post-processing, because the subs don’t need to be analysed a second time (if, for example, you are carrying out a second recombination of the calibrated .xisf subs in the ImageIntegration tool using modified integration parameters). Don’t know why, but writing this summary I can feel @ollypenrice peering over my shoulder, trying hard not to laugh…..🫢😅🫢. HTH. William.
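The per-pixel weighted combination described above boils down to a weighted mean with the per-frame scores as weights. A toy sketch (this is not PI's actual algorithm, which also normalises frames and rejects outliers first; the names are mine):

```python
def weighted_combine(subs, weights):
    """Per-pixel weighted mean of calibrated subs. Each sub is a flat
    list of pixel values; `weights` holds one quality score per sub,
    so a high-scoring frame contributes more to every output pixel."""
    wsum = sum(weights)
    return [sum(w * sub[i] for sub, w in zip(subs, weights)) / wsum
            for i in range(len(subs[0]))]
```

A frame with three times the weight pulls each output pixel three times as hard toward its own value, which is how low-noise frames end up dominating the dark, low-contrast areas of the stack.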
  17. I don’t know this particular application, but maybe it’s the Visual C/C++ runtimes that you need, since these aren’t distributed with Windows 11 and rely on the application’s own installer to add the redistributable support libraries if necessary. Try downloading and manually installing both the x64 and x86 Visual C/C++ redistributable libraries linked below. You need both the x64 and x86 packages installed on a Windows 11 64-bit O.S., since these libraries are specific to the application’s bit version rather than the O.S. bit version. After running both installers, reboot the computer to make them available to any application that requires them, then after the reboot try installing your application again. Both download links below go direct to Microsoft’s own servers, but if you want to verify the source addresses and read more you will find that on >this< Microsoft webpage. https://aka.ms/vs/17/release/vc_redist.x64.exe https://aka.ms/vs/17/release/vc_redist.x86.exe HTH. William.
  18. The original QHY8 (square camera with unsealed sensor chamber and dedicated desiccant storage/drying case) had an unregulated two-stage TEC, on paper capable of a -40°C delta, although I doubt many actually achieved that in the real world.

Back in the early days of cooled astro CCD cameras, unregulated TEC cooling was not uncommon and there were two ways of dealing with the changing dark current as the session progressed and the sensor reached lower temperatures. Either take light-dark, light-dark repeating pairs during the session and calibrate each light with its matching dark frame before combining the calibrated frames, or use a master bias frame and master dark frame, taken at the end of the session when the camera has reached minimum temperature, and use the Dark Frame Optimisation routine found in many post-processing applications. That routine automatically varies the ratio of dark frame subtraction from each light frame according to a measurement of residual noise in the calibrated frame, and the results are then combined.

For your early QHY8 camera with an unregulated TEC the sensor should reach a reasonably stable temperature after around thirty minutes, and cooling can begin at twilight while the OTA is still acclimatising. For greatest efficiency, take master dark frames and master bias frames at the session end and use Dark Frame Optimisation during calibration in preference to matching light-dark repeating pairs.

On the camera hardware side, if your QHY8 camera has a heatsink fan (or fans) make sure it/they are still working. The electronics were rather crude on cameras of this era and a broken fan might not be monitored, which would result in the camera cooling much more slowly than normal and failing to reach its maximum cooling delta. Also, make sure that the heatsink fins are clean and not choked with dust; the end result is just the same as with a broken fan, poor cooling.

TEC coolers do have a finite life, and if a camera has a two-stage TEC it's possible for one of the pair to fail. Again, it's likely that the electronics of this era would not detect a failed TEC, and the result would be a greatly reduced cooling delta, which would fit with your observation of a slowly improving image after many hours of use.

Final hardware tip: the TEC cooler stack will be bonded together, and also bonded to the back of the sensor, with either a thermal transfer pad, thermal grease or thermal paste. The latter two will dry out over extended time and lead to reduced cooling efficiency. If your camera has been in use for many years and it uses thermal grease or compound, it's possible that the old material needs cleaning away and renewing. If you feel that camera cooling performance has changed and is not as efficient as it once was then the above tips may point to a possible cause.

Lastly, improving image quality as the session progresses might not all be attributable to a lower sensor temperature: target altitude (rising target), local light pollution levels (auto-dimming street-lights, neighbours turning out their house lights) and daytime atmospheric dust gradually falling back to the ground, leaving a clearer sky, can all contribute to improved image quality the longer the session continues.

William.
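The Dark Frame Optimisation idea described above can be sketched in a few lines: if the dark signal baked into a light frame is some unknown multiple of the master dark (because the unregulated TEC means the temperatures don't match), the best scale factor is the one that minimises the residual noise after subtraction. This is a minimal illustration of the principle only, not any particular application's implementation; the function names and the closed-form estimator are my own:

```python
def optimise_dark_scale(light, dark):
    """Estimate the scale factor k that minimises the variance of
    (light - k * dark), i.e. the residual noise after dark subtraction.
    Closed form: k = cov(light, dark) / var(dark)."""
    n = len(light)
    mean_l = sum(light) / n
    mean_d = sum(dark) / n
    cov = sum((l - mean_l) * (d - mean_d) for l, d in zip(light, dark)) / n
    var = sum((d - mean_d) ** 2 for d in dark) / n
    return cov / var

def calibrate(light, dark):
    """Subtract the optimally scaled master dark from the light frame."""
    k = optimise_dark_scale(light, dark)
    return [l - k * d for l, d in zip(light, dark)]
```

If the light was taken while the sensor was warmer than when the master dark was shot, k comes out greater than 1, and vice versa. Real implementations work per-pixel on full frames and usually iterate against a noise estimate, but the principle is the same.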
  19. I also looked at all your light frames and, as the previous respondents have noted, most of the frames are unusable; I only found eighteen that were even close to being usable for integration. Of the best frames, the stars in nearly all were slightly out of focus, with a clear central black disk and outer bright ring, and there was some hint of mis-collimation of the telescope. Of the full set of lights, approximately half appeared to have a low SNR, possibly taken under deteriorating sky conditions, hazy thin cloud perhaps, or the target getting closer to the horizon?

Some practical suggestions.....

As you have a heavy camera attached to the back of the Alt/Az mounted telescope, try to add sufficient counterweight to the front of the OTA so that the tube is balanced; that will help reduce tracking errors. You should find it possible to make your own counterbalance weights from scrap material for virtually no cost.

While imaging, resist the urge to walk around close to the tripod. The standard Celestron tripod with the 8SE is a rather flimsy affair and will react to footfall close by unless it is on solid concrete or hard ground. If you can keep the telescopic legs of the tripod unextended, or as short as possible, and set up with the telescope as close to the ground as you can get, that will help with stability, and if you can add some sandbags above the spreader plate for the tripod legs that will help dampen down any tripod vibration.

If you don't already have a Bahtinov mask in your toolset to aid focusing, you can make one for virtually nothing, details >Here<

Finally, review the collimation of the telescope and make sure the telescope is collimated at the same OTA angle that will be used for the target, or at least point the telescope to the zenith for the collimation adjustment, so that any mirror flop that pushes the collimation out of adjustment with changing OTA angle will be roughly the same either side of the meridian.

William.
  20. That should probably be ok. Unfortunately Baader are rather cagey about what exactly goes into that product and they don't publish a hazchem leaflet, but they mention alcohol in their Q&A section, so most likely it's just a mix of isopropyl alcohol, deionised water and detergent, which is safe enough for cleaning lenses, mirrors and the AR cover-glass on most CCD and CMOS sensors.

Sony don't publish any useful information for consumers about how to clean their sensors and have always been rather secretive in this area; however, other manufacturers are more helpful, and this OnSemi document gives very clear guidance on the types of chemicals that can be used and the recommended methods (see page 3/4).

The most common mistake when cleaning a cooled camera sensor is to spray the cleaner fluid onto the cover glass and make it too wet. The cleaner fluid should be sprayed lightly onto the cleaning swab/spatula; if, as you drag the swab/spatula across the surface of the cover glass, it leaves a trail of droplets behind, then you've used too much liquid.

Not all types of contaminant can be removed with alcohol. Silicone oil, typically used as a component of heat-sink compounds for the interface between the TEC cold-finger and the back of the sensor package, is particularly difficult to clean away, and if it gets on the cover-glass it requires chemical solvents that are both toxic and capable of damaging the AR coating on the cover-glass, as well as possibly dissolving the glue that bonds the cover-glass to the sensor package. If silicone oil ever does contaminate the cover-glass it's best to have the camera professionally cleaned, unless you have some experience of handling hazardous chemicals and can be sure which materials have been used in constructing the sensor package.

The work you have carried out should have made some improvement and I can't think of anything else practical to suggest beyond the contributions already made to this thread. Have you tried opening a case on the camera manufacturer's support forum, or emailing the retailer you bought the camera from, to see if they have any previous experience of this problem?

I'll keep watching your thread and reply back if I think of anything new, but right now I'm out of fresh ideas.

William.
  21. Hi Gordon. Good to hear from you, hope you have recovered well from your health scare...

Your plan sounds good to me. The only suggestion I have is to use a can of compressed air to blow away the dust from the outside of the camera, particularly around the cooling fins and fan aperture, and then wipe over the outside of the camera with an anti-static cloth (the type used for vinyl records, CDs etc.) before you place it in the glove bag.

I do wonder if these Sony IMX sensors can be damaged by running at extremely low temperatures; the published Sony specification sheets available to the public are worthless in this respect as they have nothing to say about the acceptable operational temperature range.

Here at home I built a camera service and purging station from a second-hand acrylic fish tank, £2.50 from one of the SCOPE charity shops, with round holes cut in the sides and nitrile-rubber gauntlet-length gloves clamped into the holes using clamping rings made on a 3D printer, a rubber gasket glued to the top of the tank, and another flat sheet of clear acrylic used as a lid. The whole thing cost less than £25. "One day" I'll get around to fitting a Dyson-type HEPA filter at one end of the tank and a suction fan, or vacuum cleaner hose adaptor, at the other; at the moment I just vacuum clean the tank before I need to use it and get the camera inside quickly before more dust can blow in.

At the Portuguese remote observatory we bought a second-hand laboratory clean-box for this purpose, about the size of a baby incubator. With a couple of dozen cameras and users out there that box is used frequently, but being so large we do get through a 130 litre full-size bottle of welding Argon quite quickly, and at a couple of hundred Euros per bottle that adds quite a bit to our shared operating costs.

William.
  22. I misread your reply above and took it to mean you had only cleaned the camera window, not the sensor cover glass. The sensor cover glass is hermetically sealed during fabrication; you can't remove that, and the sensor would be damaged if you tried. Just to clarify, did you wet clean or dry clean the sensor?

A way to answer that definitively is to try the following method.

Find a container just large enough to hold the camera upright with about ten centimetres clearance above, such as a goldfish bowl, deep saucepan, cookie jar, etc. The container needs to be only just big enough for the task, without too much empty space around the camera.

Cut three fingers of thin but stiff plastic, ca. 1cm x 5cm, from an old credit card or a supermarket plastic food tray, and punch a hole close to one end of each piece.

Use a couple of fresh in-car desiccant pouches, such as those linked below, as bean-bags to support the camera, facing upwards, in the container.

https://www.amazon.co.uk/FiNeWaY-REUSABLE-DEHUMIDIFIER-MOISTURE-ABSORBER/dp/B077K8BJW4/

Add a few additional non-desiccant bean-bags, or similar, to support the camera firmly so that it can't tip over in the container.

Install new/regenerated desiccant tablets in the camera but leave the camera face-plate slightly loose, and insert the three plastic fingers spaced equally around the flange area to keep open a slight gap, with the punched holes outermost. Do not pinch the face-plate screws tightly; you need to be able to slide the plastic fingers out of the gap easily.

Cover the container with clingfilm, leaving a little slack but otherwise sealed, and place the container/camera somewhere warm for 48hrs.

*see optional step for Argon purging below*

After 48hrs in a warm place, carefully punch and push a thin-bladed screwdriver through the clingfilm cover and, using the holes you punched in the plastic fingers, slide the fingers out of the flange area, then tighten down the flange while the clingfilm is still covering the container. Remove the camera from the container and, after fully tightening the flange screws, immediately test the camera.

If the artefacts are still present after following the above steps then something else is going on with your camera; it can't be due to moisture. If the artefacts were not present after the above steps but return after a few days/weeks then there is a leaky seal somewhere in the sensor chamber.

*Optional step: Argon purge*

If you want to Argon purge the camera you can do so by lifting one corner of the clingfilm cover slightly and inserting the thin feed pipe from a MIG/TIG Argon regulator and a disposable bottle of welding Argon, pushed down to the bottom of the container, then gently trickling in a stream of Argon at a very low rate for ten minutes. The heavier-than-air Argon will slowly displace the damp air in the container through the lifted gap in the clingfilm cover, leaving the container filled with almost pure and dry Argon. Then extract the Argon feed pipe, seal down the clingfilm, and leave the container somewhere warm for twenty-four hours to allow any damp air trapped in the sensor cavity to be replaced by the heavier Argon and any remaining moisture to be absorbed by the desiccant pouches. After twenty-four hours have elapsed, follow the same procedure to seal down the sensor chamber as described above.

This method is not nearly as efficient as Argon low/high pressure purging in a sealed environmental enclosure, but it is almost as effective, comparatively inexpensive, and easy to carry out at home.
  23. Cleaning the sensor (cover) glass is nothing to be concerned about; the official ZWO cleaning document that @LandyJon linked to describes the steps clearly. The steps required are really not that different to those that owners of DSLRs who regularly change lenses have to carry out many times a year. The only practical differences are that with a DSLR you only need to remove the lens, open the shutter and lock up the mirror to reach the sensor, while with your camera you have to unscrew the chamber cover, and that a DSLR has a comparatively soft and easily damaged antialiasing and IR filter array directly over the sensor cover glass, whereas in your ZWO camera those additional filters are not installed and the harder quartz-glass sensor window is exposed.

Alan, @symmetal, describes the minimal effect on dark current of running at temperatures above 0°C, and while it is true that the difference to linear dark current is very small with increasing sensor temperature on these CMOS cameras, you might see slightly more "pixel speckle" in the resulting images due to the non-linear response of warm pixels to increasing sensor temperature. That effect can be mitigated, though, with dithering and a sigma-reject algorithm used when combining the stacked images.

The only concern with leaving moisture "seed" traces on the sensor cover glass and running just above 0°C is that you risk moisture building up on the back side of the sensor, where the cold finger from the TEC cooler abuts the sensor back plate. This area will be a few degrees below freezing while the sensor itself is at 0°C, and if the moisture on that interface lingers it can spread over time and corrode the tracks on the PCB. If you run a cooled camera at 0°C to avoid the appearance of ice on the front of the sensor glass you'll never know how damp the sensor chamber is becoming. The sudden appearance of ice crystals on the front of the sensor when running just below 0°C is a useful, albeit a nuisance, indicator of the humidity levels inside the sensor chamber.

William.
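The dithering plus sigma-reject combination mentioned above works because a warm pixel lands on a different part of the target in each dithered sub, so at any given output pixel the hot value appears in only a minority of frames and gets rejected as a statistical outlier. Below is a bare-bones kappa-sigma rejection mean, treating each frame as a flat list of pixel values; this is a toy sketch of the general technique, not any particular stacking program's algorithm:

```python
import statistics

def sigma_clip_combine(stack, kappa=2.5):
    """Combine a stack of equally-sized frames with per-pixel
    kappa-sigma rejection: discard values further than kappa
    standard deviations from the pixel's mean across the stack,
    then average whatever remains."""
    combined = []
    for pixel_values in zip(*stack):
        mean = statistics.mean(pixel_values)
        sigma = statistics.pstdev(pixel_values)
        kept = [v for v in pixel_values
                if abs(v - mean) <= kappa * sigma] or list(pixel_values)
        combined.append(statistics.mean(kept))
    return combined
```

With a plain mean, one hot value of 100 among seven values of 10 would drag the output up to over 21; with rejection it comes back as 10. Real stackers normalise the frames first and typically use iterative or winsorised clipping, but the outlier-rejection principle is the same.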
  24. I may have misinterpreted your opening topic but I think that you have two separate questions here.

1. Can I reach inside the SeeStar, or its output .fits image, and grab the individual frames that the SeeStar stacked internally, so that I can build an image using more advanced techniques?

2. What can I do with the standard output .fits image from the SeeStar to give a more pleasing image than is possible with the SeeStar's basic .jpeg output?

I do apologise if I've got that wrong, and if so you can ignore my response below....

-------------

The SeeStar's jpeg export option is a compressed image format and, depending on its bit depth, you could enhance that image using various filters and tools in many "Photoshop-type" general-purpose image processing applications. The limiting factor of that approach will be the compressed bit-depth of colour jpeg images; there is only so much you can do before the image starts to look "pixelated" and noisy.

The .fits format image is uncompressed raw data, and although for the SeeStar we don't know whether ZWO apply any kind of automatic dark current and bias subtraction to the output image, you can nevertheless extract more information and produce a more pleasing image (I think) by post-processing it in an application that is specialised (or optimised) for handling .fits images.

The output .fits image from the SeeStar is already multiple frames stacked internally, as others have explained above, and AFAIK you can't disassemble that image any further. I'm not aware of any method that allows you to reach inside the SeeStar itself to extract the individual frames that it used/uses to build its stacked output image.

You mention above that you have previous experience with Deep Sky Stacker. For a single (internally stacked) output .fits image from the SeeStar, Deep Sky Stacker has no useful purpose, but if you were to collect multiple (internally stacked) SeeStar .fits images of the same target then you can register and stack those together in DSS to reduce noise further and allow you to stretch the resulting image more to reveal fainter detail.

With a .fits image, either a single frame that was internally stacked in the SeeStar, or an externally registered and stacked combined image (DSS etc.) created from multiple SeeStar internally stacked images, you can do much more in post-processing. I've attached two images below using your raw .fits file, one of which was very quickly (less than five minutes) post-processed in PixInsight: colour calibrated, BlurXterminator and NoiseXterminator applied, then stretched and colour saturation increased, and finally a histogram transformation applied. I've only applied a very basic process to your image; there are multiple tools available to you once you get to work on .fits images, it just depends how much time you want to invest in post-processing and how much time you are willing to allocate to gathering as much raw data on a single target as possible. The second image shows the raw .fits image straight from the SeeStar with the same level of stretch applied.

I hope that gives you some idea of what is possible in post-processing the .fits image, and, as described above, if you had multiple internally stacked .fits images you could stack those together, which would allow you to go further in post-processing.

HTH

William.

Processed .fits in Pixinsight.

Raw .fits from SeeStar.
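To give a flavour of what "stretching" raw .fits data actually does: linear sensor data leaves almost everything crushed near black, and a non-linear transfer curve such as an asinh stretch lifts the faint signal while compressing the highlights. Here is a toy version operating on pixel values normalised to the 0-1 range; it has nothing to do with PixInsight's own tools, and the softening parameter is my own illustrative choice:

```python
import math

def asinh_stretch(pixels, softening=0.01):
    """Apply an inverse-hyperbolic-sine stretch to linear pixel
    values normalised to 0..1. Smaller 'softening' values stretch
    harder; the result is rescaled so 1.0 still maps to 1.0."""
    scale = math.asinh(1.0 / softening)
    return [math.asinh(p / softening) / scale for p in pixels]
```

With the default softening, a faint pixel at 0.01 comes out at roughly 0.17 while black stays black and white stays white, which is why faint nebulosity appears "out of nowhere" after a stretch.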