Everything posted by han59

  1. Yes, that is the most likely cause. 1391 x 1039 should solve without a warning.
  2. The message means that the pixel dimensions are too low. Assuming your camera has a resolution of 1391 x 1039, this message should not appear. So the likely explanation is that you set the binning to 2x2 in the NINA solver settings. In NINA, go to the ASTAP solver settings and set the binning to 1 or 0 (auto). Then this warning message should go away and solving should be more reliable. Han
  3. ASTAP version 2024.03.07 is out. I have created a YouTube video to demonstrate the "Track and Stack" function.
  4. Hi Paul, for Track and Stack use version 2024-02-25a, as indicated under the "About" menu. It was released in the evening.
  5. With a camera, the best option is probably to detect the much higher background value in case of clouds or loss of tracking. For rain you need clouds. Some programs like CCDCiel can take action if tracking is lost. The problem with PHD2 is that it does not recover, so after a small problem it will lose tracking permanently even if there is no need to stop. The internal guider of CCDCiel, however, recovers automatically. So after a minute without tracking and no recovery by the internal guider, you could program CCDCiel to take an action. Han
  6. FYI, the free ASTAP program has two new features. The first is Track and Stack for minor planets and comets. This new feature compensates for the velocity of these objects and keeps them centered in the stack, which improves the signal to noise ratio. The program can also extract the time and position into an MPC1992 report line to check the object with the Minor Planet Center MPChecker. Below, a normal stack is compared with Track and Stack. In the 60 x 120 seconds stack the two minor planets are vague streaks. The "obsolete" remark indicates that the MPCORB database is obsolete; that is why the two minor planets are off-center in the annotation. The manual for Track and Stack is here: http://www.hnsky.org/astap#trackandstack The second change is the possibility to add a 3rd order SIP polynomial to the solution. This improves astrometric measurements and also improves the stitching of a mosaic. Feedback is most welcome. Han
  7. Nice. For my phone, a Samsung S7, access to the internal camera works. I installed version beta 25. Han
  8. In the last development version, I changed the scientific notation of the SQM value to an easier to read notation. So in your table it will report something like SQM=18.63.
  9. I don't see it here. Let's check some steps to find the cause: 1) Does the viewer menu TOOLS, SQM REPORT ON AN IMAGE work? 2) If you open one of the batch-processed images, does it have a keyword SQM? Once you open a file you can see the header in the viewer. It should have a line like this: SQM = 1.925747011041E+001 / Sky background [magn/arcsec^2] and a line like this: COMMENT SQM , used 900 as pedestal value Han
  10. The column requires the SQM value in the header. In the viewer, first batch solve the images with the option to add "Limiting magnitude and SQM" to the header activated. See the screenshot. Han
  11. Yes, just make two master darks of 12 darks each, so you need 24 darks. In ASTAP, tab Pixelmath2 has an option to apply a file to the image in the viewer. Select a second file and the option "subtract file from view + 1000". The 1000 is to keep the values positive; a minimal sketch of this step is shown below. Only for the latest Sony sensors with cooling could you consider skipping darks, but with darks the image will always be cleaner and hot pixels are removed. Only with a dark applied will the flat correction be optimal. So I would still go for full calibration with darks, flats and flat-darks. I think it is good to keep a dark library for different temperatures. Old darks from a year ago will still work fine. In ASTAP the dark with the corresponding temperature will be selected automatically, so you could keep them all in tab darks. Han
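     Below is a minimal Python/NumPy sketch of that pixel-math step, assuming the two master darks are FITS files; the file names are only illustrative and the 1000 ADU pedestal follows the post above.

     import numpy as np
     from astropy.io import fits  # assumption: the master darks are stored as FITS files

     # Hypothetical file names for the image in the "viewer" and the file to subtract
     image = fits.getdata("master_dark_A.fits").astype(np.float64)
     other = fits.getdata("master_dark_B.fits").astype(np.float64)

     # "subtract file from view + 1000": the 1000 ADU pedestal keeps the result positive
     result = image - other + 1000.0

     # Clip to the valid 16-bit range before saving
     result = np.clip(result, 0, 65535)
     fits.writeto("difference.fits", result.astype(np.float32), overwrite=True)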
  12. In this post I try to answer three questions around the calibration of lights: 1) Are darks required? 2) Should the dark temperature match the light temperature? 3) If not, can the dark be scaled using a prediction of the dark current? The experiments were done with dark series from an ASI1600MM camera. To get a measurable result, two series of stacked darks were subtracted from each other; one represents the lights, the other the darks. First some visual tests: the first image, without darks, gives a pretty high noise value. The second image, based on ∑(12x60sec_+26°C) - ∑(12x60sec_23°C), gives the lowest noise value. So the conclusion is that 1) darks help and 2) dark and light temperature should match. Next a more extensive test, presented in a table: you can see that the noise value for ∑(12x60sec_-1°C) - ∑(12x60sec_-1°C) is as good as ∑(12x60sec_-10°C), confirming that darks help and that dark and light temperature should match for the lowest noise value. Next question 3): can darks be scaled? I tried to find the best X factor in the subtraction of two dark series, e.g. ∑(12x60sec_+5°C) - X * ∑(100x60sec_0°C). The experiment was done in a custom adapted version of ASTAP which varied X in small steps to find the lowest noise value (see the sketch below). Conclusions: 1) Make at least 50 or more darks to reduce the dark noise. See parts 1 and 2 (pdf file). 2) It is possible to correct for a wrong dark temperature. There is a significant improvement above +10°C; below +5°C the effect is limited. 3) A wrong dark exposure time has only some influence above +10°C. See part 5 (pdf file). Full test report attached: dark_test2.pdf Han
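     A rough Python sketch of the trial-and-error search described above, assuming the two stacked dark series are already loaded as arrays; the step size and the plain standard deviation as noise estimator are illustrative choices, not necessarily what the adapted ASTAP build used.

     import numpy as np

     def best_scale_factor(dark_lights, dark_ref, factors=np.arange(0.5, 2.0, 0.01)):
         """Vary X in small steps and keep the value giving the lowest residual noise,
         e.g. noise( sum(12x60s_+5C) - X * sum(100x60s_0C) )."""
         best_x, best_noise = None, np.inf
         for x in factors:
             residual = dark_lights - x * dark_ref
             n = np.std(residual)  # simple noise estimate of the residual frame
             if n < best_noise:
                 best_x, best_noise = x, n
         return best_x, best_noise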
  13. Yes, use as many darks as possible to remove the noise and keep the pixel inequality correction. I have done an extensive automated test to find the best scaling factor. The noise of the difference of two dark series was measured, e.g. ∑(12x60sec_+5°C) - X * ∑(100x60sec_0°C), and the best X factor was found empirically. See the attached report. Table 1 contains the best factors. Table 2 contains the noise if no scaling is applied. Table 3 contains the noise if the best scaling factor is applied. There is a significant improvement above +10°C; below +5°C the effect is limited. The following empirical formula works well for my ASI1600: factor := 1/exp(Δt * 0.1), corrected dark := dark * factor - mean_dark * (1 - factor). A small sketch of this formula is shown below. I didn't have time to test it with real lights. See the pdf file for more details. dark_test2.pdf
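     The same empirical correction written out as a small Python sketch; the constant 0.1 and the sign convention of Δt follow the formula above and were found for an ASI1600, so treat them as camera-specific assumptions.

     import numpy as np

     def scale_dark(master_dark, delta_t, k=0.1):
         """Scale a master dark taken at a temperature differing by delta_t degrees Celsius.
         factor := 1/exp(delta_t * k); corrected dark := dark * factor - mean_dark * (1 - factor)."""
         factor = 1.0 / np.exp(delta_t * k)
         mean_dark = np.mean(master_dark)
         return master_dark * factor - mean_dark * (1.0 - factor)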
  14. The results of the trial-and-error method for darks at -1°C and -10°C are displayed in the bottom part of the screenshot, but they show still no improvement. A part of the noise is the read noise, which will likely be stable. Which factor would you propose?
  15. I did another test with scaling the darks. This only works for darks above 0 degrees Celsius, where the dark current is significant. The factor 4.34 comes from 17.8 e- / 4.1 e- (noise at 26°C / noise at 11°C).
  16. I have exported four images as 16 bit PNG unstretched, but first shifted the mean background to level 1000. Then I combined them into one image using Gimp, exported again as 16 bit PNG and stretched in ASTAP. See the results below. I have attached the unstretched 16 bit PNG image in a ZIP file; you can measure it yourself. How, I don't know at the moment, but I assume it becomes more critical at higher temperatures; the slope delta noise / delta temperature becomes steeper and steeper. I haven't tested the amp glow. The images were all taken from the sensor center, 250x250 pixels. combined level 1000 v3.zip
  17. Today I tested how critical the dark temperature is compared with the light temperature. I noted that calibration with a master dark of a lower temperature gives a poorer result than calibration with a master dark of equal temperature. See the noise and hot pixel values below. Han
  18. Thanks for the feedback. I noted again that the linearity test finds a large offset at zero exposure. I see the same in my test; this requires further investigation. You could argue that changing the exposure time is not ideal and that the light source intensity should be adjusted instead. Maybe an LED should be used as the light source, with the voltage and current recorded, assuming the efficiency is constant, which it probably is not. But then the test would turn more into a lab test than a simple test. Han
  19. No problem. The installer is now updated. Index of /tmp/ccdciel (ap-i.net) Han
  20. Yes, that is it. In practice a few hundred stars are used and some statistics are applied to get a common magnitude-to-flux ratio; a small sketch is shown below. For photometry you compare the variable star with a reference/check star. For an SQM measurement you compare a star or stars with the increased background value.
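     As an illustration, a minimal Python sketch of deriving such a common magnitude-to-flux calibration (a zero point) from matched stars; the median statistic is an assumption, not necessarily the statistic ASTAP applies.

     import numpy as np

     def zero_point(catalog_mags, star_fluxes_adu):
         """mag = ZP - 2.5*log10(flux), so each star gives ZP = mag + 2.5*log10(flux);
         a robust statistic over a few hundred stars gives the common calibration."""
         zp = np.asarray(catalog_mags) + 2.5 * np.log10(np.asarray(star_fluxes_adu))
         return np.median(zp)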
  21. Okay, I have created a fix and uploaded the new CCDCiel executable to my own webpage: www.hnsky.org/ccdciel.zip You have to extract the executable and then move it to C:\Program Files\CCDciel. A version with an installer will probably come tomorrow at https://vega.ap-i.net/tmp/ccdciel/ ; that is done by Patrick. Tell me if this works. Han
  22. Thanks. I got the flats; you could delete them to save space on this forum. The flats show a typical imbalance between the colours: green is about 38000 ADU, blue is 30000 ADU and red only 15000 ADU. Did you use an electroluminescent panel for these flats? I'm pretty sure that if the standard deviation calculation uses only the green pixels, the measurements for gain, full well capacity and read noise will be more in line. I will work on it. Han
  23. For my mono version ToupTek IMX571 sensor camera, I measure a read noise of 1.086 e- at minus 10 degrees Celsius sensor temperature. I assume you get a slightly higher reading because the sensor temperature introduces some extra noise. Could you try it at a lower sensor temperature? The full well difference is strange. It is calculated as follows: FullWell_capacity[e-] := sat_level[adu] * gain[e-/adu]. The sat_level is 65535, so it is due to the gain, which in your case is 0.335 e-/adu. This gain is calculated as follows: σ_light[adu] := STDEV(light1 - light2)/sqrt(2), gain[e-/adu] := flux[adu]/sqr(σ_light[adu]) (formula 3). It probably has something to do with the Bayer matrix; a small sketch of the calculation is shown below. Can you share two flats made with your camera at gain 100 and a low temperature which I could analyse? Han
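     The formulas above written out as a Python sketch; two flats at the same exposure and a known bias level are assumed, and for an OSC camera the frames would ideally be restricted to one Bayer colour first, which is the suspected cause of the discrepancy.

     import numpy as np

     def gain_and_full_well(flat1, flat2, bias_level, sat_level=65535):
         """Photon-transfer estimate: gain = flux / sigma^2 (formula 3),
         full well = sat_level * gain."""
         f1 = np.asarray(flat1, dtype=np.float64)
         f2 = np.asarray(flat2, dtype=np.float64)
         flux = f1.mean() - bias_level                    # signal in ADU
         sigma_light = np.std(f1 - f2) / np.sqrt(2.0)     # noise of a single flat in ADU
         gain = flux / sigma_light ** 2                   # e-/ADU
         return gain, sat_level * gain                    # gain and full well capacity in e-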
  24. Vlaiv, thanks for the explanation. The reason that the telescope focal ratio, exposure, camera quantum efficiency and light losses do not play a role is that the stars are used as reference. Sky glow and star light both go through the same instrument, your telescope and camera, so if your instrument is less sensitive, both the sky glow and the star light are reduced by the same amount. The star measurement provides a calibration factor, magnitude ==> flux. The star flux is the sum of the pixel ADU values illuminated by the star. A defocus will not play a role either: the star flux, the sum of the star ADU values, stays the same. Solving is required to identify the stars and their magnitudes from the database. The sky glow increases the background value of the image. This is also a flux. The background flux in one square arcsecond can be converted to a magnitude, resulting in the SQM value. To calculate how many pixels there are in one square arcsecond you need to solve the image; it is probably less than one pixel, but mathematically that doesn't make a difference. The only things which could influence the calibration are a difference in colour and the altitude. The colour difference is ignored. The altitude is compensated: at lower altitudes the program assumes the star light is reduced in proportion to the air mass, while the sky glow is assumed constant. While writing this, maybe that last assumption is not 100% valid, but you would not do an SQM measurement at very low altitudes of 30 degrees or lower. A sketch of the final conversion is shown below. Han
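     As an illustration of the last steps, a Python sketch converting the measured background to an SQM value, assuming the star-derived zero point and the pixel scale from the plate solution are already known; the extinction handling is simplified here.

     import math

     def sqm_from_background(background_adu_per_pixel, zero_point, pixel_scale_arcsec,
                             extinction_mag=0.0):
         """SQM = zero point - 2.5*log10(background flux per square arcsecond);
         extinction_mag optionally compensates the star calibration at lower altitude."""
         pixels_per_sq_arcsec = 1.0 / pixel_scale_arcsec ** 2   # usually less than one pixel
         flux_per_sq_arcsec = background_adu_per_pixel * pixels_per_sq_arcsec
         return zero_point + extinction_mag - 2.5 * math.log10(flux_per_sq_arcsec)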