Posts posted by han59

  1. 23 hours ago, mgutierrez said:

    Han, I have another question. I don't know how astap really makes the calculation. I thought it was based at least partially on the background level. But it seems there are more factors. In any case, I don't fully understand why (apparently) the focal ratio of the tube does not play a role. At the same exposure time, normally a lower F would have a higher background level than a slower one

    Vlaiv thanks for the explanation.

    The reason that the telescope focal ratio, exposure, camera quantum efficiency and light losses do not play a role is that the stars are used as a reference. Sky glow and star light both go through the same instrument, your telescope and camera. So if your instrument is less sensitive, both the sky glow and the star light are reduced by the same amount. The star measurement provides a calibration factor, magnitude ==> flux. The star flux is the sum of the pixel ADU values illuminated by the star. A defocus will not play a role either: the star flux, the sum of the star ADU values, stays the same. Solving is required to identify the stars and their magnitudes from the database.

    The sky glow increases the background value of the image. This is also a flux. The background flux in one square arc second can be converted to a magnitude, resulting in the SQM value. To calculate how many pixels there are in one square arc second you need to solve the image. It is probably less than one pixel, but mathematically that doesn't make a difference.

    The only things which could influence the calibration are a difference in colour and altitude. The colour difference is ignored. The altitude is compensated: at lower altitudes the program assumes the star light is reduced in proportion to the air mass, while the sky glow is assumed constant. While writing this, maybe that last assumption is not 100% valid. But you would not do an SQM measurement at very low altitudes of 30 degrees or lower.
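    The calibration chain above can be sketched in a few lines. This is only a simplified illustration, not the actual ASTAP code; the function name, the example numbers and the neglect of colour and extinction terms are my own assumptions:

```python
import math

def sky_sqm(star_mag, star_flux_adu, bg_adu_per_pixel, pixel_scale):
    """Estimate the sky brightness in magnitudes per square arc second.

    star_mag, star_flux_adu: catalogue magnitude and measured flux (sum of
    background-subtracted ADU values) of a reference star in the image.
    bg_adu_per_pixel: sky background level above the pedestal/dark value.
    pixel_scale: image scale in arc seconds per pixel (from the plate solve).
    """
    # The star gives the magnitude <-> flux calibration (the zero point):
    zero_point = star_mag + 2.5 * math.log10(star_flux_adu)
    # Background flux collected in one square arc second of sky:
    bg_flux = bg_adu_per_pixel / pixel_scale ** 2
    return zero_point - 2.5 * math.log10(bg_flux)

# A mag 10 star giving 10000 ADU, 5 ADU/pixel of sky glow at 2"/pixel:
print(round(sky_sqm(10.0, 10000.0, 5.0, 2.0), 2))  # -> 19.76
```

    Note that any sensitivity loss scales `star_flux_adu` and `bg_adu_per_pixel` by the same factor, so it cancels out of the result, which is exactly the point made above.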

    Han

    • Thanks 1
  2. The linearity test of OSC cameras should now work better using the latest nightly build:

    Index of /tmp/ccdciel (ap-i.net)

    In case of an OSC camera it uses only the green-sensitive pixels for the linearity test. The idea of testing the red and blue channels was abandoned because it made the code too messy.

    It would be nice if it could be tested.

    The program also reports the percentage of pixels suffering from RTN (random telegraph noise). When I test my ASI1600 I get about 0.18% RTN, but for the ToupTek IMX571 camera it reports more than one percent of the pixels suffering from RTN. It is pretty independent of the exposure time. A next step could be to report the RTN flux, but that requires more testing to understand the behaviour.

    Han

     

  3. Yes, an OSC camera requires another strategy. If I remember correctly, SharpCap uses only the green channel. Probably the best approach will be to separate the raw OSC pixels into Red, Green1, Green2, Blue or Red, (Green1+Green2)/2, Blue sensitive pixels. Creating four or three graphs is not a problem. Then a blueish or reddish flat panel does not affect the measurement. I will work on that.

    I did a test with my ToupTek darks. A stack of 9 darks of 300 seconds clearly shows hot pixels. Taking the difference between two dark stacks seems to remove the hot pixels reasonably well. So darks are still essential in image processing.

    Another thing which can be reported is the number of hot pixels or RTN pixels. According to this report, 2% of the IMX571 pixels are affected by RTN (random telegraph noise).

    Han

     

    A stack of 9 darks of 300 seconds exposure:

    [attached image: 9darks.jpg]

     

    The difference of two stacks of 9 darks of 300 seconds exposure:

    [attached image: 9darks-9darks.jpg]

  4. 4 hours ago, ONIKKINEN said:

    Regarding the read noise thing, what implications would this have for an imager? Should i treat the camera like it had 1.6e RN instead of the simple measurement of 1e?

    I don't think you can do much with it. It only indicates that things are not as ideal as they look. The main problem is that these pixels look like faint stars and not like noise. I still have to do an experiment to see if stacking removes them effectively. E.g. if you stack two sets of 20 darks and calculate the difference between the two stacks, then hopefully the RTN is gone. The same will then happen for 20 lights corrected with 20 darks.

     

  5. An updated CCDCiel has been released. The sensor analysis tool has been improved to cope with "random telegraph noise" (RTN), also called salt & pepper noise. This is what is measured using my ToupTek IMX571 camera (ATRC3CMOS26000KMA):

    [attached image: readnoise.png]

     

    The read noise for the HCG mode at gain 100 is reported to be around 1 e-. However if you measure the actual noise it is higher, around 1.6 e-. This is caused by "random telegraph noise". It manifests itself as flickering hot pixels. It is clearly visible if you calculate the difference between two darks or bias frames: several hot pixels remain visible because they flicker and are not removed by the subtraction. Only if you measure the noise using a sigma-clip routine do you get the pure read noise. Mathematically:

    read noise including RTN := stdev(bias1-bias2)/sqrt(2)

    read noise excluding RTN := stdev2(bias1-bias2)/sqrt(2)  where stdev2 is using a sigma clipping to remove hot pixels.
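    A quick way to see how the two numbers differ is to simulate a pair of bias frames with a few flickering pixels and measure both ways. This is only an illustrative sketch with made-up noise levels and RTN fraction, not the CCDCiel implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (200, 200)

def fake_bias():
    """1 e- Gaussian read noise on a 1000 ADU pedestal, plus ~1% of the
    pixels flickering with a much larger amplitude (simulated RTN)."""
    frame = rng.normal(1000.0, 1.0, shape)
    rtn = rng.integers(0, frame.size, 400)       # ~1% of the pixels
    frame.ravel()[rtn] += rng.normal(0.0, 10.0, rtn.size)
    return frame

diff = fake_bias() - fake_bias()

# Read noise including RTN: plain standard deviation of the difference.
rn_incl = diff.std() / np.sqrt(2)

# Read noise excluding RTN: iterative 3-sigma clipping rejects the
# flickering pixels before taking the standard deviation.
d = diff.ravel()
for _ in range(5):
    d = d[np.abs(d - d.mean()) < 3.0 * d.std()]
rn_excl = d.std() / np.sqrt(2)

print(f"including RTN: {rn_incl:.2f} e-, excluding RTN: {rn_excl:.2f} e-")
```

    With these made-up numbers the plain measurement comes out well above 1 e- while the sigma-clipped value recovers roughly the underlying 1 e- read noise, mirroring the 1.6 e- versus 1 e- observation above.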

     

    This is also visible in the total noise, which is defined as sqrt(sqr(read_noise) + sqr(RTN) + dark_current):

    [attached image: totalnoise2.png]

     

    There is almost no RTN in my ASI1600 camera:

    [attached image: analysis3.png]

     

    Another thing noticed while testing the ToupTek IMX571 is charge persistence (CMOS image lag). It is wise to discard the first dark since it has a higher median/average value than the following darks.

    The bars indicate the median dark pixel values. This dark series was taken directly after a flat. If the first bar is higher than the second bar (both for the same camera gain) then the camera has some charge persistence, so there is some charge left over from the previous flat image. The height and values of the other bars can be influenced by the gain setting.

    [attached image: chargepersistence.png]

     

    There is no charge persistence in the ASI1600 since the first bar has the same value as the second bar:

    [attached image: analysis5.png]

    • Like 2
  6. An image with dimensions 1920 x 1080 should not generate a warning. Are you sure there is no binning applied in two places, one in NINA itself and one in the NINA ASTAP settings? To be safe, set the binning for ASTAP to zero, which is the automatic mode.

    D80 is probably overkill for a 0.5 degree field of view but it doesn't matter. You can delete/uninstall the D50.

    Han

     

  7. In the latest beta version of CCDCiel, under the Tools menu, there is a new entry called "sensor analysis" where you can test your camera sensor for read noise, linearity and some other parameters. This is not for everyone, but if you would like to verify the reported values of your camera you could try it. It only requires a flat panel or a substitute and takes 15 minutes or less.

    Feedback is welcome.

    Han

    Download latest beta version of CCDCiel:
    https://vega.ap-i.net/tmp/ccdciel/


    Documentation:
    https://www.ap-i.net/ccdciel/en/documentation/sensor_analysis

    [attached image: analysis1.png]

    • Like 3
  8. The latest OpenLiveStacker version works well with my ToupTek APS-C IMX571 mono camera (ATR3CMOS26000KMA). I have the impression that image download is faster than with my ASI1600. Streaming at the highest resolution, 6224 x 4168, shows a high frame rate, like video.

    Some remarks:

    I could not find a setting to switch between HCG and LCG mode. That is something specific to this camera. Not a real problem.
    During daylight testing I have the impression that auto exposure does not work well for binning 8x8, or the camera is overloaded with light.
    This camera only works when 12 VDC is applied, like many other cameras. The ASI1600 can be operated without 12 VDC and cooling.

    Han

    [attached images: opening screen, bin 1x1 and bin 4x4 screenshots]

     

  9. Tonight I could again test OpenLiveStacker with my 58 mm camera lens on my ASI1600, uncooled.

    Stretching during streaming worked great. Now I get a good view of the sky. In the video below you can see a satellite moving through the sky. The video was made with the camera looking up from a fixed position. Please make stretch on by default.

     

     

    ROI, "region of interest", works well. But if I change from format "4656x3520, bin 1x1" to format "2328x1760, bin 2x2", my field of view halves from 13 degrees to 6.5 degrees (bug). Solving could fail due to this. You can see the tree disappearing when I change format.

     

     

    Exposure time is remembered in simulation. But using the ASI1600 the last used gain, format and exposure are not remembered.

    Using the SD card option still gives a runtime error after a second restart.

    The back focus of my lens is not optimal, therefore the corners of the images are distorted. I have ordered other T2 adapters to reach the required back focus distance.

     

    Han

    • Like 1
  10. Hi Steve,

    There are two types of files: 1) files already in colour and 2) RAW colour files still in a mono format. See below.

     

    1) Colour files already contain the image in the three base colours red, green and blue (the image is debayered). The same information you find in a JPEG file.

    To convert this type to mono, load the file in ASTAP and then use the viewer pull-down menu Tools, "Convert to mono (ctrl+M)".

    Example raw:

    [attached image: raw.png]

     

    Converted to colour (without smooth):

    [attached image: colour.png]

     

    Converted to mono

    [attached image: colourtomono.png]

     

    Now the same again, but just normalising the RAW file:

    2) RAW OSC camera files. These files from an OSC (One Shot Colour) camera are not yet converted to colour. They have pixels which, through filters, are only sensitive to red, green or blue, usually arranged in a Bayer matrix.

    These RAW files are still mono, but the best way to use them is to equalise the red, green and blue sensitive pixels. This is what I call normalise. It adjusts the levels such that the mean R, G, G and B levels are the same, assuming the light received is roughly white. By doing so it eliminates the typical checker pattern. Note that this option has moved to the pixel math tab in the latest ASTAP version.
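    As a sketch, the normalisation can be done by scaling the four Bayer sub-grids to a common mean. This is my own minimal illustration of the idea for an RGGB pattern, not the ASTAP source:

```python
import numpy as np

def normalise_rggb(raw):
    """Equalise the mean levels of the R, G1, G2 and B sensitive pixels of a
    Bayer RGGB raw frame, assuming the illumination is roughly white."""
    out = raw.astype(np.float64)
    target = out.mean()
    for dy in (0, 1):            # the four 2x2 Bayer sub-grids
        for dx in (0, 1):
            sub = out[dy::2, dx::2]
            sub *= target / sub.mean()   # scale this sub-grid to the target
    return out

# A flat raw frame where the colour channels sit at different levels:
raw = np.tile([[3000, 5000], [5000, 1000]], (2, 2))  # R G / G B pattern
flat = normalise_rggb(raw)
print(flat[0::2, 0::2].mean(), flat[1::2, 1::2].mean())
```

    After normalising, all four sub-grids share the same mean, so the checker pattern disappears while the full pixel resolution is kept.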

     

    Example: the raw again. The checker pattern is visible (for some cameras it is worse):

    [attached image: raw.png]

     

    Now the raw normalised. Note the sharper stars; the noise pattern is cleaner. This is the best way to use a colour camera as if it were a mono camera:

    [attached image: normalised.png]

     

    You could also bin the raw image 2x2, but you will lose resolution. So do this only if the image is oversampled:

    [attached image: rawbinned.png]

     

    In the latest ASTAP the normalise menu option has been moved from tab "stack method" to tab "pixel math 2" (CTRL+A). It normalises the RAW image in the viewer by making the mean value of the RGGB-sensitive pixels the same:

    [attached image: normalise.png]

    Han

    • Thanks 1
  11. Is this understandable?

    Pre-conditions
    1) The image is astrometrically solved (for flux calibration against the star database).
    2) The image background value has measurably increased above the pedestal value or mean dark value.
        If not, expose longer. This increase is caused by the sky glow.
    3) Apply on single unprocessed raw images only.
    4) Providing dark image(s) in tab darks (ctrl+A) or entering a pedestal value (mean value of a dark)
         increases the accuracy. If possible also provide flat(s) in tab flats. Calibrated images are also fine.
    5) DSLR/OSC raw images require 2x2 binning. For DSLR images this is done automatically.
    6) No very large bright nebula is visible. Most of the image shall contain empty sky with stars.
    7) The calculated altitude is correct. The altitude will be used for an atmospheric
        extinction correction of the star light. The altitude is calculated based on time, latitude and
        longitude. Note that poor transparency will result in lower values compared with
        handheld meters.

     

    Han

  12. I will phrase it differently. It should be measurably higher than the pedestal. Assume your pedestal value is 1000 ± noise. For a short exposure the sky background introduces maybe 1 extra, resulting in 1001 ± noise. This could be difficult to measure: the pedestal could drift, or the sky background signal could drown in the noise or be significantly less than the flat correction. A sky background addition of 100 will likely be measurable, resulting in 1100 ± noise.

    So for short exposures the SQM can be measured but could become unreliable. 

    Calibration is best for accuracy. You can apply the routine on calibrated lights and enter 0 for the pedestal. Or provide darks (and flats & flat darks) in the corresponding tabs for automatic calibration. Otherwise type in the pedestal value manually based on a dark. The better the routine can measure the sky glow influence, the more accurate the SQM value will be.

    Han

     

     

  13. Hi David,

    Sounds like an interesting experiment. The Moon will increase the sky background level, but that depends on the air humidity and dust in the sky. The elevation of the mount can be retrieved from the typical FITS header. ASTAP will try to compensate for telescope elevation by calculating the air mass and the loss of star light.

    It would be nice to link the Unihedron SQM measurements to the images to compare the results later. Keep me informed on the results :)

     

    Note ASTAP v2023-2-26 was a bad one for SQM measurement but it was only available for 2 days. Older and newer versions should work fine.

    Han

     

  14. 7 hours ago, malc-c said:

    Just want to clarify something.  If by ASCOM you mean EQASCOM (EQMOD) then you can run it in simulation mode from the Toolbox option contained within the EQASCOM folder.    I know ASCOM as the platform that runs in the background to handle communications between software packages.

    The toolbox option, I didn't know that it exists. I looked at a 12-year-old video to understand it.

     

    This is not what I meant.  You need a simulated star image based on the mount position as feedback to feed the guider. The "Sky simulator for Ascom" can do that. This simulator reads the mount position and creates a corresponding star image like a planetarium program. That star image can be read by the guider and then the control loop is closed. The simulator can add disturbances in the loop as required for the simulation.  The connections are as follows:

    [attached image: connection diagram]

     

    Han

  15. AstroMuni, 

    We are drifting a little off topic, but comparing guiding algorithms is pretty difficult. It would be nice to have two identical mounts running at the same time imaging the same object, but that is not very practical. My HEQ5 performance also depends a lot on the azimuth/altitude position. In some positions it performs well and in other positions less well. What worked best is to run the guide algorithms in simulation, apply step responses or noise, and look at the guiding results. I did that to compare the CCDCiel internal guider with PHD2 using my Sky simulator for Ascom & Alpaca. I never tried Ekos.

    Han

     

  16. Malcom,

    I got feedback that EQASCOM had a timing problem in the very beginning of the development. I can imagine that the VB compiler could play a role.

    A faster serial connection could theoretically help. My HEQ5 is about 10 years old and as far as I remember you can't change the serial communication speed. You could also evaluate using the ST4 autoguider port instead of the serial port. But I assume the lag in both the serial and ST4 connections is normally small and there is not much to gain. The Bluetooth connection is probably an outlier.

    It would be nice if they would allow transmitting the number of steps for the stepper motor. That would take the communication delay out of the control loop. Now the serial connection is doing more or less the same as manual guiding with four buttons or the ST4 interface. This proposal would probably also require an ASCOM extension.

    I have experimented with "slew to" position commands for guiding, but that works miserably.

     

    AstroMuni,

    I was not aware of the Gaussian Process approximation. I assume it does something similar to the Hysteresis setting: looking at the past to predict the future. I have experimented with some algorithms but returned to the ResistSwitch algorithm. This one looks pretty optimal. More improvements are only to be made with a better and more expensive mount, not with guiding software. I have installed the so-called belt modification on my HEQ5 but the movement is still not smooth at the arc-second level. Still, I'm happy with the mount, but there are better and more expensive mounts.

    Han

  17. The above test didn't involve guiding on a star. It just analyses the mount's indicated position change after applying guide pulses: apply a pulse and see how the RA, DEC position changes in ASCOM. The problem is that it is not a digitally controlled process. EQASCOM or GS Server can only instruct the stepper motors to start and stop and set the speed. The timing is analogue and not stable due to an unpredictable lag in the communication. That is a pity, and it is why a cable with a shorter lag time works better than a Bluetooth connection. It would have worked better if you could instruct the mount digitally to step so many motor steps. Then the communication lag would have been irrelevant. But that is not possible with HEQ5 or EQ6 mounts.

     

    About guiding:

    The CCDCiel internal guider uses a similar algorithm to what PHD2 calls ResistSwitch. It is described in the documentation.

    Personally I don't have any experience with EKOS guiding, but I noted that the PI or PID control algorithms do not work for pulse guiding, except for the P action. The reason is that the pulse guiding process is an integrating process: after a pulse correction the offset is permanently gone. A PID controller algorithm is normally used for a linear process, where an offset requires a permanent correction, which is achieved by the I-action of the controller. For the pulse-controlled guiding process the I-action is counterproductive.
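    The point about the I-action can be shown with a toy simulation. Pulse guiding is integrating: each correction pulse shifts the position permanently, so a pure P controller already removes a step offset, while adding an I term pushes the loop past zero. The gains and step counts below are arbitrary illustration values of my own, not anything from PHD2 or CCDCiel:

```python
def guide(kp, ki, offset=2.0, steps=60):
    """Toy pulse-guiding loop on an integrating process: every correction
    pulse permanently moves the mount, so the removed error does not return."""
    pos, integral, trace = offset, 0.0, []
    for _ in range(steps):
        err = pos
        integral += err
        pos -= kp * err + ki * integral   # the pulse shifts the position
        trace.append(pos)
    return trace

p_only = guide(kp=0.5, ki=0.0)   # decays smoothly to zero, no overshoot
with_i = guide(kp=0.5, ki=0.3)   # the I-action drives it past zero

print(min(p_only) >= 0.0, min(with_i) < 0.0)  # -> True True
```

    The P-only run converges without ever crossing zero, while the PI run overshoots into the other direction: exactly the counterproductive behaviour described above.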

     

    Han

     

     

  18. For the internal guider development for the CCDCiel program, I did an extensive test of the accuracy of guide pulses. So how accurately does the mount move after sending a guide pulse?

     

    The measuring principle was simple: send guide pulses of different lengths to my HEQ5 mount and measure the indicated position change. For this I wrote a test program to send ASCOM pulses to the mount, read the resulting change of the RA axis expressed in arc seconds of axis rotation, and write the results to a .csv file for analysis in a spreadsheet.

     

    After the test I came to the following conclusions:

    1. GS Server pulse accuracy is better than EQASCOM.
    2. Guide pulses below 40 ms, equal to 0.3 arc seconds at 0.5x rate, are pointless. The error is often bigger than the correction. The pulse effect (gain) also seems larger for short pulses than for longer pulses; see the trend lines in the graphs below. Note that the stepper accuracy of the HEQ5 Pro SynScan is 0.14". At 0.5x rate, the 0.14" minimum step corresponds to about 19 ms.
    3. A Bluetooth connection is much less accurate for guiding than a wired USB-to-serial cable!
    4. A guide rate of 0.5x the tracking rate is fine. For higher rates the pulses become too short; for lower rates the pulse accuracy doesn't increase.
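    The numbers in point 2 follow directly from the sidereal rate. A quick check, using the commonly quoted sidereal rate of about 15.04 arc seconds per second:

```python
SIDEREAL = 15.041                   # arc seconds of sky per second of time
guide_rate = 0.5 * SIDEREAL         # 0.5x tracking rate, ~7.52 "/s

move_40ms = guide_rate * 0.040      # sky motion of a 40 ms pulse, ~0.30"
min_pulse_ms = 1000 * 0.14 / guide_rate  # one 0.14" stepper step, ~19 ms

print(f"{move_40ms:.2f} arcsec, {min_pulse_ms:.0f} ms")  # -> 0.30 arcsec, 19 ms
```

    So a 40 ms pulse at 0.5x rate indeed corresponds to about 0.3", and the 0.14" minimum stepper step to roughly a 19 ms pulse.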

    Since then I have switched from my Bluetooth connection back to a traditional USB-to-serial cable, and from EQASCOM to GS Server.

     

    Han

     

    Test results in graphs:

    GS Server versus EQASCOM, up to 1000 ms pulses, for a USB-to-serial and a Bluetooth connection:

    [attached graph: GS Server versus EQASCOM up to 1000 ms pulses]

     

    GS Server versus EQASCOM, up to 300 ms pulses, for a USB-to-serial and a Bluetooth connection:

    [attached graph: GS Server versus EQASCOM up to 300 ms pulses]

     

    Bluetooth connection performance east and south:

    [attached graph: Bluetooth connection performance east and south]

    • Thanks 1
  19. Several programs can read the observation time from DSLR raw files. For example PixInsight, Siril and the latest ASTAP can convert the RAW file to FITS and report the time of observation behind the keyword DATE-OBS. Something like this:

    DATE-OBS= '2016-12-31T19:36:38.000' / [UTC] The start time of the exposure  

    The time behind DATE-OBS is normally the start time of the exposure, as recommended by the FITS 4.0 standard. While implementing this in ASTAP, I had doubts whether this is always true for the time extracted from a RAW file (EXIF info). I got one report that it is true for one camera producing CR2 raw files, but for raw files made with my S7 smartphone the recorded time is the end time of the exposure. I would like to check this for other DSLR cameras and I'm looking for help from DSLR camera owners to test this. Testing is in principle easy: set the time correctly in your camera, make a long exposure like a dark and manually record the start time. Then check whether the time reported in the RAW is at the beginning or the end of the exposure. Or, if the file date is correct, compare it with the file date. Feedback on this topic is most appreciated.
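    For cameras that turn out to record the end of the exposure, the start time for DATE-OBS can be recovered by subtracting the exposure length. A minimal sketch; the function name and the fixed second-resolution format are my own assumptions:

```python
from datetime import datetime, timedelta

FMT = "%Y-%m-%dT%H:%M:%S"

def date_obs_from_end(end_time, exposure_s):
    """Recover the exposure START time for the DATE-OBS keyword (FITS 4.0
    recommends the start) from a camera that records the END of the exposure."""
    end = datetime.strptime(end_time, FMT)
    start = end - timedelta(seconds=exposure_s)
    return start.strftime(FMT) + ".000"

# A 300 s dark whose EXIF time says it finished at 19:41:38:
print(date_obs_from_end("2016-12-31T19:41:38", 300))
# -> 2016-12-31T19:36:38.000
```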

    Han

     

     

  20. Do both stack results have the same image dimensions in pixels?

    The factor 2x or 3 dB noise power difference could indicate a 2x2 binning difference. A 2x2 binned image will have this noise improvement for stars and deep-sky objects.
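    The factor of two in noise can be reproduced directly: mean-binning 2x2 averages four pixels, so the noise standard deviation halves (sqrt(4) = 2). An illustrative sketch of my own, not from either stacking program:

```python
import numpy as np

rng = np.random.default_rng(1)
frame = 100.0 + rng.normal(0.0, 10.0, (512, 512))   # flat signal + noise

# 2x2 software binning by averaging each 2x2 block:
binned = frame.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(round(frame.std() / binned.std(), 1))  # -> 2.0
```

    So if one stack was binned and the other not, a noise difference of about a factor of two between them is exactly what you would expect.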

     

    Quote

    There is no single SNR number for the image. Every pixel has SNR as SNR is ratio of two quantities - signal and noise and both vary across the image, so no reason to think that there is single universal SNR across the image.

    I haven't found a working definition for this either, but the PI team claims they have one. If you have two images of the same object you could compare the SNR of the imaged object, but otherwise not. The image limiting star magnitude at 7 or 10 sigma is a much better quality measurement. This works less well for deep sky, but you could divide the limiting magnitude by the square of the median FWHM or HFD value of the stars (surface) for an image quality definition.

    Han

    • Thanks 1
  21. In ASTAP you can convert colour files to mono, so a single channel. This menu entry can be found under main menu Tools (ctrl+M).

    You can also normalise raw OSC images. This routine makes the mean values of the R, G, G and B pixels equal; see tab "stack method" (ctrl+A), button "test normalize". This removes the typical checker pattern of raw OSC images. It is intended to normalise flats but can also be applied to lights. This normalise will keep the final image sharper than converting the debayered image to mono.

    Han.

    • Thanks 1