Everything posted by symmetal

  1. Hi Craig, I just rechecked the maths for your normal gain 100 setting on the 533, and the optimum sky background ADU value is actually 2890. This is much more reasonable. 😀 It's 93 ADU higher than your dark frame mean ADU, not the 26 ADU I calculated previously. My 071 OSC camera at a similar gain setting has its sky ADU 205 above the dark, which is reasonable as it doesn't have an HCG mode and so has significantly higher read noise. My 6200, which does have HCG, has its sky ADU 82 above the dark. All comparable figures. Sorry I got the figure wrong before; I had so many windows open I probably read the wrong graph at the time. So, for gain setting 0 your sky background ADU is 2986 with the same offset of 70. This is almost the same as at gain 100, because the high offset contributes the bulk of the calculated value: 2800 ADU of it is purely down to the offset of 70. At gain 100 the extra 90 ADU is there to combat read noise; at gain 0 an extra 186 ADU is needed. When doing test exposures it may be easier to subtract the 2800 ADU offset from your mean results, to get the actual change in ADU due to the exposure itself, which makes it easier to estimate the right exposure. Alan
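     To show where those numbers come from, here's a minimal Python sketch of the arithmetic, assuming (as found further down this thread) that one offset step on the 533 adds 10 native ADU, and using read-noise and e-/ADU figures consistent with the values quoted in this post; treat them as approximate readings off ZWO's graphs, not measurements.

         # Sky background target where sky noise swamps read noise (10*RN^2 rule),
         # expressed in 16-bit display ADU. Figures below are approximate.
         def sky_target_adu(read_noise_e, gain_e_per_adu, offset, offset_step_adu, bit_depth):
             swamp_native = 10 * read_noise_e**2 / gain_e_per_adu   # read-noise term, native ADU
             pedestal_native = offset * offset_step_adu             # pedestal from the offset setting
             return (swamp_native + pedestal_native) * 2 ** (16 - bit_depth)

         # ASI533 at gain 100 (HCG on), offset 70, ~1.5 e- read noise, ~1.0 e-/ADU:
         print(round(sky_target_adu(1.5, 1.0, 70, 10, 14)))   # ~2890 ADU; 2800 of that is the offset pedestal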
  2. No, if you alter the gain, the level of read noise changes, so the optimum sky background ADU will change as well. You can do the calculations as detailed in the other thread I mentioned above, or let me know what gain values you might use and I'll give you the new values. Changing the offset alters it as well, but it's easiest to keep the offset fixed, and yours is such a high value anyway that keeping it fixed won't be a problem. Having the offset too high is not a big deal; it just means you lose a bit of dynamic range as the whole histogram is shifted to the right a little. The 533's HCG mode is turned off below gain 100, so the read noise is noticeably higher there, requiring longer exposures to 'neutralize' it. 🙂 Alan
  3. While tiff can store multiple layers, I'm not sure whether all the layer and mask attributes (advanced blend modes, layer linking and locking, etc.) will be saved in the tiff file. Photoshop usually warns you if information will be lost when saving, so if you haven't had any warnings you should be OK leaving it as tiff. I imagine PS is more efficient at storing its own image data, so it creates smaller files than tiff, which is a more generic storage format. Alan
  4. I agree with Merlin66 that it's worth trying Autostakkert, as that's generally the preferred choice at the moment for video frame stacking. I can't say why you had trouble with your .ser file in Registax, but converting it to .avi via PIPP is not a lossy conversion, so no data is lost if it's an 8-bit video. AVI cannot encode 16-bit data, so I'm not sure whether PIPP would refuse to convert 16-bit ser files to avi, or whether it gives the option to convert to 8-bit first, in which case image detail may be lost. It's worth noting that the avi format is only a container for one or more audio and/or video streams, and the streams themselves can be lossless or lossy depending on the actual encoding format used. A video converter program should give the option of which encoding format is used when an avi file is created, but some downloadable free ones don't. PIPP, however, does generate lossless avi files (as long as the source video is 8-bit). 🙂 Alan
  5. Thanks for doing the test Craig. That's a relief; I was beginning to doubt all my calculations and worry that I was giving you false information. 😀 I don't know why ZWO has chosen such a high offset ADU for the 533 camera, but there must be a reason, so I'd leave it at the default value of 70. Perhaps at the extremes of the gain settings, should you wish to use them, the high offset is necessary to avoid possible clipping of the blacks. Anyway, your new optimum sky background ADU of 2823 is still way below your current 5000 ADU, so I would at least try a sequence of shorter exposures of 1 minute, as in my test mentioned previously, and see if you get better results. Also, next time you're out imaging, you could just see what exposure gives a mean ADU of around 2823. If it's only around 15 sec or so, it implies that the read noise in the camera's HCG mode is low enough to be insignificant even at very short exposures. To save ending up with 500 15-second exposures to stack, I'd stick to a minimum of a minute when trying this out, to avoid needing lots of hard disk storage and taking ages to stack. 😀 Finally, to see if the 0 values go after calibration, you could try using a bad pixel map when calibrating. Alan
  6. That's a high dark frame mean value, and a correspondingly high minimum. I can only assume that your offset value increases the camera's native ADU by 10 for each increase of offset by 1, like my 6200. That calculates to an optimum sky background ADU of 2823, which is closer to your readings but is only 26 ADU above your dark mean value. My 6200's optimum sky background ADU is 82 ADU above its dark mean value of 501. As my camera is not at unity gain at that point, that could account for the difference. 🤔 If you wish, but really to satisfy my curiosity, 😄 you can do the test I mentioned above of taking two darks, or bias frames, with one of them having the offset increased by 10, and see how much the mean pixel value increases between them. If it increases by 400 then my assumption above is right, and the camera is meant to work with high offset values for some reason. Alan
  7. Ah! I didn't realise that you only got the zero pixel values after calibrating. If the raw subs aren't showing 0 values then the offset doesn't seem to be a problem, and the default value is probably fine. 🙂 At least you got some info on offset if nothing else. 😄
     Darks are usually used with CMOS cameras as they tended to suffer from 'amp-glow', which increases with exposure, so darks were necessary to counteract it. Newer CMOS cameras have greatly reduced or even eliminated amp-glow, so darks are not as important. I've found with my 6200 that there is no amp-glow visible even with a heavy stretch after a 600s exposure, although there are a number of clumps of pixels that end up with values around 10 ADU above the background level after a long dark exposure, so I continue using darks for that reason. Checking real images with and without dark calibration, though, these clumps are not really apparent in the actual image. There's an argument, therefore, for using bias instead of darks for the latest CMOS cameras, just as was done with CCD cameras. Some CMOS cameras do apply different internal processing for short exposures (less than 1 second or so) compared to long exposures, so short bias frames don't calibrate out properly; a 1 or 2 second dark could be used as a bias in that case, I suppose. With my 6200 I could find no difference between true bias frames and 2-second darks, so I use bias to calibrate flats, and use short-exposure flats, with no apparent problems.
     Hot pixels left in your bias (or darks) would likely give a 0 pixel in your calibrated subs if subtracted in their entirety, though I'm not sure how hot pixels are dealt with in the bias/darks calibration procedure. I would hope the bad pixel map is applied to the darks/bias before calibrating; if so, a bad pixel map may avoid your 0 pixels after calibrating.
     It's quite normal not to see much detail in a single CMOS sub at the lower exposure times, but as the read noise is significantly lower now compared to CCD or older CMOS cameras, once the read noise is insignificant compared to the sky background noise it makes no difference, as far as noise is concerned, whether you take many short subs or fewer longer ones. Short subs are less critical of tracking/alignment issues and will have less star bloat. 🙂
     Are your raw subs showing the 5000 ADU average pixel level, or is it only after calibrating? At a Bortle 5 sky that seems pretty high for just a 3 min exposure; maybe someone else under a similar sky can comment for comparison. You would reach the 370 ADU I calculated to render read noise insignificant in 10 seconds or so, so I'm a bit confused. 🤔 What's the average pixel value of your bias or dark frames? My ASI071 dark frames have an average ADU of 284, and it takes 4 mins of exposure to bring the average ADU up to around 526, which is the calculated optimum sky background ADU for that gain/offset setting.
     As a test, you can take say 10 three-minute exposures and 30 one-minute exposures, then stack the two sets and compare how they look. I expect the detail will be the same in both, the 1 min stack may even be sharper if you have some tracking/alignment issues, and the clipped stars should be smaller. 🙂 Alan
  8. For the above recommended 370 sky background ADU, I assumed that a 1-step increase in offset gives a 1 ADU increase at the camera's bit resolution. To check whether this is correct, take an image (bias or dark, it doesn't matter) and look at the average ADU value (the median is better than the mean, but they should be very close anyway). Then take another image after increasing the offset by 10. If the average has increased by 40 then my ADU assumption is correct and the sky background ADU figure is right. 🙂 Converting from the camera's internal 14-bit to the 16-bit values usually shown in image statistics means multiplying the ADU value by 4, so 10 x 4 = 40. Alan
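     A Python sketch of that check, for anyone who prefers to read the frames directly; the file names are placeholders for two bias (or dark) frames taken with offsets 10 apart.

         # Compare the medians of two bias/dark frames whose offsets differ by 10.
         # Values are in 16-bit display ADU as saved in the FITS files.
         import numpy as np
         from astropy.io import fits

         frame_a = fits.getdata("bias_offset_50.fits").astype(np.float64)
         frame_b = fits.getdata("bias_offset_60.fits").astype(np.float64)

         per_step = (np.median(frame_b) - np.median(frame_a)) / 10.0
         print(f"16-bit ADU per offset step: {per_step:.1f}")
         # ~4  -> 1 native ADU per step on a 14-bit camera (the 1:1 assumption above)
         # ~40 -> 10 native ADU per step (as found on some cameras discussed in this thread)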
  9. No problem. 😀 Your average pixel values seem fairly high in both cases, so I would go for a shorter exposure. The average pixel value is pretty close to the level of the sky background, and once the noise from the sky background is significantly higher than the camera read noise, the sky background is the dominant noise source. Further exposure gives little benefit and just reduces the dynamic range of your images, as the left-edge peak of the histogram moves towards the right edge with increased exposure.
     Looking at the read noise / gain graphs for the 533, I assume you're using gain 100, which is unity gain and where the HCG (high conversion gain) mode kicks in, giving lower read noise and increased dynamic range. If you look through this thread from last week you can see the details on calculating the recommended average ADU values at which the sky background noise just swamps the read noise. For the 533 at unity gain 100 and offset 70 (standard, I believe), the optimal sky background works out at just 370 ADU to make the camera read noise insignificant. Once the read noise is insignificant at, say, 3 mins exposure, there is no difference between stacking 10 subs at 3 min and 2 subs at 15 min, so you may as well take 10 subs at 3 mins. There's less chance of eggy stars due to alignment/tracking errors, or plane/satellite trails, and more subs means those trails are removed when using sigma stacking.
     For my 071 at unity gain, the optimal sky background is 526 ADU. It doesn't have an HCG mode, so the read noise is higher. I reach this ADU level (pretty much the same as the average pixel value, as detailed in the link above) in a 3 min exposure, and that's the exposure I use all the time (at unity gain). I'm in a Bortle 3 zone, which is pretty dark. Getting average pixel values of around 5000 at three minutes (assuming you're using nominal unity gain) implies you have significant light pollution where you are, and it's better to take more exposures of a shorter duration to give your images a higher dynamic range and less clipping of stars. An exposure of 60 seconds would likely be better, or even less if the average ADU is still well over 370; see the sketch below for turning a test exposure into a suggested exposure length.
     To see whether the offset is too low, or the zero is just the result of a column of added black pixels, try an exposure at maximum offset and see if the minimum pixel value is greater than 0. If it is, you can, if you wish, set the offset to a value where the min value is just above zero. If it stays at 0 then you probably have a fixed black column and the default offset is most likely fine. Hope that helps Craig and I haven't confused you further. 😄 Alan
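     Here's a hedged Python sketch of that exposure estimate, under the assumption that the sky signal above the dark/offset pedestal scales roughly linearly with exposure time; the numbers in the example are purely illustrative, not measurements from Craig's camera.

         # Scale a test exposure towards the recommended sky-background ADU.
         # All ADU values are 16-bit display units; dark_mean_adu is the pedestal
         # (offset plus dark current) measured from a matching dark frame.
         def suggested_exposure_s(test_exposure_s, test_mean_adu, dark_mean_adu, target_adu):
             sky_signal = test_mean_adu - dark_mean_adu      # sky contribution in the test sub
             wanted_signal = target_adu - dark_mean_adu      # sky contribution at the target
             return test_exposure_s * wanted_signal / sky_signal

         # Illustrative only: a 180 s test sub with mean 5000 ADU over a 2800 ADU pedestal,
         # aiming for a 2890 ADU target, suggests roughly 7 s.
         print(round(suggested_exposure_s(180, 5000, 2800, 2890)))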
  10. As long as all your XLR connectors are wired with +12V and 0V on the same pins, you won't have any trouble. It's just if you swapped equipment with someone else who had wired them differently that problems could occur. kirkster501 in the first post used a 'non-standard' wiring convention, but it doesn't matter as all his XLR connectors were wired the same. If you're going to install new XLR connectors from scratch then it's best to follow XLR convention and make pin 1 Ground or 0V (or the battery negative if the whole thing is floating). It's best to have the negative power rail connected to ground near the supply, especially if you are using cheaper switch-mode power supplies which only have 2-pin mains connectors without earth. If you want to follow the 'professional' code you would use 4-pole XLR connectors and connect the +12V to pin 4, leaving pins 2 and 3 unconnected. You can instead use 3-pole XLR connectors, as many here do, and wire 0V to pin 1 as standard and +12V to the furthest-away pin, which is pin 2. Note that 3-pole and 4-pole XLRs have the pin numbers in a different order. In my reply to kirkster501 I said he had used pins 1 and 3 on his XLRs, but he had in fact used pins 1 and 2. 😉 Hope that helps wookie. 😀 Alan
  11. The 533 pixel size is 3.76um compared to the 5.19um pixels of the 1100d, so each 1100d pixel will collect roughly twice the number of photons as a 533 pixel in the same exposure time, and the average ADU would be about twice as high for the 1100d on pixel size alone. As the 1100d is uncooled, the dark current will also be significantly higher over a 3 min exposure, giving a higher average ADU for the 1100d darks, which is added on to the pixel values. The relative gain settings between the cameras will also make a difference to the average ADU. If the 1100d at your ISO setting effectively has twice the amplification (i.e. half the e-/ADU figure) of the setting used on the 533, the average ADU will be doubled too. I found that the ZWO standard offset value is too low on the 1600 and you do get black clipping; the ASI071 and 6200 standard offsets seem to be OK in that respect. Some cameras have a column of 'black' pixels down one side, presumably because the sensor has an odd number of pixels in one direction. I know one of my cameras does but I can't remember which one. If it's set to black after the A-D converter then it will always be black and you have to ignore it, or clip the column off in processing to see the true black value. As long as the actual image pixels are above 0 ADU you're OK. Alan
  12. Yes, a single black pixel would mean the min value can be reported as 0. It's also an indication that the offset value set in the camera is too low; especially at the black end of the histogram, where all the stretching takes place, you don't want values from the camera falling outside the range of the A-D converter and so being reported as zero. I've found with the ASI cameras I've used that the minimum is often clipped to 1 ADU (at the camera's native resolution) rather than 0, just to avoid the dreaded zero value which will always give a black pixel no matter how much it's stretched. The 533 is a 14-bit camera, so ADU values will move in steps of 4 when viewed as a 16-bit histogram. Normally, taking a bias frame and examining the histogram (FitsLiberator gives a good view of the black end of the histogram) will show the bias noise bump and its ADU values. As some CMOS cameras can apply different internal processing between short and long exposures, to be on the safe side take darks of around 1 second or so and examine the histogram. Increase the offset until you're well away from the zero minimum value, or the 4 minimum value if it clips to 1 ADU at 14-bit. I'd suggest a minimum value of at least 12 or 16 ADU. This allows image calibration to be carried out with more accuracy, as the noise spectrum is better represented, and avoids dark spots in your images. 🙂 Alan
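     If you'd rather check the numbers than eyeball the histogram, here's a small Python sketch of the same black-end check; the file name is a placeholder for a ~1 second dark.

         # Look at the low end of a short dark to see how close it sits to zero.
         # ADU values are 16-bit as stored in the FITS file.
         import numpy as np
         from astropy.io import fits

         dark = fits.getdata("dark_1s.fits").astype(np.float64)
         print("min ADU:", dark.min())
         print("0.1th percentile ADU:", np.percentile(dark, 0.1))
         # A minimum stuck at 0 (or pinned at 4 on a 14-bit camera that clips to 1 native ADU)
         # suggests raising the offset and repeating until the low tail is comfortably above zero.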
  13. It's only subscription-based for receiving future updates; the version you currently have will still work forever, even if you don't take out a subscription. Since having a dual-rig setup I thought that dither sync would be a necessity to avoid losing too many images, so I was disappointed when SGP still didn't offer it. In reality, though, it makes little difference for me. I have a mono camera taking L and NB and an OSC camera for RGB. I have the mono controlling dither and so lose a quarter to a third of the colour images, but you still end up with more colour images than you need, especially if you're just using them for star colour. For just an LRGB image I may have the colour camera controlling dither, and lose a 60s L image every so often. Not a big issue, I've found. 🙂 I tried NINA but found the layout very cluttered, and it didn't do a feature I liked in SGP. I prefer the SGP layout where everything necessary is available in a relatively small screen area. I first started out using APT and have kept up the yearly payments as it costs very little. 🙂 SGP's framing wizard was the clincher at the time for swapping to SGP. I tried Ekos/KStars, installing Ubuntu on the mount's mini PC, and it all worked fine controlled from my main PC, apart from refusing to recognise the Atik camera or filter wheel. After a day or so I gave up and went back to SGP. 🙂 Maybe I've been lucky, but I haven't had issues with crashes or lock-ups. Alan
  14. I agree Ken, that the FSQ has a top reputation and should be expected to be 'perfect' for that price. I was referring to several posts on the forum from people that were disappointed with the edge performance on their FSQs and ended up sending them off for realignment, to France I believe, and that they came back much improved. That was my reference to being lucky, as to whether you get a perfectly aligned one 'off the shelf'. Hopefully, someone who also uses a similar FSQ with a full frame sensor can comment as to whether what you are experiencing could be improved or not. Alan
  15. I suppose if you're imaging near vertical then the heat from the camera cooler is rising past the scope, but unless it's running full blast I doubt it has much effect. 🙂 Having an anti-dew heater on the sensor protect window does release a bit of heat into the tube, but it gets blocked by the filters and field flattener etc., so again I wouldn't think it's significant. My ASI6200 glass heater consumes about 300mA from the 12V supply, so that's 3.6W of heat being released. Once you've got an auto-focuser you'll soon realise how handy it is, as you just focus on your target rather than a bright star, and it will run auto-focus again automatically if the outside temperature changes by a preset amount. Alan
  16. It doesn't really matter as camera cooling doesn't tend to make the sensor move backwards or forwards and so the focus won't change. 🙂 However, if the outside temperature cools significantly between focusing and imaging, the focus of the whole imaging train may well change, so having autofocus, I tend to leave focusing as the last item on the list before starting imaging. Alan
  17. The individual images have to be aligned by the stacking software. Unless you have perfect polar alignment, a perfect tracking mount or perfect guiding, and take all your subs in one session on one side of the meridian, you'll end up with misaligned images. 🙂 A slight polar alignment error while taking a series of short unguided subs gives the 'walking noise' effect, which is the result of a form of dither in one direction only, so the sensor's fixed pattern noise shifts by a small but consistent amount between subs. Making these dither movements between subs in a random direction and distance means the fixed pattern noise is 'randomized' and so can be removed by stacking enough subs. A meridian flip is an extreme example of a dither, as is taking subs over different sessions, since centring the target at the start of a new session is the same as a large dither of the size of your centring precision. Polar alignment errors also mean each image will be slightly rotated relative to the others, so good stacking software will rotate each image by a sub-pixel amount as well as shift it in x and y to best align the images. This of course means the edges of the stacked image have to be cropped off, and large dither movements generally mean more cropping, but the benefit of dithering outweighs this. Alan
  18. Dithering is done to remove or reduce defects in the camera, primarily fixed pattern 'noise'. It can also remove hot/cold pixels, though other methods can help with those, like darks and/or bad pixel maps. These defects are in the same position on each frame and are not related to what target the camera is pointing at, or its size, so the image scale has no bearing on how much to dither. 🙂 The most obvious fixed pattern noise in DSLRs is the colour mottling effect, where clumps of adjacent pixels tend to have a distinct colour cast. These clumps seem to extend up to a maximum of around 12 pixels across, which is why 12 pixels is chosen as the dither amount for DSLRs. Cooled OSC CMOS cameras seem to have a similar mottling effect but it's less pronounced, probably due to being cooled. Other fixed pattern noise effects are based on slight sensitivity differences between pixels, but these don't seem to appear in such large clumps as the colour mottling, so dithering by just one or two pixels is usually sufficient to combat them. So mono cameras are usually fine with around 2-pixel dithers. I've found that 'random' dither in PHD2 seems to prefer a specific direction, so the final stacked image needs cropping quite a bit on just 2 adjacent sides. Using circular dither seems to lessen the amount of cropping needed to around 6 pixels a side. 🙂 Alan
  19. Hi Adam, I assume you're using the Ascom driver for the 1600, as your file naming didn't know what the offset was. It's a limitation of Ascom that offset is not a parameter available to the capture program through the driver. If you use the native ZWO driver then the offset parameter is available to the capture program to be read and written. With the Ascom driver it can only be set in the driver control panel itself; the capture program will give you the option to open the Ascom driver and set the offset on the camera, but it has no way of knowing what you set it to. To view the Ascom offset value you have to click the 'Advanced' box on the driver shown here.
     When it first came out there was much debate on what offset value is optimum for different gain settings, but it became such a pain having to alter it all the time, and since altering the offset has an insignificant effect on dynamic range, it's not worth worrying about; just set it to a high enough value that it works at all gain settings without black-clipping data. In later driver updates ZWO set it to 50 and hid it behind the Advanced box so it couldn't easily be changed by accident. I had mine set to 56, as above, but have now had to increase it to 64 (the same as vlaiv has his 1600 set to), as I was still getting slight black clipping on dark flats of about 1 sec duration.
     You may be wondering why seemingly adding fixed ADU values using offset works and what it actually does. If it just added a fixed number of digital ADU units it wouldn't really do anything useful. But offset and gain work in the analog domain, not the digital. The analog voltage read from each pixel is processed by an amplifier before it goes to the A-D converter to create the digital ADU value. You want the min and max signal voltages read to lie between the A-D converter's reference voltage limits, which correspond to ADU values of 0 and 4095 (for a 12-bit camera like the 1600). The A-D converter may be designed to accurately convert an analog input signal between, say, 0.5 and 1.5 volts. If the A-D converter is powered from just 5V, you don't want the input signal too close to the 0V power rail as things may get non-linear, so a value above 0 volts is taken as the minimum input to the converter. You can now see what the offset value does: it adds a fixed DC voltage to the analog input signal such that the lowest signal value into the A-D converter is, say, 0.5V, using the example above. A covered sensor pixel may give an output of plus or minus a few microvolts, depending on the noise present. You want all those values to be correctly converted, so a fixed positive DC value (corresponding to the offset) is added to all the pixel readings so they always fall within the range of the converter. The gain just amplifies the signal so that more digital units can be used to represent the smaller analogue values, giving a higher digital resolution. Of course, if the gain is too high you get excess white clipping and loss of signal information; bright stars will always end up white-clipped, so the gain setting is a compromise. Similarly, too low an offset gives black clipping and loss of signal or noise information.
     Back to your post, 😁 your ADU values are a bit higher than mine. You have to fit a curve to the graphs, so values between the plotted points are a bit lower than those shown by the straight lines. At zero offset and camera gain 75, I got 2.00 e- read noise and 2.15 e-/ADU gain, giving 298 ADU. At camera gain 200 I got 1.45 e- read noise and 0.50 e-/ADU gain, giving 673 ADU. Your adding of 400 to each value for offset 25 is correct. As it's most likely you are using the default offset of 50, the sky limit ADU values to aim for become 1098 ADU at gain 75 and 1473 ADU at gain 200. Hope that helps Adam. 🙂 Alan
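     For anyone who wants to verify those 1600 figures, here's a short Python sketch of the same sums; the read-noise and e-/ADU numbers are the approximate graph readings quoted in the post, and since the 1600 is a 12-bit camera, native ADU are multiplied by 16 for 16-bit display.

         # Sky-limit ADU (16-bit display) for the ASI1600 using the 10*RN^2 rule.
         # Offset is assumed to add 1 native ADU per step on this camera.
         def sky_limit_adu(read_noise_e, gain_e_per_adu, offset, bit_depth=12):
             swamp_native = 10 * read_noise_e**2 / gain_e_per_adu
             return (swamp_native + offset) * 2 ** (16 - bit_depth)

         print(round(sky_limit_adu(2.00, 2.15, 0)))    # ~298 ADU, gain 75, zero offset
         print(round(sky_limit_adu(1.45, 0.50, 0)))    # ~673 ADU, gain 200, zero offset
         print(round(sky_limit_adu(2.00, 2.15, 50)))   # ~1098 ADU, gain 75, offset 50
         print(round(sky_limit_adu(1.45, 0.50, 50)))   # ~1473 ADU, gain 200, offset 50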
  20. It's best to wait an hour or so after astro dark begins before doing the tests, to get a more realistic sky background value, as the sky keeps getting darker for a while after astro dark 'officially' begins. 🙂 If you tell me what camera you're using Adam, and the gain and offset values you've used, I can calculate the sky background ADUs for you if you don't fancy doing the maths. 😁 Same for you Edward, if you want to use a gain other than 120, though that looks like a good value to use, at least for starting out. For narrowband there is some merit in using higher gain; twice unity is often used, though I tend to use unity gain all the time, just being lazy to avoid needing multiple sets of darks. 😁 Alan
  21. Look at the image statistics (I believe most image capture programs have this option): the median ADU value is the sky background. Also, I don't know if APT has it, but normally moving the mouse over the preview image gives a readout of the ADU level under the cursor, so you can move the cursor around a patch of sky background and take an average value. It's best to use an area of sky without a lot of nebula for the test, as nebulosity can affect the median value so that it no longer reflects the true sky background. Of course, the exposure needed to reach the sky background depends on the filter used: L will be shortest, and R, G and B around 3 times longer, assuming you use the same gain setting. For R, G and B it's best to end up choosing the same exposure for each, even though that may not be optimum for all of them, just to avoid needing multiple dark exposures to match. If you do use a different gain, you need to recalculate the sky background ADU for that gain setting. For OSC the mean value may be better to use than the median, depending on whether the preview is debayered; they should be fairly close anyway if you choose an area away from the Milky Way for the test. 🙂 The swamping sky background value is only really useful for LRGB or OSC imaging. For NB you're unlikely to be able to expose long enough to reach the ADU limit unless you expose for an hour or more, or you're at a very light-polluted site. If you use a much higher gain for NB you may well reach it in a shorter time, but remember you have to recalculate the limiting ADU value if the gain is changed. Alan
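     As an alternative to the capture program's statistics panel, here's a hedged Python sketch that pulls the same numbers from a saved sub; the file name is a placeholder.

         # Median (and mean) ADU of a test sub as a stand-in for the sky background reading.
         import numpy as np
         from astropy.io import fits

         sub = fits.getdata("test_sub.fits").astype(np.float64)
         print("median ADU:", np.median(sub))   # close to the sky background if the field isn't full of nebula
         print("mean ADU:  ", np.mean(sub))     # can be pulled up by bright stars or nebulosity
         # Compare the median against the swamping target for your gain/offset and filter;
         # if it's already well above the target, shorten the exposure.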
  22. It looks like you have an automatic tilt adjuster there. Well worth the price. 😄 Here are the corresponding CCDI plots. Tilt is almost perfect: none in the Y plane, and very slight in X, shown by the left edge being slightly higher than the right; it's reduced from 1.1" to 0.2". The min to max FWHM has gone from 2.91 - 4.21 to 2.47 - 3.20, which is significant, and best focus is now in the centre. The aspect has improved too, although the colours may not suggest that; it seems to always plot min to max as black to pink. Aspect has gone from 6 - 76 to a much closer 8 - 37. By coincidence, like my FLT-98, stars on the far left tend to be radial while those on the far right tend to be tangential with a hint of coma. Mine are a bit worse than yours in some respects, particularly the coma. I found altering FF spacing and tilt could make them worse, but they always had the same type of distortion. It looks like with a sensor this size you maybe need to spend a lot more on the scope to improve it, unless you're lucky. 🙂 The 3D plot just corresponds to star FWHM values, with worse values plotted higher in the Z plane; this indicates tilt if best focus is not in the centre, as in the previous image. You aim for the X and Y FWHM 'curvature' to be symmetrical about the centre, which yours pretty much now are. 🙂 (Curvature / 3D / Aspect plots attached.) Alan
  23. Hi Adam, There were a mass of topics on this on the CN forums after the ASI1600 came out. It applies to all cameras, but especially CMOS cameras with variable gain and offset. This one, I think, got me started. The replies by Jon Rista (who seems to be their version of vlaiv 😄) from post 3 give the theory. Also this post and this one add further discussion and info, if you have a couple of spare evenings. 😀
     A read-noise-swamping background ADU is calculated where the background ADU is 10*RN^2. Others use 5*RN^2 or just 20*RN, but the first one is most often used. You need to ensure that consistent units are used in the calculations. The gain values used in the calculations are those read as GAIN in electrons/ADU from the graphs shown above. If you choose the unity gain value of 117 (in 0.1dB units), the value you enter on the camera driver and always a good starting point, the gain is 1.0 e-/ADU, and the read noise at this gain value from the graphs is about 5.7 e-. The ASI294 HCG mode starts at gain 120, which should theoretically give better noise results, so I chose gain 120, which is about 0.9 e-/ADU, slightly higher than unity gain. The read noise at this value is only about 1.8 e-.
     10*RN^2 = 10 * 1.8 * 1.8 = 32.4 e-. To convert to ADU, divide by the gain in e-/ADU: 32.4/0.9 = 36 ADU. The offset value (default 30 for the ASI294, I believe) effectively just adds a fixed ADU value to every pixel, so you need to add this to the above figure to make it comparable to the sky background ADU readings. Therefore add 30 ADU to the 10*RN^2 value: 36 + 30 = 66 ADU. As this is a 14-bit camera, the ADU values quoted above are 14-bit values. To display as 16-bit values, as used in most capture programs, you convert to 16-bit by adding 2 zero-value LSBs (least significant bits), which effectively multiplies the 14-bit ADU values by 4 (i.e. 2^2). So a 14-bit sky background value of 66 ADU becomes a 16-bit value of 66 * 4 = 264 ADU. Whoops! It's not 280. I must have read the read noise and gain graphs a bit differently when I came up with 280; I read them rather quickly when I posted yesterday. 😉 If you want to get the sky background ADU for different gain and/or offset values, just read the relevant values from the graphs and do the maths. 🙂
     For the offset I assumed that an increase of 1 in offset increases the ADU by 1. This is true for the ASI1600 (12-bit) and ASI071 (14-bit) cameras I have. However, my ASI6200 (16-bit) increases the ADU by 10 for every 1 increase in offset, which threw me when I first calculated the sky background values assuming ADU/offset was 1 to 1. You can easily check what it is by reading the peak value of the noise in a bias frame, increasing the offset by say 10, and taking another bias frame. If the noise peak increases by 10 (at native bit depth) it's 1 to 1. Remember, if displaying at 16-bit it will show an increase of 40 (for a 14-bit camera) or 160 (for a 12-bit camera). Anyway, that's the theory Adam. I hope it helped and hasn't addled your brain. 😁 Alan
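     The same arithmetic as a short Python sketch, using the approximate graph readings quoted above (gain 120, ~1.8 e- read noise, ~0.9 e-/ADU, default offset 30, 14-bit camera) and assuming 1 native ADU per offset step for the 294.

         # Worked ASI294 example from the post above, step by step.
         read_noise_e = 1.8                      # e-, approximate value from ZWO's graph at gain 120
         gain_e_per_adu = 0.9                    # e-/ADU at gain 120
         offset_native_adu = 30                  # default offset, assumed 1 native ADU per step
         bit_depth = 14

         swamp_e = 10 * read_noise_e**2          # 32.4 e-
         swamp_adu = swamp_e / gain_e_per_adu    # 36 native ADU
         total_native = swamp_adu + offset_native_adu         # 66 native ADU
         total_16bit = total_native * 2 ** (16 - bit_depth)   # x4 -> 264 ADU as displayed

         print(round(total_16bit))               # 264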
  24. The Magic Lantern 1920 x 720 mode is a one-to-one pixel-ratio format, as the 6D's horizontal resolution of 5472 divided by 1920 is 2.85, which is the image scale gain quoted by the report. If the 640 x 480 crop mode is available as standard that would be fine, but there doesn't seem to be clear evidence that it is. 🤔 The Magic Lantern mode is 10-bit to enable raw recording at higher resolutions at, presumably, standard frame rates. Whether that means a higher frame rate than standard video is available at 1920 x 720, I don't know. Alan
  25. Hi John, There are plenty of tutorials on using/installing Magic Lantern on the site and YouTube. You install it on the camera's SD card, and when powering up, the camera looks to see if any boot software is on the SD card. If there is (in this case Magic Lantern), it loads and runs that software; if it doesn't find any boot software (as with a normal SD card), it then proceeds to load the normal internal Canon software. Looking at the PIPP downloads page, since version 2.5.6 (May 2016) it supports Magic Lantern .mlv files. 🙂 In the video above it generated .raw files from Magic Lantern; it seems the latest versions generate a more universal raw format with a .mlv extension, which is the one PIPP uses. Alan