Everything posted by IanL

  1. The mesh effect is a pretty standard part of the diffuser on these panels. Really the only test that matters is to take some flats and check them before you start tearing it down.
  2. OK, I was doing some flats yesterday which weren't satisfactory. There was a light band across the width of the frame (about 5% of the total height of the frame), followed by a narrower dark band and then a lighter band shading into a more 'normal' looking flat. This was across the R, G, B and L filters. I also thought it might have been daylight leaking in, so I made a cardboard shield to block/reduce any light coming in from the narrow gap where parts of the observatory wall/roof meet. It made no difference.

     I had set the flat panel to 255 (maximum) brightness and had calibrated the exposure time to produce 30,000 ADU using SGPro's flat calibration wizard, giving me exposure times of 0.01 - 0.03 seconds depending on filter. This had worked pretty well for previous images, so I'm not sure what produced this change. I suspected banding due to either PWM flicker (in theory the circuit should have the LED on for 100% of the duty cycle at that setting, but who knows?), or possibly some kind of camera/AC mains frequency issue. The dark stripe on the red filter was in a slightly different position than on the L, G and B - and the red has about 3x the exposure of the L, supporting my theory.

     I set the flat panel to 10 brightness (very dim) and re-ran the flats calibration wizard, producing exposure times of 0.13 to 0.51 seconds depending on filter. This is still short enough to avoid the 1600MM (non-Pro) long exposure gradient issue, and I was hoping the longer exposure would average out any flickering or similar. I've just captured some new flats and they seem to be free of the artefacts found with the much shorter exposures. I don't know if it will help you, but I'd definitely recommend trying something similar by either dimming the EL panel or using some kind of diffuser/filter over it.
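     If you want to check a flat for this kind of banding numerically, a rough sketch like the one below works (Python with numpy/astropy; the file name is just a placeholder for one of your own flats). Horizontal bands show up as sharp features in the per-row medians:

        import numpy as np
        from astropy.io import fits  # assumes astropy is installed

        # Hypothetical file name - substitute one of your own flats
        flat = fits.getdata("flat_L_001.fits").astype(np.float64)

        # Median of each row: flicker-induced banding shows up as sharp
        # dips/bumps running across the width of the frame, whereas
        # vignetting varies smoothly.
        rows = np.median(flat, axis=1)
        profile = rows / np.median(rows)
        print("row-to-row spread: %.2f%%" % (100 * (profile.max() - profile.min())))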
  3. Other light source adding to the flat maybe - daylight or a street light?
  4. You may have done this already, but the first thing I'd try is taking a flat (call it "A"), taking a second flat (call it "B"), then physically rotating the panel 90 degrees and taking another one (call it "C"). Then:

     - Compare the median ADU values of "A" and "B". They should be pretty much the same, give or take a few ADU or tens of ADU. If they're markedly different then you either have a problem with the light source (flickering at a frequency high enough to create different brightness levels from frame to frame, which may appear as a gradient), or perhaps the camera/electronics, though that seems less likely.

     - Verify this by subtracting "B" from "A" using PixelMath in PixInsight or something similar. To do this properly you'd probably want to use an expression like (A + 0.1) - B, so that noise doesn't end up clipping loads of pixels to negative values that are then truncated to zero. All being well you should end up with a result frame with a median value close to 0.1 (using the normalised real range in the Statistics tool). If you end up with a value that is closer to zero or well above 0.1 then your flats have different illumination levels (e.g. banding caused by the panel or camera).

     - Stretch "A" and "C" and examine them visually. If you can see the gradient it should be the same in both, in which case there may not be a problem - there is genuinely a gradient in your imaging train that the flat will correct. If the gradient is rotated by 90 degrees between A and C then the source of the problem is the EL panel itself, which isn't producing a flat field, or perhaps you haven't got the panel orthogonal and equally spaced between the two setups.

     - Verify this by subtracting the unstretched version of "C" from "A" using PixelMath or something similar. Again, to do this properly you'd probably want an expression like (A + 0.1) - C, so that noise doesn't clip loads of pixels to negative values that are then truncated to zero. All being well you should end up with a result frame with a median value close to 0.1 (using the normalised real range in the Statistics tool). If you end up with a value that is closer to zero or well above 0.1 then again your light source is not flat, or it is mechanically shifted with respect to the optical axis between A and C.

     If it's any comfort, I've always struggled to get decent flats using any type of device. The best ones have always been taken using a strongly overcast daylight sky with a diffuser on the end of the OTA, but it isn't convenient at all. This LED panel is the best I've managed with a device of any sort, but even so it is not as reliable and 100% clean as using daylight.
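     A rough Python equivalent of the A/B comparison and the pedestal subtraction, if you'd rather script it than use PixelMath (file names are placeholders, and the 65535 scaling assumes 16-bit data):

        import numpy as np
        from astropy.io import fits  # assumes astropy is installed

        # Hypothetical file names - substitute your own flats
        a = fits.getdata("flat_A.fits").astype(np.float64) / 65535.0
        b = fits.getdata("flat_B.fits").astype(np.float64) / 65535.0

        # Medians should agree to within a small fraction of full scale
        print("median A: %.4f  median B: %.4f" % (np.median(a), np.median(b)))

        # Equivalent of the PixelMath expression (A + 0.1) - B; the 0.1
        # pedestal stops noise clipping to zero. A well-matched pair of
        # flats gives a result with a median close to 0.1.
        diff = np.clip((a + 0.1) - b, 0.0, 1.0)
        print("median of (A + 0.1) - B: %.4f" % np.median(diff))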
  5. I can only speak for the earlier versions of the 1600; the newer "Pro" version has onboard RAM which overcomes these issues by speeding up the readout greatly. With the older versions of the 1600 there is a change in readout mode for exposures longer than 2 seconds (if using USB3) or longer than 5 seconds (if using USB2). Apparently this is to reduce the noise in longer exposure images, but ZWO never really explained how or why. Exposures below these limits effectively stop exposing at read-out, but longer exposures read out progressively, which means that pixels at the top of the sensor are exposed for longer than those at the bottom. For lights with relatively little signal this does not matter much, but for flats, where there is a bright light source, you can end up with a gradient of maybe 5% from top to bottom of the flat if the exposure is longer than the cut-off.

     The solution is either to set the brightness of your flat light source and/or a higher gain to achieve a short exposure below the cut-off (no gradient), or to do the opposite and use a dim source / lower gain to achieve a much longer exposure. You probably wouldn't be able to dim the panel enough for LRGB filters, but possibly for narrowband, so you might need to use some material over the panel to reduce output, e.g. sheets of white paper or ND film. The idea is that once the exposure is long enough, the unwanted exposure accumulated during the progressive read-out becomes a negligible fraction of the total, and so the gradient becomes a negligible percentage of the median brightness (rough arithmetic in the sketch below).
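     As a back-of-envelope check, assume the progressive read-out adds up to R seconds of extra exposure at the top of the sensor relative to the bottom (R here is an illustrative assumption, not a measured ASI1600 figure):

        # Assumed readout sweep time in seconds - illustrative only,
        # not a measured ASI1600 value.
        R = 0.1

        for t in (2.5, 5.0, 30.0):  # flat exposure times in seconds
            print("t = %5.1fs -> top-to-bottom gradient ~ %.1f%%" % (t, 100 * R / t))

        # A flat just over the 2s cut-off carries a ~4% gradient, while a
        # 30s flat is down to ~0.3% - which is why a dim source plus a
        # long exposure also works.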
  6. It's walking noise. You need to dither between subs by about 15-20 pixels (i.e. re-point the scope slightly for each exposure). If you're guiding then it's easy, as there are settings in the software to do it automatically. If you're just using a tracking mount with no guiding, then you're going to have to do it manually - maybe re-point every two subs or so - you just need to nudge the RA and Dec axes a small amount in random directions each time (a trivial sketch of the idea is below).

     The problem is that the camera's pixels all have slightly different responses to light, plus there are other sources of noise. Over the course of the series of exposures your tracking is not perfect, so the camera's field of view is gradually moving across the sky along the axis from top left to bottom right of the image (or vice-versa). When you stack, the noise pattern builds up into the streaks you see. When you dither, the noise pattern moves around randomly and so doesn't produce this objectionable pattern. Whilst calibrating (darks or a defect map) can help, as can noise reduction, you'll never eliminate the pattern unless you dither.
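     For the manual case, something like this gives you a random nudge plan in arcseconds (the image scale is an assumed example value, not from the post):

        import random

        image_scale = 2.0  # arcsec/pixel - assumed, substitute your own

        # Random 15-20 pixel offsets in random directions for each nudge
        for sub in range(1, 6):
            amount = random.uniform(15, 20) * image_scale
            ra = random.choice((-1, 1)) * amount
            dec = random.choice((-1, 1)) * amount
            print('sub %d: nudge RA %+.0f", Dec %+.0f"' % (sub, ra, dec))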
  7. Stars pointing along the line to the centre (outwards in the corners/edges) = too close. Stars pointing 90 degrees across the line to the centre = too far.

        \ | /
        - x -   = Too Close
        / | \

        / - - - \
        |   x   |   = Too Far
        \ - - - /
  8. a) There was a small error (which I corrected in a follow-up post): the guiding calculation should have read "(5.2µm / 400mm) x 206.3 = 2.67 arcseconds per pixel", i.e. the result was correct but I mistyped the camera pixel size. Not relevant to the question, but I just thought I'd clear it up.

     b) "1.5 times" was perhaps a poor choice of words. The imaging resolution is 1.5 times better than the guiding resolution, i.e. there are 1.5 times more arcseconds of sky on each guider pixel than on each imaging pixel. So pedantically you are correct, but hopefully the point is clear that your guiding resolution needs to be reasonably proportionate to your imaging resolution. Your guider could have a better resolution than your imager, e.g. piggybacking a widefield camera/lens on a long focal length SCT which it is guiding would be OK. Going the other way, the four times statement is a rule of thumb; your mileage may vary at worse guider:imager scale ratios, but if I were buying equipment it wouldn't make sense to go beyond that rule of thumb.
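     The pixel scale formula in one line of Python, if you want to check your own combination against the rule of thumb (the guider figures are the ones from the post; the imaging figures are assumed examples):

        def pixel_scale(pixel_um, focal_mm):
            """Image scale in arcsec/pixel: (pixel size / focal length) x 206.265."""
            return pixel_um / focal_mm * 206.265

        guiding = pixel_scale(5.2, 400)   # guider figures from the post
        imaging = pixel_scale(3.8, 480)   # assumed imaging camera/scope
        print('guiding %.2f"/px, imaging %.2f"/px, ratio %.1fx'
              % (guiding, imaging, guiding / imaging))
        # Keep the guider:imager ratio within ~4x as a rule of thumb.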
  9. Reflection of an external light source that changes as the scope tracks?
  10. If you're looking to compare noise, e.g. using patches of the background sky, use MAD (Median Absolute Deviation). It is a much more robust measure than StdDev. For example removing outliers (e.g. hot pixels) will have a significant effect on StdDev whereas it won't on MAD and thus allows you to compare the underlying noise distribution of the before and after images.
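     A quick synthetic demonstration of that robustness (all the numbers here are made up for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        sky = rng.normal(1000.0, 25.0, 100_000)  # synthetic background, sigma = 25 ADU
        hot = sky.copy()
        hot[:50] = 60000.0                       # 0.05% hot pixels

        def mad(x):
            return np.median(np.abs(x - np.median(x)))

        # For a normal distribution, sigma ~ 1.4826 * MAD
        for name, data in (("clean", sky), ("hot pixels", hot)):
            print("%-10s  std = %7.1f   1.4826*MAD = %5.1f"
                  % (name, data.std(), 1.4826 * mad(data)))
        # std balloons from ~25 to ~1300 with just 0.05% outliers;
        # the MAD-based estimate barely moves.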
  11. The manual is here: https://www.baader-planetarium.com/en/downloads/dl/file/id/175/product/2856/instruction_manual_for_all_baader_diamond_steeltrack_bds.pdf Read page 10. The pressure screw is set for 6 kg (imaging) and may need slackening slightly for visual. It is the single hex head in the centre of the drive base. Don't mess with the other screws.
  12. I don't know about the ASIAir, but I assume it uses the same process. Basically you take an image, rotate the mount 30 (Polemaster) or 90 (Sharpcap) degrees in RA and take a second image. The software identifies the corresponding stars in both images and uses geometry to work out where the centre of rotation is (i.e. where the RA axis is pointing on the sky). Using plate solving it works out where the NCP is in the image and directs you to adjust the mount until the NCP and centre of rotation match.
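     The geometry step is neat: for a pure rotation about an unknown centre, every star keeps its distance from that centre, which gives one linear equation per matched star. Here's a hedged numpy sketch of that idea (just the maths, not the actual Polemaster/Sharpcap code):

        import numpy as np

        def rotation_centre(before, after):
            # For a rotation about centre c: |p1 - c| = |p2 - c| for each
            # matched star, which rearranges to the linear equation
            # 2*(p2 - p1) . c = |p2|^2 - |p1|^2. Solve in least squares.
            p1 = np.asarray(before, float)
            p2 = np.asarray(after, float)
            A = 2.0 * (p2 - p1)
            b = (p2 ** 2).sum(axis=1) - (p1 ** 2).sum(axis=1)
            centre, *_ = np.linalg.lstsq(A, b, rcond=None)
            return centre

        # Synthetic check: rotate 20 fake stars 30 degrees about (512, 384)
        rng = np.random.default_rng(1)
        c_true = np.array([512.0, 384.0])
        stars = rng.uniform(0, 1024, (20, 2))
        th = np.radians(30)
        R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        print(rotation_centre(stars, (stars - c_true) @ R.T + c_true))  # ~ [512. 384.]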
  13. Again, yes, a polar scope can be misaligned and will cause issues. A Polemaster does NOT depend on being well aligned though. The software determines the centre of rotation from the series of images, so any misalignment is automatically taken into account (same for Sharpcap). That is why they are superior for alignment. Yes, you are correct that if you have a perfect polar alignment, an accurate home position and zero cone error, the main scope should be centred on the NCP. In fact the easiest way to check your cone error is to take an image of the NCP (once polar aligned and homed); plate solve it and you can see how much cone error you have. You can get away with zero (or close to zero) guiding with a high-end mount, e.g. a direct drive mount with no gears. For normal mounts you still need to guide, as the gear train will have backlash and periodic error in the order of arcminutes, and no amount of polar alignment will overcome that for exposures of more than a few tens of seconds.
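     Once you've plate solved that NCP frame, the cone error (under the post's assumption of perfect polar alignment and home position) is just the angular separation between the solved frame centre and the pole. With astropy, where the solved coordinates are made-up example values:

        from astropy.coordinates import SkyCoord
        import astropy.units as u

        ncp = SkyCoord(ra=0 * u.deg, dec=90 * u.deg)
        # Hypothetical plate-solved centre of the NCP frame
        solved = SkyCoord(ra=41.05 * u.deg, dec=89.78 * u.deg)
        print("cone error: %.1f arcmin" % ncp.separation(solved).arcmin)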
  14. You're confusing three different concepts:

     1. Polar alignment refers to the alignment of the RA axis of the mount so that it is parallel to the Earth's axis of rotation. Neither the mount's home position nor the scope's cone error have any bearing on this.

     - You can polar align using the mount's polar scope. This is subject to a number of cumulative sources of error: how well the polar scope and reticule are aligned to the mount's RA axis, and your ability to precisely align Polaris on the reticule over several steps to find the appropriate hour angle. Usually this can be done well enough for visual, but you're unlikely to get closer than a few arcminutes to the pole unless very experienced.

     - You can't really use the main scope for this job, as it is subject to cone error, and you do not need to centre on Polaris but rather position Polaris in the field of view such that the North Celestial Pole is centred (assuming no cone error).

     - Much easier is to use a Polemaster or something like Sharpcap. These don't rely on any precise alignment of anything. They take an image of the sky through the camera (dedicated, or through your main scope, respectively). You then rotate the mount approximately 90 degrees in RA and a second image is taken. By plate solving both images, the software can determine the centre of rotation in the image, and therefore exactly where the mount's RA axis is pointing (not the camera or scope - the axis). You then adjust the mount until the axis is pointing at the correct position in the sky, very precisely determined by repeated imaging and plate solving. You can achieve a few arcseconds of accuracy very easily, the limit being the mechanical stability of the mount and the coarseness of the alt-az adjuster threads.

     2. The home position is only important for mounts that don't have absolute encoders (such as many Skywatcher mounts). The mount controller (handset, EQMod, whatever) only knows where the mount is pointing by counting the number of steps the RA and Dec stepper motors have been told to move. This is unlike an absolute encoder, which will usually start from a defined index position (found automatically using a sensor), with the encoders counting the rotations of a given gear axis electronically and reporting the encoder position back to the controller, which converts it into RA and Dec. The problem with step-counting control is that you have to know where you are starting from. There is no absolute position as such, so the controller can only determine that a given axis has moved X steps (and thus degrees) clockwise or anticlockwise from wherever the mount was pointing when it was switched on. Hence the need for a home position, so that the mount can assume a given start location and go from there. If your home position is off a bit, you'll be correspondingly far off after your first slew to a target. You can of course correct this by moving the mount onto target and syncing the handset or EQMod, which updates its pointing model to reduce the error.

     3. Cone error also causes slews to be off target, since the handset will of course assume that the scope is pointing at the Celestial Pole (not Polaris) when in the home position. Again the pointing model can and will compensate for this as you do more corrections and syncs, but it is desirable to get a good home position and tune out any cone error mechanically to make life less difficult for the visual observer.

     For an imager, just use plate solving once you are polar aligned; there are numerous free plate solvers, most capture software supports one or more of them, and it literally saves hours when the software finds the target unaided. The challenge with having a set of home position marks made by the manufacturer is perhaps that you can rotate the dovetail clamp by 90 degrees to accommodate a side-by-side bar? I don't see any reason why they couldn't put fixed index marks at 90 degree intervals around both halves of the RA and Dec axes, to be honest; it would save 10 minutes with stickers and fine-tipped pens, but I suppose there would be additional work to adjust everything so the marks were properly aligned and represented the actual home position.
  15. This month's public lecture will be given by Ian Lauwerys on the subject of radar meteor detecting. We will look at the practicalities of building your own back-garden meteor detector, delve into the murky world of top-secret spy installations and cover some of the science behind meteor detecting. The talk will be presented live via our YouTube channel (details below) and will also be available for replay later. The downside is that you'll have to supply your own tea and biscuits, but on the plus side we'll be able to give a demonstration of a meteor detector in action. The stream will be available here at 8:00PM (UK time) on Wednesday 15th April 2020:
  16. I've finally managed to get round to building a remote control flat panel (suitable for use in SGPro and any other software that supports Alnitak panels). Total cost was less than £25. I used one of those LED tracing panels that you can find all over Amazon and eBay. Surprisingly, the first one I bought turned out to be a real winner - A4 sized, USB power, 60 LEDs and dimmable for £15. The key difference seems to be that many of these tracing panels have a grid of LEDs close behind a diffuser panel, which creates dark and light spots, or they are edge-illuminated from one side only. The one I got has two strips of 30 white LEDs along the long edges - internally it has a reflecting layer, a clear acrylic sheet which the LEDs shine into, and then a diffusing layer sandwiched on top. The light seems very uniform and I didn't need to add any further diffusing or similar elements to make it usable.

     I hacked out the original controller and hooked everything up to a 5V Arduino Nano clone (£5) and a MOSFET (£2), and used some Alnitak emulation code written by one of the SGPro developers. It is now mounted on my observatory wall and I just have to park the scope and SGPro can take care of the flats for me, for a fraction of the cost of the cheapest automated flat panel. At full brightness I can do flats for LRGB filters in a couple of hundredths of a second, and about half a second for narrowband filters. This is ideal for my ASI 1600MM-C (not Pro), since longer exposures use a readout mode that creates gradients in your flats. I used it this week and the results are as good as my DIY manual diffuser and cloudy sky method, with less risk of floating away in a garden under 2 inches of water!

     Anyway, a full write-up with shopping list, photos and diagrams is available here: https://www.blackwaterskies.co.uk/2020/03/cheap-diy-remote-controlled-flat-panel/
  17. It will be a lot smaller than you are probably expecting. To give you a rough idea, below are a couple of images of Venus and the Moon that I took a bit over a week ago - both with the same scope and camera, so effectively the same "magnification". Venus should look like a tiny 'half Moon' at the moment when you have it in focus, but as you can see it is way smaller than the actual Moon. It will become more like a crescent Moon over the coming weeks.
  18. Don't know, to be honest. All my ISS detections have been main beam, but obviously it's a much bigger target.
  19. Good luck finding a corresponding motor pulley. I could not find anything with the same belt profile, correct number of teeth and pre-bored to an appropriate diameter for available motor shafts. In the end I put my own pulley in place of the other knob. Very frustrating and believe me I really looked hard. Almost like Baader wanted you to use their own unit!
  20. Caught some Starlink echoes on the GRAVES meteor detector from here in Essex. They were coming across at approximately 10 second intervals for several minutes. Two were strong enough to trigger the meteor capture routine and a couple more coincided with a real meteor, but manually captured a bunch more that were below the threshold.
  21. LBN552 appears as a small orange patch to the lower left of this image, with dark dust and reflection nebulae from the borders of Cepheus, Ursa Minor and Draco. This was a bit of a rescue job, as I had a bunch of problems during initial acquisition - the mount lost power at one point, then auto-focus failed during the second half of the night (still dialling it in with the new scope). Polar alignment is way off as I think the pier has shifted a bit since I last used it, so lots of field rotation caused the outer 10% to be unusable once stacked. I need to make a better flats box too, so this is just a crop from the centre of the image, complete with dust bunnies. It needs four or five times the amount of data really, so the overall effect is a bit waxy at the moment, but still it is a pleasing start. Full version here: https://www.blackwaterskies.co.uk/2020/01/lbn552/

     Acquisition: William Optics GT81, WO Flat 6AIII 0.8x reducer, ZWO ASI1600MM-Cool, Atik EFW2, Astronomik LRGB 1.25″
     Mount/Guiding: Orion ST80, QHY 5, PHD2, Sky-Watcher NEQ6, EQMod, Sequence Generator Pro
     Processing: PixInsight 1.8.8
     Dates: Dec. 18th 2020
     Lights: L 60 x 120s, R 30 x 120s, G 30 x 120s, B 30 x 120s, Unity Gain, -15C
     Bias: No
     Darks: 100
     Flats: No
  22. I didn't get any response on this - maybe I was being too specific. Anyway, for reference, if anyone else has the same issue: having had time to do some trial and error, I can confirm that the reducer needs to be set to at least the 9.1mm mark on the scale for a GT81; the 7.1mm setting in the second diagram is definitely too close. The extra 2mm of outwards adjustment goes from visibly egg-shaped stars in the corners to visually acceptable ones. If anything I might try going a bit further out, as measurements using the FWHMEccentricity script in PixInsight suggest the stars are still somewhat stretched in the corners.

     Results with the original setting (7.1mm + 0.33mm to account for 1mm thick filter glass):

     Results with the 9.43mm setting (9.1mm + 0.33mm to account for 1mm filter glass):

     The stars are visibly rounder in the corners; measurements of eccentricity suggest a bit more spacing is needed, as ideally I would want to be below 0.45 across the entire field. Measuring with a digital caliper, and assuming a 1mm thick filter, you're aiming for approximately 90mm between the rear face of the fixed part of the reducer and the front face of the camera body (assuming the standard ASI 6.5mm sensor setback; there are a few models with a different value, so do check). You'd need to add a further 0.33mm for each mm of filter thickness, or reduce by 0.33mm if using an OSC with no filter (rough arithmetic in the sketch below).
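     The spacing sums as a quick sketch (the 90mm figure is the one quoted above and already allows for a 1mm filter; check your own camera's setback):

        def spacing_mm(filter_mm=1.0):
            # Reducer rear face to camera front face; the quoted 90 mm
            # already allows for 1 mm of filter glass, and each mm of
            # glass shifts focus back by ~0.33 mm.
            return 90.0 + (filter_mm - 1.0) * 0.33

        print("mono, 1 mm filters: %.2f mm" % spacing_mm(1.0))  # 90.00
        print("OSC, no filter:     %.2f mm" % spacing_mm(0.0))  # 89.67
        print("3 mm filter:        %.2f mm" % spacing_mm(3.0))  # 90.66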
  23. I can confirm the Revelations do have the lazy Susan bearing by default. One issue is making sure the base is level - if it is, the Az action is smooth, and the wind moving it like a sail can be a problem. If it is not level then it can stick quite badly. With mine the Alt motion is rather too free (it probably needs new hold-down springs as it is rather ancient). I use some magnetic welding weights to adjust the front-to-back balance depending on where I am pointing it.
  24. David, basic physics doesn't agree with your findings. The diagram in your linked post is absolutely correct - refraction through the filter glass moves the point at which the rays converge to focus physically further away from the reducer, so you have to add more spacing to compensate. Where you're getting confused is that 'optical distance' assumes the light travels in a straight path from the back of the reducer to the focus point - once you add a filter or any other optical element after the reducer, that assumption is broken, as the rays converge less steeply while passing through the filter. It may be that the backfocus specifications for your setup aren't correct/clear in the manufacturer's advice. I'm struggling with the same issue right now with my new WO setup; see my post below. Basically WO are quoting two different spacing figures for the same reducer/scope combination, and only one (or neither) of them can be right. I'm pretty sure I need to reduce the spacing, but I need some clear skies to test it out!
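     The shift is easy to put a number on: a plane-parallel slab of thickness t and refractive index n pushes the focal point back by roughly t * (1 - 1/n), which for typical n ~ 1.5 glass gives the familiar 'one third of the thickness' rule:

        def focus_shift_mm(thickness_mm, n=1.5):
            # Longitudinal focus shift caused by a plane-parallel slab
            # of glass in a converging beam (small-angle approximation)
            return thickness_mm * (1.0 - 1.0 / n)

        print("1 mm filter: +%.2f mm" % focus_shift_mm(1.0))  # ~0.33 mm
        print("3 mm filter: +%.2f mm" % focus_shift_mm(3.0))  # ~1.00 mm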
  25. The problem won't be whether you can get a few hours out of it at first, I'd think you will. The issue will be that the battery performance will degrade fairly rapidly with repeated charging/discharging cycles. Car batteries are made to provide a lot of current for a short period of time after which they are recharged whilst still mostly full. Deep-cycle batteries are designed to be used continuously over a much longer period and will withstand repeated heavy discharge/charge cycles.