Everything posted by rickwayne

  1. While darks can be scaled to different exposure times and temperatures, that process is not perfect, and you may as well just build yourself a set of them. With a cooled camera, you can do this at any time; folks who lack one often wind up spending imaging time on in-situ darks. So, you know, yay you! Personally I shoot them in sets of 100 because hey, why not? Once I've built the sequences, which takes a minute or two, the camera and computer will happily shoot for days to complete the library. Once I've averaged master darks out of the individual frames, I can throw the latter away. The complexity of managing all these calibration frames is a strong argument for settling on a set of parameters (gain, bias, temperature, exposure time) that work for you and then just reusing that as much as possible. Again, you may find that your flats work between filters. But dust happens -- I just reshoot every time. I have a sequence in Ekos for that too, so I just have to trigger it while I'm tearing down and packing up (and the 11x14 tracing pad makes a nice little lamp for that process, as I've said). Either flat dark frames or bias frames are required for the flats math to work out correctly. Anecdotes about super-short exposures leading to banding and other problems on certain CMOS cameras lead many to avoid bias frames; imagers I respect say they see no evidence of such problems, and I sure don't see any either (I have some small measure of self-respect, I guess). The advantage of bias frames over flat darks is that the latter must be the same exposure time as your flats; if you are shooting per-filter flats correctly, you're 99% certain to be shooting different exposures for OIII and luminance! One set of bias frames works for all exposures. Simple. Finally, as with all noise-reduction techniques, cooling the camera is a diminishing-returns game. You do you, but cooling to -20℃ gives a pretty marginal improvement at best over higher temperatures. Here I am flogging Robin Glover's talk yet again, but IMO everybody should give it a squint and figure out where the bang/buck peaks for them. CCD fellers likely run colder than us CMOS types, stipulated.
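For anyone who wants the flats math spelled out, here is a minimal sketch of the standard calibration arithmetic in plain NumPy; the function and array names are my own placeholders, not any particular stacking tool's API:

```python
import numpy as np

def master(frames):
    """Average a stack of frames (shape: N x H x W) into a single master frame."""
    return np.mean(frames, axis=0)

def calibrate(light, master_dark, master_flat, master_flat_dark):
    # Remove the thermal/offset signal recorded at the light frame's settings.
    light_c = light - master_dark
    # Remove the offset from the flat (a master bias works here too), then
    # normalise so that dividing by it corrects vignetting and dust shadows
    # without changing the overall brightness of the light frame.
    flat_c = master_flat - master_flat_dark
    flat_norm = flat_c / np.mean(flat_c)
    return light_c / flat_norm
```

Either way, all that subtraction is doing is estimating the offset that has to come out of the flat before it gets normalised; that is why a flat dark must match the flat's exposure, while one set of bias frames covers everything.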
  2. I recall some complaints about hard-to-calibrate color casts with early runs of the 294 but those seem to have been sorted. Imagers I really respect seem to do quite well with it. I recommend that you figure out your imaging scale and confirm that it sits well with your scope (https://astronomy.tools/calculators/ccd_suitability). Since you've already been using a big-sensor DSLR you don't have to worry too much about vignetting or field flatness, and indeed image scale will probably be fine too. If you have dark skies, one-shot color is pretty hard to beat for operational simplicity. If you are fighting light pollution, LRGB or narrowband offer significant advantages. It's perfectly possible to start doing RGB or narrowband pretty inexpensively by installing one filter at a time into the imaging train, or by using a manual filter wheel. (As if I should talk -- I just laid out US$300 for an 8-position electronic ZWO wheel because I was tired of dinking around with a temperamental 5-position electronic wheel.) I never thought I would like black and white astro imaging but now some shots I've done from terribly light-polluted sites using just a hydrogen-alpha filter occupy pride of place in my gallery.
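If you'd rather do the arithmetic behind that calculator yourself, the image-scale formula is a one-liner; the pixel size and focal length below are just illustrative numbers, so plug in your own:

```python
def image_scale(pixel_size_um, focal_length_mm):
    """Image scale in arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

# Illustrative numbers only: 4.63 um pixels on an 800 mm focal length.
print(image_scale(4.63, 800))   # ~1.19 arcsec/pixel
```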
  3. I know some DSLR astrophotographers who've tried it both ways and find that they're not really getting visible noise reduction from dark frames. Pentax K DSLRs are known for this; they just have low noise to begin with. That's another experiment you could try, actually -- how many dark frames does it take to actually improve your images? You can run it with an existing set; say you have 16 dark frames, try it with all of them, 8, 4, 2, 1, and none. Recall that dark frames are intended to subtract the thermal signal, so in principle you just need one. Of course there's a noise component to the dark frames themselves, so in practice you need to average, just as with light frames. But the tradeoff is integration time versus less noise in the thermal-signal reduction frames; at some point, your overall quality will be better if you spend the in-the-field dark-frame time just on shooting more lights. I live on Easy Street with a cooled astro cam, so I can set a temp, tell it "shoot 100 darks at these combinations of gain, temperature, and time" and walk away. Two days later I have 20 GB of darks to integrate, but at the end I have butter-smooth dark frames. Of course since I'm shooting with a 183 I absolutely HAVE to use dark frames, or all my images would display the 183's lovely amp glow pattern.
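If you want to put numbers on that experiment before burning sky time, the governing fact is that the random noise left in a master dark falls off roughly as one over the square root of the number of frames averaged. A toy sketch with synthetic frames (not any real stacking package):

```python
import numpy as np

# Synthetic frames only: real darks also carry a fixed thermal pattern (amp
# glow, hot pixels), which is the part you actually want to subtract --
# averaging more frames just reduces the random noise riding on top of it.
rng = np.random.default_rng(0)
frame_noise = 10.0                                   # arbitrary units
for n in (1, 2, 4, 8, 16, 100):
    frames = rng.normal(0.0, frame_noise, size=(n, 512, 512))
    master = frames.mean(axis=0)
    print(f"{n:3d} frames -> residual noise ~ {master.std():.2f}")
# Roughly 10, 7.1, 5.0, 3.5, 2.5, 1.0: the returns diminish quickly.
```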
  4. I don't think there's really a single document for this. The best I've found is Jasem's tutorial series on StellarMate; since that uses KStars/Ekos, a great deal of it will be apposite. Written info on the various modules: https://stellarmate.com/support/ekos.html. Video tutorials: https://www.youtube.com/channel/UC1x4aSvnUVtBNafANsD_Izg. Once you've been through it enough times it really does become pretty intuitive, but it's not exactly self-explanatory, is it? Using the simulators to "dry-run" things without having to connect and troubleshoot the hardware is a HUGE win. And of course there are several of us here who will be happy to answer questions.
  5. If it still doesn't work, they could try it on a closer terrestrial object too -- I hear of more problems with Newts not having enough inward travel than outward. If they can achieve focus on something close-up but not far away, that's the prob. If they have a DSLR available, they could take the lens off, hold the body up to the focuser, and see if they can achieve focus with that. Quick way to see what you've got without tubes or having to look off to the side at a computer screen.
  6. The 183 has a 1" sensor. The 1600 is 25% more expensive, so probably out of your budget; it has a 4/3 sensor. APS-C and full-frame sensors cost bigly. I am quite happy with my 183MM Pro. It does have pretty noticeable amp glow and those little pixels mean you have to work harder at controlling noise, but it's really quite the nice camera. I specifically wanted a finer image scale with my 336mm imaging train. Since I already had an APS-C DSLR, I also wanted to have a different choice of FOV for smaller objects. IIRC the 183's cooler spec says it will maintain 40 below ambient. That's probably OK for you -- 0℃ is going to knock out the lion's share of thermal noise -- but be aware that at that kind of differential, you will be sucking a lot of power into the cooler, and may very well be fighting fogging issues.
  7. Oh, well, a 1.25" nosepiece <sniffs>... 😁 I originally used that when I set up the OAG but I was having problems with rigidity. So I went all-screw-threads instead.
  8. NOICE. So nice I'll forgive you for not having gone uphill both ways in the snow for several years with nothing but poo images to show for it, like some of us 🙂 If you'd care for a somewhat less simple tweak...see if you can obtain a standalone version of starnet++ and get that running. It's a neural-net processor that yanks the stars from your image so that you can process that gorgeous nebulosity independently. Then, when that's looking quite spectacular, you can blend the stars back in. The idea is that when you stretch an image to map the dim tones onto most of the available tonal range, you inevitably lose saturation in the higher end. So by taking the stars away from that stretching process, you preserve their color, and also avoid enlarging them unduly. No criticism implied of your fine image, but you'll note that there's very little color in the stars. I bet you have plenty in the original data.
  9. You will note that their diagram does not include 20mm of filter wheel between the OAG and the imaging camera, though. Sure, you could put the wheel in front but then you're trying to guide through your filters; since I do a lot of narrowband, I want all those sweet, sweet photons getting to my guide camera.
  10. If you think about it, in order for the guide and imaging sensors to be in focus simultaneously, they have to be at the same distance from the objective. So they're the same distance from any point behind the objective too, including the front of whatever assembly you're moving from scope to scope. As I alluded to before, the fact that the pickoff prism is a few millimeters from the optical axis makes this not EXACTLY equivalent between all focal lengths. If you had a super-duper-short F/L, a line drawn from the objective's center to the pickoff prism would make a broader angle with the axis, and so the optical path to the prism would be longer than that from the center to the point below the prism on the optical axis. But for telescope focal lengths we're talking pretty acute angles and hence pretty teeny differences. Although, when I do the math, if you took it off a 400mm scope and put it on a 2000mm one, the discrepancy ought to be 280 microns if the pickoff is 15mm from the axis. Hey wait a minute, that's enough to make a noticeable difference in focus! Critical focus for a system in the F/5 range is maybe 10 microns thick. However, this assumes a spherical field of focus. If you've got a field flattener in the optical path, I think all these bets are off and moving the system between scopes should maintain critical focus.
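Here's the little bit of trigonometry behind those numbers, as a sketch; the 15 mm prism offset is the same assumption as above, and this ignores any flattener:

```python
import math

def extra_path_mm(focal_length_mm, prism_offset_mm=15.0):
    """Extra path from the objective centre to the off-axis prism compared with
    the on-axis distance: sqrt(f^2 + d^2) - f, roughly d^2 / (2 f)."""
    f, d = focal_length_mm, prism_offset_mm
    return math.hypot(f, d) - f

for fl_mm in (400, 2000):
    print(fl_mm, "mm:", round(extra_path_mm(fl_mm) * 1000), "microns")
# 400 mm: ~281 microns, 2000 mm: ~56 microns -- so swapping scopes shifts the
# best guide-camera focus by a couple of hundred microns in this simple model.
```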
  11. Per using the same rig on multiple optical tube assemblies: If I had more than one, I'd say "sure!". ZWO OAG: That's what I just acquired. The only tricky bit is using spacers to get the two distances close enough that the difference can be taken up by adjusting the guide camera on the OAG stalk. Here, for example, is my rig. The spacer between the OAG and the scope is spurious; it just happened that the only spare adapter I had between 42mm threads (the 1.25" nosepiece) and 48mm threads (the OAG) was a 20mm long spacer. The filter wheel is 20mm thick, the spacers between the wheel and the imaging camera soak up the equivalent distance to the 120MC up top. Theoretically I could eliminate one ring each -- the ones directly attached to each camera -- but when I have the flattener/reducer installed, I need the extra backfocus. (The OAG screws directly onto the back of the FF/R in that case, eliminating the 20mm spacer and the nosepiece attached to it.) Bit of a Rube Goldberg contrivance, to be sure. But it's the hardware I happened to have. There are a pair of small thumbscrews at right angles to the OAG's optical path. One lets you first adjust the prism "stalk" to dip to the correct depth into the main optical path, so it's lit without shadowing the main sensor. Once you've secured that nice and tight, there's another that lets the guide camera slide up and down on the stalk. You set up the scope and focus the imaging sensor on an object -- helps to do it in daylight -- and then use the second thumbscrew to slide the guide cam up and down until it's in sharp focus too. Tighten everything down, and you should be done unless you bash something hard enough to displace it.
  12. As a total n00b to OAGs myself, I found the setup somewhat fiddly, as I expected, but in the end it wasn't hard. Basically you want to make two distances approximately the same: at right angles to the main optical axis, from the pickoff prism of the OAG to the guide camera sensor plane; and along the optical axis, from the pickoff's position back to your imaging camera's sensor. (Yes yes, the hypotenuse of that right triangle won't be identical to the leg length. I said approximately. (-: ) You may need to drum up some spacers in order to get one or the other in the correct range. Once you've got it within 10-20mm, you can set the rig up in daylight, focus the imaging camera, and then adjust the OAG in and out until it focuses. A helical focuser would be sweet but it's really not a requirement; in my setup, it's not even possible. The distance to whatever daylit object you're focusing on doesn't matter; it doesn't have to be at infinity. In addition to the above-cited advantages, the OAG gives you a cleaner, lighter rig whose lateral center of gravity won't change as much as the mount tracks in RA.
  13. Shooting from Live View: Unnecessary. Frame and focus with it, then save battery, heat, and (with my DSLR at least) extra mirror flapping. Mirror lockup: Really depends on your particular setup and the length of your exposures. If you've a solid mount and are doing minutes at a time, the excess vibrations from the mirror are going to have very little effect on end-game image quality. Wobbly tripod and 2-second exposures, not so much.
  14. I suspect that the only way to really find out will be to experiment. Which, c'mon, will be fun in itself! I would not expect 240-second exposures off a tracker, certainly not to begin with. 120 seconds with good polar alignment, definitely. I'm sure folks will chime in with their experiences to the contrary but frankly it's a pretty good night for me with my equatorial mount and guiding in both RA and DEC axes if I'm keeping a high percentage of anything longer than 5 minutes. While it will be super-tempting to try out your beautiful new telescope, you might want to just try some widefield work with a camera lens to start with. That will give you practice in setup/teardown, polar alignment, image acquisition, and processing, and probably some early successes to feed the astrophoto beast within. That said, don't try for Astronomy Picture of The Day out of the gate. Give yourself permission to play around. If you shoot ten each of 30 seconds' exposure, 1, 2, and 5 minutes, how many of each are good enough by your standards? Except for the camera's read noise, it's a pretty linear thing -- total integration time is total integration time, whether you have lots of short exposures or a few long ones. If your keeper rate is 25% for 5-minute exposures and 85% for 30-second ones, you might wind up with less overall noise at the short end of the spectrum. If you haven't already picked up a copy of The Deep-Sky Imaging Primer or Making Every Photon Count, I heartily recommend that you do so posthaste. The sort of basic knowledge these books will impart serves as an underpinning for a whole galaxy (sorry) of choices that you make every time you set up and shoot. And welcome. Boy, are you in for some challenges!
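To make that keeper-rate tradeoff concrete, here's the back-of-the-envelope arithmetic with the hypothetical percentages above (it ignores download and dither gaps, and of course read noise):

```python
def kept_minutes_per_hour(sub_length_s, keeper_rate):
    """Kept integration (minutes) from one hour of shooting, ignoring
    download/dither gaps and per-sub read noise."""
    subs_per_hour = 3600 / sub_length_s
    return subs_per_hour * keeper_rate * sub_length_s / 60

print(kept_minutes_per_hour(300, 0.25))   # 5-minute subs at 25% kept -> 15.0 min
print(kept_minutes_per_hour(30, 0.85))    # 30-second subs at 85% kept -> 51.0 min
```

Read noise is the thing that eventually tips the balance back toward longer subs, which is exactly the tradeoff those books walk you through.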
  15. +1 on the satellite. You'd expect a meteor's trail to be more persistent, that successive frames would show the trail longer and longer as the ionized air behind it continues to glow. A satellite is a point* light source** that only appears as a trail during a time exposure***. *Yes, not really a point. Pretty close though. **Yes, a satellite is not a source of light, it's reflecting the Sun's light. ***Yes, they're ALL time exposures, but if the object moves less than a pixel's worth during an exposure we'll call that a freeze-frame, eh?
  16. The main scope, if I'm interpreting you correctly, is just serving as a place to bolt your camera on. So you're equivalent to a ballhead mounted on a tracker, which works fine no matter which way you point the camera. And if you're autoguiding through the main scope, it will still work OK. Think of it this way, the autoguiding, if absolutely perfect, will cause the guide star to stay exactly centered in the scope's FOV all night. And a perfectly-tracking mount without autoguiding will do...exactly the same thing. Now, any field rotation due to polar misalignment will certainly be less well-compensated by guiding on a star far away from the camera FOV. But guiding will not in itself induce field rotation.
  17. The histogram advice is good. If you must err, think about what you're imaging and optimize for that. Want deeply saturated star colors? Stay away from the right edge of the graph. Want to bring up faint nebulosity? Beware the left. Can you get all your data in between? Good on ya!
  18. A-HAHAHAHAHAHHAHHHAHHAAAHAHA HA HA HA HA <wheeze> Oh yeah, that's what we all said too, thousands of dollars or pounds ago.
  19. I have a 120MC and it's a fine little guide camera, but between the noise and the low pixel count, it's awfully hard to get a good-looking shot with it.
  20. WRT the color balance, how are you analyzing the images as they come off the camera? I assume you're using a one-shot color camera of some sort; what do your histograms look like, and what file format are you recording to?
  21. Plate solving gives you godlike powers. I weep when I think of all the cumulative hours I spent raging because I couldn't get the target in view after an hour of trying. And I blush when it stops working for some reason and I discover that instead of a sophisticated astronomer, I'm really just a newb with a lot of expensive toys. 🙂
  22. Consider affixing your hub to the scope, rather than to the mount, and right over the DEC drive if possible. That way you only have one cable in motion during guiding, and its lever arm is minimized. (Two cables if it's a powered hub, of course). I know that the DEC axis isn't slewing continually like the RA is, but you'll get guiding corrections to it and every little bit helps. If you tie all the cables together it's probably pretty similar mechanically, but IMO you may as well go for the optimal cable-drag solution.
  23. I used to go for -20 but then I listened to Dr. Glover. I know I bang on and on about his talks but it's hard to argue with his conclusions. YouTube video of Glover's talk
  24. Bias and darks tend to be "Meh, good enough is good enough". The thing about flats is that they capture position-dependent information* and so if you're not pretty scrupulous about how you shoot them, the stretching process can amplify small errors into visible discontinuities. "Can", I say. YMMV, as Alacant's apparently does. Vignetting and the position of dust bunny shadows anywhere other than right on the sensor are going to vary with focus position as well as camera position. I shoot a set of flats for every filter with my mono camera, because experience shows that the filters, as clean as I try to keep them, are just not identical. One issue to keep in mind with twilight flats is that your 10" mirror is a lot better at seeing stars than you are! So you might wind up with little dark holes in your final images where the computer divided a star's image in the flat out of your lights. Hence the advice to use a diffuser even then. ---- *Of course, so do bias and dark frames, but amp glow, hot pixels, differential response, and the like are tied directly to the sensor and so show up consistently in the same spot.
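To see why that amplification bites, here's a toy illustration with made-up numbers: a 1% residual flat error in the sky background is invisible in the linear data, but once the black point is pushed up close to the background to dig out faint signal, it turns into an easily visible step:

```python
def display_level(x, black=0.0095, white=0.013):
    """Map linear data (0..1) to an 8-bit display with an aggressive black point."""
    return round(255 * min(max((x - black) / (white - black), 0.0), 1.0))

background = 0.0100              # dim sky background after calibration
flawed     = background * 1.01   # the same background with a 1% flat error

print(display_level(background))  # ~36
print(display_level(flawed))      # ~44 -- an eight-level seam, easy to see
```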