Everything posted by ollypenrice

  1. I don't know why you say this. None of the 'is' features detracts in any way from the scope's visual performance, so far as I can see. Am I wrong on this? Yes, the visual observer gets refinements they don't need, but they don't do any harm, either. And, yes, these refinements add to the initial price. However, in terms of resale value, I think they will more than pay for themselves since they open up the imaging market to the vendor, and this makes for a much wider customer base. I had an original Genesis designed purely for visual. Wonderful scope, but it did have some false colour. The colour correction improved on successive models to make them far better for imaging and still a little better for visual. A win-win, no? In the 19th and early 20th centuries the camera's preferential sensitivity in red did mean that manufacturers made visually-corrected and photographically-corrected versions of a lens, and you sometimes see both mounted in parallel on classic old scopes. However, in modern designs I'm not aware of any compromises along these lines (which doesn't mean there aren't any; I'll gladly stand corrected). My TEC 140 is a visually-optimized design with best focus in green, but its correction is so good that it's also a stunning imaging scope. Olly
  2. Back on spikes in Newts for a sec... I don't see them, other than very exceptionally. For many years I had a 20 inch with a normal 4-vane spider and hardly ever saw them. I'm surprised that this seems to be unusual. Olly
  3. I'll guess: counterweights horizontal, bubble level horizontal, camera properly orientated. I do it myself! Marcel Drechsler uses a 4-vane 'spider' to route his cables on a RASA 11. Olly
  4. I'm helping Peter, a robotic shed client, set up his rather tasty Avalon M-uno/Baby Q and we came across the problem of the heavy focus motor off to one side. Dual rig owners are used to the need to balance in this extra axis but solo setups can often ignore it. However, most focus motors cause an offset to one side of the OTA. Using bits we had, we came up with this solution: A mini saddle plate is bolted to the clamshell at right angles to the optical axis and the guidescope is attached to its dovetail at right angles to the normal way. This lets us move the guide scope, on its dovetail, right and left to offset the focus motor's weight, giving us perfect balance in any orientation. By good fortune we found two holes to allow the guidescope to bolt rigidly to its dovetail but, at right angles to its design orientation, the mini saddle plate is located by only one bolt at present. It's a hefty bolt, though, and we're not worried. @FLO This might be a product idea if done properly. Ideally a bracket for the top of the tube rings or clamshell would carry an arm allowing the guidescope to be slid then clamped off axis. It would be a bonus if the guider could also slide backwards and forwards to fine tune balance in Dec. (This can also give control over the position of the main scope in the main saddle plate. Moving the guidescope lets you balance the main scope further up or further down its saddle plate in order to clear piers, observatory roofs, etc.) It would be nice to buy a proper version of our lash-up with holes in the right places, etc. Olly
  5. Bravo. That's the best DSLR result on this that I've ever seen and a good result by any standard. And, as Tomato says, the OIII coming through is particularly nice. Do you use a large dither between subs? I'm not a DSLR astrophotographer but Tony Hallas recommends a large one of 12 pixels or so. Olly
  6. I think Photoshop is the world's leading graphics program for good reasons. Of course it isn't astro-specific and, like you, I cherry-pick some PI routines, but Photoshop's genius lies in its user interface, which turns its underlying mathematics into metaphors from the days of darkroom and printing. I like this because I am making a picture which I am going to look at with my eyes. In Ps I can use my eyes to do this. I can make adjustments in real time and see them in front of me. This seems logical to me and suits the way my mind works. (This contrasts with Pixinsight whose user interface turns its underlying mathematics into visible mathematics.) You can get comparable results in either program, the question being, 'Which do you prefer?' I like being in the Photoshop environment and I do AP for enjoyment, so that's my choice. The key thing in processing is to do different things to different parts of the image and both programs allow that. As for 32 bit, I struggled to see any difference when we all moved from 8 bit to 16, quite honestly. If 32 bit is essential, can we see the proof in the form of two images side by side? Olly
  7. My feeling, like Vlaiv's, is that everything causing bunnies will rotate with the camera and so be unaffected. If your objective's beam is on-axis then vignetting will be so nearly symmetrical as to be unaffected by rotation as well. However, my other feeling is that it is counter-productive to rotate after a flip. You are losing an opportunity to redistribute the residual camera noise and so reduce it. Or so I suppose. You could put one flat as a layer on top of another in Ps and set the blend mode to 'difference' to see how different they are. I use flats for months at a time. Never a problem. Olly
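The 'difference' blend check described above can also be sketched outside Photoshop. This is a minimal numpy version (the function name and the synthetic flats are illustrative, not from any particular package): it normalises each flat to its own mean so the comparison shows structural change rather than overall brightness drift, then takes the per-pixel absolute difference, which is what the 'difference' blend mode does.

```python
import numpy as np

def flat_difference(flat_a: np.ndarray, flat_b: np.ndarray) -> np.ndarray:
    """Photoshop-style 'difference' blend between two flats.

    Each flat is divided by its own mean first, so a plain change in
    exposure or illumination level doesn't register; only changes in
    the dust/vignetting pattern show up.
    """
    a = flat_a / flat_a.mean()
    b = flat_b / flat_b.mean()
    return np.abs(a - b)

# Illustrative check with synthetic flats: an old flat and a new one
# that differs only by noise should give a near-zero difference map.
rng = np.random.default_rng(0)
old_flat = 1000.0 + rng.normal(0.0, 2.0, (100, 100))
new_flat = old_flat + rng.normal(0.0, 2.0, (100, 100))
diff = flat_difference(old_flat, new_flat)
```

If the result is essentially flat noise, the months-old flats are still doing their job; a visible dust doughnut in the difference map means it's time to reshoot them.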
  8. That's absolutely gorgeous. The wide mosaic makes the object fresh in our eyes again. Great job on Rigel, too, since it often overwhelms attempts at this field of view. The other stars are excellent, too, both in size and colour. Olly
  9. Darks can't see out of the scope so there's no point in dithering those! As Steve says above, if using a fixed light source for flats you don't need to dither. If you're at all worried that your light source is uneven you could move it around a little or rotate regularly during the capture sequence but that's unlikely to be necessary. However, picking up stars is indeed a risk if doing twilight flats. Leaving a few seconds between flat exposures will, if the mount is undriven, give you natural dither as the stars move. It will also stop your chip heating up which can sometimes happen with flats. Olly
  10. Most capture software, if not all, will move the wheel for you. It can also refocus if you have a motorized focuser. But a word on this: it's often asserted that you need to refocus between filters with a mono system and not with an OSC. This would be true if the filters themselves caused any change in focus but usually they don't. The focus usually changes because the optics are not perfectly colour corrected, meaning red, green and blue have slightly different focal lengths. When this is the case it would be more accurate to say that, with a mono system, you can correct the focus between colours whereas with an OSC you can't. In truth it usually makes little difference. I focus in luminance and then run an LRGB, LRGB sequence without any problems. This is the same focus compromise as you make with an OSC camera, in reality. I work with both mono and OSC. Modern CMOS OSC is better than old CCD OSC for some reason but, when the moon comes out, the OSC is frustrated. Olly
  11. One of the technical guys on Admin, maybe? Grant, perhaps? I'd double check that the same copy was sent to both sites, though. Is your colour profile sRGB? Olly
  12. I wondered if this might be an illusion created by the colour in the rest of the page but, no, your SGL screen shot is indeed darker. Here I copied one onto the other (Astrobin over SGL) and deleted a square from the middle of the AB image to reveal the SGL. I've no idea why this should be happening and I don't think it happens to my own images because I'm very fussy about background brightness. Olly
  13. I do use them on a co-owned robotic rig but I wouldn't want to frame up just based on a plate solution. Olly
  14. Of the million things you might want to adjust, start by aligning your camera along RA and Dec. It's dead easy: eyeball the camera in the scope then take a 10 sec exposure while slewing slowly in one axis. You'll get star trails. Are they parallel with one or other side of the chip? Repeat and rotate till they are. Mark your gear (Tippex?) so that the camera always goes back in the same way. Once you've done that you'll know where your tracking errors are, in RA or in Dec or in both. You have to start somewhere, so I suggest you start there. Olly
  15. I don't think the Mesu/Argonavis is astonishingly accurate. It is astonishingly consistent, certainly. But will you take a set of exposures based only on the plate solution given by your software? This would be unthinkable for me. I want to see what the whole frame looks like, get a feel for what's going on in the picture. That's how Tom and I decided there was more to IC 59 and 63 than was usually imaged and produced our 'Breaking Wave' image. We thought, 'Hang on, what's going on at the bottom of this frame? Does the nebulosity stop there? Let's nudge the mount down a bit and have another look.' https://www.astrobin.com/full/295716/B/ Olly
  16. I just put its name into my Argonavis handset and hit GoTo. And there it is, gone to! However, I then look around the borders of the image on the lookout for an attractive composition so far as field stars are concerned, or how a nebula sits within the frame. Any photograph intended to be good to look at needs a favourable framing in this way. I always want to avoid a half-included bright star on an edge, for instance. If the target is Ha dominated, as many are, then I'll try to frame up to include a contrasting blue star when possible. This stops the image from looking as if it has been wrongly colour balanced in favour of red. (Peter Shah's recent Helix used such a star to great effect, I thought.) In order to re-frame quickly on a subsequent night I'll bring up the screen reticule and note the position of a reference star relative to some obvious point on the reticule. (You can actually save three star positions in the capture program but I can never remember how to do it and, since re-framing is a non-problem, I just do it this way.) If my target isn't in the handset's database (unusual) I drive to it using the mount's digital setting circles and, again, make a note of the co-ordinates I've finally chosen. My main mount is not ASCOM compliant so plate solving isn't an option. I could change it for something else but, in ten years, it has never dropped a sub. Would you change it??? 😁 Olly
  17. Super mount, Mark, and also a super helpful video. Olly
  18. I like the image. What's interesting is that your colour variation (redder round the outside, greener towards the middle) is in good agreement with what we see in comparing Ha and OIII filters on this target. This suggests that the filter is doing a good job of not damaging the data it's passing. Olly
  19. 🤣 That would be me. 🤣 In fact I don't think it's bad, I think it's good unless it causes the plate solver to disconnect their brain and throw it in the garden pond, as it sometimes does. We've seen people post images of some bit of random sky and say, 'I can't see M51 in this image even though I spent four hours collecting data on it,' or, 'I used plate solving to shoot these two mosaic panels but the software refuses to join them,' when it is perfectly obvious that they don't overlap, whatever the plate solution tells them. Plate solving is fine but inevitably it introduces an extra piece of software, sometimes two. Sometimes there are eight rigs imaging at my place and what goes wrong most frequently? Software. And in second place? Software. So use it by all means but don't waste time faffing around getting it to work under a nice clear sky. Olly
  20. Gosh, no replies to this excellent image. Apart from the elongation it's very good indeed. Good colour, good light background sky with no clipping. There is a dodgy Photoshop fix for that kind of elongation. 😁 It's very simple: Make a copy layer. Change the blend mode to Darken. Go to Filter-Other-Offset and move one layer relative to the other along the line of elongation. In this case it's easy because it's vertical. I went for a 3 vertical pixel offset as shown... Before: After: If the elongation is not parallel with the chip you can rotate the image till it is, follow the procedure then undo the rotation. I should imagine that this will also be possible in other layers-based graphics programs if you don't have Ps. Again, what a good image. Olly
  21. An alternative view: Guiding? Just do it. It's the life blood of astrophotography, it's easy and it makes everything else easier. I was given this advice when I started and it was a minority opinion then and probably still is, but I think it was good advice and I repeat it here. What do you need? Optics of some kind. This could simply mean an old camera lens or a tatty old achromatic refractor which you could pick up at a car boot sale, or whatever, for next to nothing. You need to bolt it down firmly onto your imaging scope but you do not need expensive guide rings. These became obsolete years ago. Then you need a guide camera. This could be an old second hand Atik16ic, for instance, or some equivalent camera. Really anything with an ST4 port will do. One of these, in an elderly Skywatcher ST80, has been guiding our very expensive rig from the outset and works perfectly. It doesn't need to be pretty. Ours isn't. Don't complicate things by bringing in third party software and pulse guiding via ASCOM, just plug in an ST4 cable to connect camera to mount and guide in PHD2, which is free. Imaging without guiding is like trying to sign your name with someone nudging your elbow. Olly
  22. I've been cleaning fluorite Taks for years with Baader fluid. I don't know if the exposed element is fluorite, though. I think it's at the back, from memory. Olly