Everything posted by ollypenrice

  1. ...and it does interesting variations of the trick! However you want the trick doing, with whatever data you have, it does it. Olly
  2. Don't we all. Why is your Samyang fast? Because it has a fast F ratio? No, that is the wrong way to look at it. It is fast because it works at low resolution. It puts the light from a lot of sky onto each pixel. That's great, but it means that you cannot capture small details. The way to speed up your present imaging systems is to bin 2x2 or 3x3 and, that way, you will put the light from more sky onto what are in effect fewer but bigger pixels. But you'll lose resolution, you say? Yup, that's right. There is no alternative. That's what your Samyang is doing. The bottom line is this: you have a certain camera with 'x' sized pixels. You want to image at a certain resolution so that means you need 'y' focal length. This focal length and pixel size will give you the image scale you want, making M33 a certain size on your chip. Fine. Now you want it faster. There are only two ways. You use a bigger objective lens of the same focal length or you use a dual/multiple rig. A focal reducer will simply have the same effect as binning. The same number of M33 photons landing on fewer pixels. Forget F ratio unless you are talking about a fixed focal length. Is a 1000mm FL scope of F5 four times faster than a 1000mm FL scope of F10? It certainly is. Is a 1000mm F10 scope reduced to F5 four times faster? Is an apple a banana? What does a 1000mm FL scope have to do with a 500mm FL scope? Nothing. Why compare them? Olly
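The arithmetic behind this is easy to check. A minimal sketch in Python, using the standard plate-scale formula (206.265 × pixel size in µm ÷ focal length in mm) and a hypothetical 3.76 µm camera; the specific pixel size and focal lengths are illustrative only:

```python
def image_scale(pixel_um: float, focal_mm: float) -> float:
    """Arcseconds of sky per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# A hypothetical 3.76 um pixel behind a 135 mm lens:
print(round(image_scale(3.76, 135), 2))    # coarse scale: lots of sky per pixel

# The same pixel behind a 1000 mm scope:
print(round(image_scale(3.76, 1000), 2))   # fine scale: little sky per pixel

# Binning 2x2 makes an effective 7.52 um pixel, doubling the arcseconds per
# pixel and so quadrupling the sky area (hence photons) per output pixel:
print(round(image_scale(7.52, 1000), 2))
```

Note that the F ratio never appears: for a fixed camera, only focal length sets the image scale, and only aperture at that focal length sets the photon count per pixel.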
  3. A one litre engine can be made to produce 50 brake horsepower, 75 bhp, 100 bhp, or even (yes, seriously) 200 bhp. Which do you think will do 500,000 miles? If you look at the threads on here, Takahashi are struggling to get reliable results out of F5 on the FSQ106 at £5-6K. What more do you need to know? Nobody has ever made 'fast and cheap' at serious aperture. Olly
  4. Carbon stars are utterly staggering to observe. 'Red giants' are orange. Carbon stars are red. 😁lly
  5. Registar certainly does it, yes. That's why it can co-register vastly different focal lengths for composite imaging. My poor brain can only cope with small doses of Pixinsight but I thought it could do the same, not that I've tried. What the Pixinsight team should study is just how easy it is in Registar. (Open two images. Click to register one to the other. Done.) Which image you use as master is up to you. A NB imager wanting to combine Ha, OIII and SII while using Ha as luminance at the end would surely align onto Ha. As you suggest, the logical choice for LRGB might be L. Olly
  6. You make a very good case. The idea that the focal lengths per colour are sufficiently different to account for your colour asymmetry in stars can be tested using Registar. Registar not only aligns images when co-registering them, it also resizes them. In theory this ought to rectify the problem as defined by your analysis. If you don't have Registar I'd be happy to co-register your RGB files and return them to you once done. Just Dropbox them to my regular email, which I'll send by PM. Olly Edit, regarding occasionally difficult colour registration, it is very, very rare for me in either of the Tak FSQs or TEC140s I use regularly. When it does happen I use Registar to co-register the channels and that sorts it. For some reason I've found it best to make red the reference channel both in AA and Registar. Instinct told me it should be green but red seems the most reliable. I don't know why.
  7. APP and PI can do the resizing and re-aligning, though I do it in Registar. However, they do it 'mechanically' so they won't do a good job if there is a big difference in resolution. There's a technique called 'composite imaging' in which areas of interest are imaged at significantly higher resolution and blended into a lower resolution widefield. In this technique a purely mechanical combining of data will produce glaringly obvious differences in star size, star count and residual noise patterns. When I make a composite image I produce a resized, re-aligned high-res image in Registar but only combine it with the widefield in Ps where I can blend it in so as to make the result seamless. The Pixinsight team dismiss this as 'painting,' a taunt which I happily ignore! Here's an example of 0.9 arcseconds per pixel combined with a 3.5 arcsecs per pixel widefield. https://www.astrobin.com/full/321869/0/ If you already have PI the cheapest way for you would be to fathom out how to do it. That may not be easy, though! As the others have said, APP would probably be easier. Registar is not cheap but I regard it as close to perfect and love it. Olly
  8. As a simple soul I'd first want to see what happened if someone tried perfectly parallel tube rings machined out of one lump. I think we could all accept a slight misalignment but would it be slight? Olly
  9. What we don't know without testing is how well aligned two scopes would be if placed in perfectly parallel tube rings. Making perfectly parallel tube rings in the CNC age wouldn't be difficult - but would it be a solution? Olly
  10. That's not my point. What I'm saying is that you are likely to experience diff flexure on any multiple high resolution rig so that such flexure will defeat any mount, however good. Since lots of high spec mounts can deliver 0.3"RMS, which will be effectively perfect down to any image scale supportable by the seeing, you would probably get more accurate guiding by having three such mounts than by having an 'absolute' mount carrying three scopes, two of them mounted on alignment devices. (I think it very probable that the alignment devices are the source of most slave scope flexure but I can't be dead certain of that. Good as it is, my Cassady T-GAD may well be the source of the occasional flexure I experience.) With dual rigs now popular it might be the time for an entirely new approach to alignment devices. I wonder if 'multiple tube rings' might be the answer? That's to say a set of parallel tube rings accurately CNC machined from a single billet. Would this give well aligned images? That would depend on the alignment of the lens cells and on the focusers. It would be worth a try. I think it might work but I can't say I'd be too surprised to find a misalignment of hundreds of pixels. The dual rig will not come of age until the perfect mounting hardware is designed. Olly
  11. With that budget you'd be far better off with three mounts! 😁lly
  12. Why do you feel the smaller sensor would require better guiding? My guess (which may be wrong) is that you're thinking the smaller sensor makes you more 'zoomed in,' so giving a more detailed image. It absolutely does not. What matters is the difference in pixel size. In a given scope the system's resolution is controlled by the pixel size. Smaller pixels give higher resolution and so need better seeing and better guiding. If I'm missing your point, here, I apologize. Olly
  13. Well, first consider a single OTA carrying a guidescope. This can still suffer from differential flexure but if the guidescope is mounted on the OTA at least some sources of potential flexure are eliminated. (Anything arising from the dovetail or tube rings will be guided out.) Using an AOG will guide out all forms of flexure. Now add a second, or 'slave' scope. This must remain precisely parallel with the guided scope. It will also be mounted on an adjustable alignment device, which adds to the rigidity challenge. (You might be very surprised by how misaligned two scopes are when just bolted together onto a common dovetail. They do need careful alignment.) I can think of two reasons why the twin Taks always worked and the twin TECs sometimes trail slightly on one side. Firstly the Tak OTAs are very short and, therefore, easy for the mounting hardware to control. The much longer TEC OTAs have a far greater moment with heavy lenses at both extreme ends. (The TEC flatteners are real lumps of glass.) The second, and more likely reason in my view, is that the Tak resolution of 3.5"PP can absorb small amounts of diff flexure while the TEC system, at over 3.5 times that resolution, cannot. One thing's for sure: many dual rig users experience flexure which is hard to trace. I know this because I receive quite a few PMs and emails on the matter and some of our guests run dual rigs as well. I had one such email just a few days ago. I've thought the same as MarkAR with regard to the construction of a dedicated twin/multi scope from the ground up. A manufacturer who came up with this might steal a march in the marketplace. However, the manufacturer would still have to achieve precise alignment of the lens cells. Maybe that would be routine, maybe not. I don't know. And then, alas, the focusers have to be independent so one very real source of minor flexure would remain. One ray of financial hope 😁: if you have a scope per filter you don't need any filterwheels. 
Less to go wrong, as well. Olly
  14. We don't actually know where you are, feverdreamer1. I'm in SE France and have a minimum of nearly 4 hours of astronomical darkness on the shortest nights (moon not counting.) The sky is absolutely stuffed with DSOs for all focal lengths at the moment but as you go north, so you lose the darkness. Bear this in mind. Olly
  15. Multiple refractor rigs are very nice. I was in on this near the beginning, some years ago, with a dual Tak 106 rig and now have a dual TEC140. However, they are not without their vices. My own experience is that at low resolution they are easy. The dual Taks/11 meg Atiks just worked out of the box but at 3.5 arcsecs per pixel. It ain't so easy with the dual TEC140 and others, like Peter Goodhew, have reported issues with two high res refractors. Our dual TECs are working at 0.9 and 1.1 "PP and the slave scope (the one not carrying the guider) sometimes trails slightly. Peter found this as well and solved it with an active optics unit on the slave scope. Where does the flexure come from? Postal order for anyone who can tell me. We have a very expensive and very good Cassady T-GAD alignment device to get the slave aligned with the main. You can no longer buy these but you'd need two of whatever replacement you found (FLO do a good one). I was offered a third Tak/11 meg camera by a person who liked what we were doing and wanted to join in, but for me two was enough. There is even a school of thought which says it is, in the end, easier to use two (or three) mounts. £££££££££. 😁 Low res widefield, the multi refractor is a winner. At high res the flexure gets difficult. Olly
  16. Second hand items are always in stock... Olly
  17. You could rotate the panel during the run of flats. It wouldn't matter if it were moving during a single sub. Just keep turning it steadily during the run. That will blur out any unevenness. Even with a good panel I tend to turn it once or twice for good measure. Olly
  18. Quite near the top of the mountain, in fact! 😁 Let's get back to basics, first in theory: the resolution of detail involves 1) the resolution of the optics which, when they're diffraction limited, is proportional to aperture. A 127mm scope has a theoretical maximum resolution of 0.91 arcseconds. It can distinguish details 0.91 arcseconds apart. Details closer together than that cannot be distinguished. There is no way round this rule. 2) The resolution in terms of the camera-and-lens as a system. The unit of interest is arcseconds per pixel. (How many arcseconds of sky land on one pixel?) If the system is working at 2 arcsecs per pixel any two points less than 2 arcsecs apart land on the same pixel so will not be distinguished. Whichever of these values is the worst in terms of resolution will define the best resolution of which the system will be capable. Now for the practice. Photographic systems with high resolving power soon fall foul of other limiting factors which override their theoretical resolution. These include: 1) The seeing. The atmosphere distorts the incoming beam, sometimes by very large amounts. It isn't unusual to be limited to 2 arcseconds or worse. Sometimes it's also better than that. Local conditions determine the limit. 2) The guiding. Divide your imaging pixel scale by 2 for a rough idea of the guiding precision you'll need to support that resolution. A good (not an average but a good) mass production mount can usually manage a guide error of about 0.5 arcsecs. That means that you are, in reality, limited to 1.0 arcsec per pixel for imaging. So even if we ignore the optical limitations of 127mm we find you are trying to image at 0.5 arcsecs per pixel. This would need incredibly good seeing (unlikely) and a guide error of 0.25 arcsecs. (Also unlikely! Our excellent Mesu 200s deliver around 0.3" at their best.) Enter another key term, empty resolution. 
If you open a high quality image in Photoshop (one which was correctly taken at the limit of the system) and look at it full size on your screen you are seeing all it can give. You can hit Ctrl+ and make it bigger. Or hit Ctrl+ again and make it bigger still... but it contains no new resolution. Doing this is a waste of time. The detail just isn't there. And so it is with imaging at unrealistic pixel scales like 0.5"PP. The same information will be landing on several pixels. You'd be better off with bigger pixels which would 'fill' faster. One way to do this is to bin pixels 2x2 or 3x3 where this is possible. Olly
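The two limits described above are easy to put into numbers. A minimal sketch, using the empirical Dawes formula for a diffraction-limited aperture and the divide-by-two guiding rule of thumb from the post:

```python
def dawes_limit(aperture_mm: float) -> float:
    """Empirical Dawes resolution limit in arcseconds: 116 / aperture (mm)."""
    return 116.0 / aperture_mm

def required_guide_rms(pixel_scale: float) -> float:
    """Rule of thumb: guide RMS error of about half the imaging pixel scale."""
    return pixel_scale / 2.0

print(round(dawes_limit(127), 2))   # the 0.91" figure quoted for a 127 mm scope
print(required_guide_rms(1.0))      # a 1"/pixel system wants ~0.5" RMS guiding
```

Run both for your own system and take the worse of the optical limit, the seeing, and twice your guide RMS as the realistic resolution floor.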
  19. We did the 35 panel Orion manually. You can use a planetarium which can show the framing you get from a single sub and map out your frames on that, noting the central co-ordinates of each one to use as your 'Go To' point for that panel. The Mk 1 Mesu is not ASCOM compatible so using software wasn't an option. It's perfectly possible that way, though getting it automated via software and plate solving would no doubt be easier. One system not mentioned so far is EQ Mosaic with EQ Mod for those who use it. Tom introduced a handy numbering system using the term Row for right-left and Column for up-down, so a panel might be R2 C6, for example. It's very logical and keeps your mind in working order!! Olly
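For anyone mapping panels manually like this, the Go To centres and R/C labels can be generated with a short script. A flat-sky sketch only: the function name, arguments and overlap fraction are all illustrative, and it ignores the cos(Dec) convergence of RA lines, so treat it as a rough planning aid well away from the pole:

```python
def mosaic_centres(ra0, dec0, fov_ra, fov_dec, n_ra, n_dec, overlap=0.1):
    """Centre coordinates (degrees) for each panel of an n_ra x n_dec mosaic.

    ra0, dec0  -- centre of the whole mosaic (degrees)
    fov_ra/dec -- field of view of a single panel (degrees)
    overlap    -- fractional overlap between neighbouring panels
    """
    step_ra = fov_ra * (1 - overlap)
    step_dec = fov_dec * (1 - overlap)
    panels = {}
    for i in range(n_ra):       # 'Row' runs right-left, i.e. along RA
        for j in range(n_dec):  # 'Column' runs up-down, i.e. along Dec
            ra = ra0 + (i - (n_ra - 1) / 2) * step_ra
            dec = dec0 + (j - (n_dec - 1) / 2) * step_dec
            panels[f"R{i + 1} C{j + 1}"] = (round(ra, 3), round(dec, 3))
    return panels

# A 2x2 test grid centred on RA 84, Dec 0 with 2 x 1.5 degree panels:
print(mosaic_centres(84.0, 0.0, 2.0, 1.5, 2, 2))
```

Each dictionary entry is one planetarium Go To target, labelled with Tom's R/C convention.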
  20. This is really great news when you think about it. Any good news is quadruple good at the moment. Congratulations on such a positive post at a time when it's needed. Just take care with that working week, Steve. Nobody is indestructible. FLO deserves to succeed. All that positive feedback has real origins. Olly
  21. ^^ Sound as a pound. Let's not forget how many APODs and Pictures of the Day have been shot with the Kodak 11 meg and Tak 106 at 3.5 arcseconds per pixel. Olly
  22. There is another way to orientate a camera. First use a spirit level to set your counterweight bar to the horizontal and lock it. Then set the OTA to horizontal as well. (By eye will do for this.) Finally use the spirit level to set the camera to horizontal. DSLRs and square bodied astro cameras make this easy. You just hold the level against the flat part of the camera. If you have a round bodied camera you can use the method I first described to set the camera angle to orthogonality with RA and Dec and then set the mount axes to horizontal. Put a piece of masking tape along the back of the camera, hold the spirit level against it horizontally and draw a line on the masking tape. You can then hold the level against this line at any time in the future. This method is not quite as precise as the camera slew method but it's not at all bad. Olly Edit: this only works for refractors, Maks, SCTs etc which have a linear light path from objective to camera.
  23. Can you at least leave the camera on the scope and take the lot off the mount as one unit? That means one set of flats will be OK, almost certainly, and your camera angle won't change. Another good practice is to have your camera orientated along RA and Dec. Take a (say) 10 second sub while slewing at a slow slew speed on one axis. You'll get trailed stars. The angle of the trail is the angle of the camera. Get the trails horizontal or vertical and then you can easily replicate that camera angle. A random camera angle is very time consuming to replicate. Don't do it! Olly
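The trail-angle trick lends itself to a quick calculation: measure the pixel coordinates of the two ends of one trailed star in the slewed sub and the camera's angle relative to the slewed axis falls out directly. A small sketch, with made-up coordinates for illustration:

```python
import math

def camera_angle(x1, y1, x2, y2):
    """Angle (degrees) of a star trail from its endpoint pixel coordinates.

    0 degrees means the trail runs along the sensor's x axis, i.e. the
    camera is square-on to the axis you slewed in.
    """
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

# Hypothetical trail measured in a 10 s sub while slewing in RA:
print(round(camera_angle(100, 200, 900, 214), 1))  # small residual rotation
```

Rotate the camera, repeat the slewed sub, and iterate until the angle is as close to 0 or 90 degrees as you need.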
  24. Great Ha result, Carole. The Squid is indeed horribly faint. When processing, have you tried de-starring the OIII then giving it an unwholesome, manic, full-on madman stretch? It would need de-noising on a grand scale but might allow you to pull it into the green and blue channels. I promise you, my Squid was not pretty as an OIII image. Olly
  25. Good and varied star colour sets off the Ha feature really well. Olly