Everything posted by ollypenrice

  1. It's a Meade LX10 which was the entry level version of this instrument, but there's nothing wrong with that. Indeed it has advantages for visual use. You set the base to level with the bubble level, point the wedge towards north, set the altitude scale to your latitude and switch on the tracking. You then use the finder to locate your objects. With a long focal length like this you have a small field of view, so the earth's rotation will drive targets out of view pretty quickly without the motor drive. This was my first 'proper' telescope nearly thirty years ago. Olly
  2. We can argue over the niceties of the data but it is screamingly obvious that a guide trace like this is going to take some beating. Mine are both very similar. On top of that, my Mesus are now 9 years old and they have been performing like this, every time I go out, throughout that time. This is in commercial use and they have had no maintenance at all in that time. (And, for what it's worth, both were bought second hand, though one was here from new.) There are three other Mesus here and none of those has gone wrong either. (The same cannot be said for 10 Micron, iOptron or Takahashi mounts based here, all of which have misbehaved on occasion.) The only other mount from which I've experienced the same mechanical reliability is the Avalon Linear. Honourable mention must go to my pair of EQ sixes, though, which soldier on impressively. What is more, the Mesu, though not cheap, is probably the least expensive mount in its payload-precision class and by some margin. Olly
  3. Elements is a very lightweight version of Ps and would not, in my view, suffice for proper AP processing. Anything from CS2 onwards will do fine. I have CS3 and the up to date rental version on my PC but I often default to CS3 because my fingers are more familiar with the shortcuts etc. Olly
  4. I think the thread was better off without cameras since we've already done to death the misconceptions which abound regarding F ratio and, worse, that abomination of a term, 'crop factor.' The matter of visual surface brightness is very involved, even more counter-intuitive than it is involved, and is not helped by the introduction of pixels. Already we've seen the claim that Andromeda is magnified in astrophotography. Really? There must be some big chips out there!! Olly
  5. So having darks made with light getting in wouldn't over-subtract values from the lights?
  6. It might be a software matter as vlaiv says, but it might be simpler: how did you make the darks? I did some experimenting and found a significant difference between darks a) made with the camera off the rig and with the metal chip cover screwed on and b) made on the scope (a refractor) with the lens cap on. The ones done properly, off-scope and under the chip cover, were measurably a little darker. The lighter darks would doubtless clip the lights. I've read posts in which Newt users have done darks on the scope and really don't see this working when the bottom of the tube can admit light. Olly
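The arithmetic behind this point can be sketched with made-up numbers: if stray light leaks into the dark frames, the extra signal gets subtracted from the lights along with the genuine dark current, eating into the real sky signal. A minimal numpy sketch (all ADU values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# All values hypothetical (ADU), just to show the arithmetic
true_dark = rng.normal(100.0, 5.0, (4, 4))   # dark current + bias pattern
sky = 50.0                                   # real signal in the light frame
light = true_dark + sky

leak = 30.0                                  # stray light reaching the 'dark'
leaky_dark = true_dark + leak

good = light - true_dark    # recovers the full 50 ADU of sky signal
bad = light - leaky_dark    # only 20 ADU left: the leak was subtracted too

print(good.mean(), bad.mean())
```

The contaminated dark over-subtracts by exactly the leaked amount, which is why faint background detail gets clipped.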
  7. I'm not surprised. I'm the son of a perception theorist and, as a result, have kept well away from the matter! It is a minefield. I'm not sure about this. 'It looks brighter,' is all we need to know as observers if that means, 'It now looks bright enough to see whereas previously it didn't.' It is perceived brightness in which we are all interested. If that is in conflict with some kind of measured brightness (and I rather doubt that it is) then we can happily ignore measured brightness. I suppose another matter might be the darkness of the background sky in an optical system. The darker the better, so that might be an important player in comparing two instruments. (I scent the unsavoury whiff of the refractor-reflector debate in the air!) Olly
  8. Well yes, the statement, 'It looks brighter but isn't,' would not send its author to the top of the class! 🤣 Olly
  9. Thanks for the heads up. I get very mixed results from StarNet and also find that either it or my PC cannot handle the images from the ASI2600 I'm now processing. These are around 140 MB. Does anyone know what kind of processing demands StarXTerminator makes? I would spend £45 on it in a heartbeat if I were sure it would run. I know it's quite a price but de-starring is so powerful when it works. I don't want to publish starless images, I want to de-star and re-star with smaller stars, which is dead easy in Ps. Put the starless image on the bottom, paste the linear original on the top, set the blend mode to lighten, and stretch the linear image till the stars appear at the level you like. This will improve star colour intensity by stretching them less, preserve their relative sizes as captured and let you keep them small. And it saves you the hassle of masking and star-reducing as well. Olly
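The layer recipe above can be sketched in numpy: the 'lighten' blend mode simply keeps the brighter of the two pixels, so a mildly stretched linear layer contributes only its stars while the starless layer supplies the background. The patch values are made up, and a simple gamma stands in for the Curves stretch:

```python
import numpy as np

# Tiny hypothetical luminance patches, scaled 0-1
starless = np.full((2, 2), 0.30)        # stretched, star-free layer (bottom)
linear = np.array([[0.05, 0.90],
                   [0.05, 0.05]])       # unstretched original, one bright star (top)

# A simple gamma stands in for stretching the linear layer in Curves
stretched_stars = linear ** 0.7

# 'Lighten' blend mode keeps the brighter pixel from either layer
result = np.maximum(starless, stretched_stars)
```

The star pixel wins the blend while the faint background pixels of the linear layer stay below the starless layer, so the background is untouched.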
  10. I'd be dragging the conversation backwards but my eye tells me that big scopes do make things brighter, even allowing for exit pupil similarities. I'm another who has been shot down in flames over this but I have never been convinced that the effect can be explained just by increased surface area. I've always been left with a lingering doubt so Vlaiv's original post came as something of a relief. Olly
  11. I think that OIII, being on the green-blue border, should be within the 'safe' performance end of the scope's optical correction but refocus will be necessary. I'd also mention that manufacturers seem to struggle to make good OIII filters. I have had two dreadful ones from Astronomik and two bad ones from Baader. Also, it took Baader a very long time to bring their tighter bandpass OIII to the market, presumably because of production difficulties. You may have a bad filter. Olly
  12. This is all very interesting because I have never found the standard arguments about surface brightness to agree with experience. Please ignore me and carry on! Olly
  13. I don't know what they drink in Serbia but I want to try it... Olly
  14. Agreed. The thing is that, in quite a short time, you'll get your captures up to the point at which they won't get any better without changing your kit and we all have an upper limit on what we're willing or able to buy. However, your processing can get better and better and better, ad infinitum - but you need software for that. It's part of your imaging observatory. You need software with which you feel comfortable and (I really think this is under-discussed) which you enjoy using. A happy imager is a creative and inspired imager. Olly
  15. My feeling is that the proper Hubble palette is perfectly valid as a colour map of gas distribution. That's why it was invented. SII is mapped to red, Ha to green and OIII to blue, so it's a full tricolour system. A geology map might map different rock types to different colours. There is no resemblance intended between the natural colour of the gas or rock and its colour in the image. 'Fake' Hubble Palettes don't have this validity but people like making images that way because they like the way they look, which is fine by me, though I don't do it myself. If they are bicolour they are not, in truth, Hubble palette at all, they just end up with similar colours. In truth I think you can tell bicolour from tricolour anyway because there is a more restricted gamut in bicolour. The other way to exploit NB data is to use it to enhance the colour channel in which it really belongs, so Ha to red and OIII on the green-blue border. Olly
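The tricolour mapping described here is just a channel assignment, which a couple of lines of numpy make explicit (frame values hypothetical):

```python
import numpy as np

def hubble_palette(sii, ha, oiii):
    """Tricolour mapping: SII -> red, Ha -> green, OIII -> blue."""
    return np.stack([sii, ha, oiii], axis=-1)

# Hypothetical 2x2 narrowband frames, scaled 0-1
sii = np.full((2, 2), 0.2)
ha = np.full((2, 2), 0.8)
oiii = np.full((2, 2), 0.4)

rgb = hubble_palette(sii, ha, oiii)   # shape (2, 2, 3)
```

Each output channel carries exactly one emission line, which is what makes the result a valid map of gas distribution rather than a natural-colour image.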
  16. That's not quite what I do. My first step, after an edge crop, is to run the linear stack through DBE or ABE in Pixinsight. This gets the background right in terms of flatness, freedom from colour gradients and, usually, colour balance - so that part of 'setting the background' is not done in Ps but in PI. The part that I set in Ps is just the background brightness value (between 21 and 23 depending on target and data.) If, at this point, I'm not at colour parity in the background I'll make a small adjustment (it's only ever small) probably using the dead simple colour balance slider set to shadows. Many tutorial-makers in assorted programs continue to stretch the full histogram beyond this point and bring in the black point again. I used to do this but now prefer to pin the background and stretch only above it, so as not to raise the background noise level. I do this using Curves but you can use any stretch you like under a mask for the background if you can make the right mask. I prefer just to pin in Curves. (It's more complicated if you have dusty areas of interest below the background sky brightness. That requires a different approach.) ABE and DBE are not the only gradient removing tools. There are now lots of other good ones but I just use what I know. Olly
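The 'pin the background and stretch only above it' idea can be sketched numerically. This is a made-up illustration, not Olly's actual curve: a straight-line gain above the pivot stands in for the hand-shaped Curves adjustment, and the pivot is the background value of roughly 21-23 (on a 0-255 scale) mentioned above:

```python
import numpy as np

def stretch_above(values, pivot=23 / 255, gain=2.0):
    """Pin everything at or below the pivot; stretch only above it.

    A straight-line gain stands in for the hand-shaped Curves
    adjustment, so the background noise floor is not raised.
    """
    x = np.asarray(values, dtype=float)
    out = np.where(x <= pivot, x, pivot + (x - pivot) * gain)
    return np.clip(out, 0.0, 1.0)
```

A background pixel below the pivot passes through unchanged, while brighter object pixels are lifted, which is the point of pinning.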
  17. The ST80 is very cheap - I'm tempted to say disgustingly cheap 😁 - and gives a wide field of view. It makes a demon guidescope and finder for a large Dob, as Peter said. If you want something to do its job rather better, try and find an elderly TeleVue Genesis, but it won't come at ST80 prices however old it is. I don't know the contrast booster but no erecting prism I've ever tried was satisfactory for astronomy. One thing to try with your ST80 would be to stand it vertically on a hard, solid surface with the lens down and tap the tube lightly with a wooden spoon or suchlike. Do this for a few minutes. On the ST range it can settle the two lens elements into better collimation. Olly
  18. Very nicely explained. I had assumed that this would be the case and it was confirmed yesterday when I looked at ST for the first time. The initial stretch, and subsequent iterative 'local' stretches, describe the stages I take in Photoshop when developing the histogram, though (because Ps is not astro-specific) I do it through manual intervention by either shaping the curve by hand or by stretching through masks or blending different stretches in layers. The difference between ST and Ps, if I have this right, lies not so much in what can be done to the histogram but in the means by which it is done, ST having an astro-specific user interface designed to anticipate interventions which will be productive. Would that be fair? My objection is to Alacant's implication that the best you can do in Ps is 'stretch and hope.' This is quite simply wrong but, for it to be wrong in practice, the imager does need to think through all aspects of the stretch, both global and local. I've come to like starting with a blank sheet, so to speak, when stretching but I'll enjoy working through ST's astro-structured approach. Thanks for your excellent reply. Maybe! 👹😁 Olly
  19. It might be worth distinguishing between remoteness and automation. They are not the same thing. I host six instruments whose owners are in other countries, so that's 'remote.' Then there are levels of automation. At one extreme you might set up ten years' worth of imaging runs, set off to walk around the world, come back at the end of it and start processing... If you are only as remote as your garden you need very little remoteness and, strictly speaking, no automation since you could pop out to focus with a Bahtinov mask if you felt like it. Or you could use a motor focuser controlled from within the house by yourself while looking at the FWHM values on your PC. Or you could use a software program to do the FWHM reading and motorized adjustment for you. Remote-automated can be as extreme as you choose. Olly
  20. That's my understanding as well. CMOS bias are unpredictable. Let's think when you might use bias with CCD: 1) As darks for flats (AKA flat darks), which is fine for CCD but not for CMOS. 2) Instead of darks, particularly if using a bad pixel map as well. (I like this with my Kodak full frame CCD but it won't work for CMOS.) 3) As a reference frame for stacking software which can match darks of the wrong exposure time to lights. (I don't see this working with CMOS bias either.) The bias signal is contained within the dark frame in any case so it must not be double-subtracted. In a nutshell, why would you want bias at all? I can't think of a reason with a CMOS camera but I'm new to them. I think I've commented on this flat elsewhere but what I find odd is the absence of vignetting in just the lower left corner. That makes me very suspicious. I'm not surprised by the red-green gradient because we see that in our lights from this camera, though in our case it's rotated 90 degrees. The image is now looking good with just a hint of the vertical banding which longer exposures might overwhelm, as Wim says. Olly
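The double-subtraction warning is simple arithmetic: a dark frame already contains the bias signal, so subtracting a separate bias frame as well removes it twice. A sketch with hypothetical ADU levels:

```python
# Hypothetical ADU levels, just to show why bias must not be removed twice
bias_level = 500.0
dark_current = 20.0
sky = 1000.0

light = sky + dark_current + bias_level   # what the sensor records
dark = dark_current + bias_level          # a dark frame already contains bias

correct = light - dark                    # sky signal recovered exactly
double_sub = light - dark - bias_level    # bias removed twice: sky clipped low
```

With these numbers the correct calibration returns the full 1000 ADU of sky, while the double subtraction knocks 500 ADU off every pixel, dragging faint signal towards or below zero.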
  21. So far we have made no darks with the 2600 because it's a bit involved with the RASA 8. (We will need to take off the camera to do them, which means we'll also need to know our flats exposure times in order to do flat darks at the same time.) Since not using darks seems to give clean results we haven't made it a priority to try them, and the F2 of the RASA makes dust bunnies so far out of focus as not to matter, apparently. I'm not saying you don't need darks because we haven't yet tried them, but some other users of this camera, whose images are good, are not using them. I'd certainly try a stack without. Olly
  22. The FITS file was corrupt for me, too. What do you do with the bias from this camera? I'd have thought they would be redundant. There's a lot of vertical banding in the image, which comes as a surprise, and may arise from calibration. It can be reduced using Noel's Actions (Now Pro Digital Astronomy Tools) 'Reduce Vertical Banding,' but killing it at source would be much better. Olly
  23. Probably A for me, though I'd really like somewhere just a little more towards B. Sharpening is just visible in A. Always a tough one... Olly