Everything posted by ollypenrice

  1. You have it in one. I've been running imaging workshops, giving tutorials, demos, whatever you want to call them, for years, and I've been trying to improve my images for more years than that. The one thing I insist on is learning to look at the image. When you have learned how to look at it, you can see what needs attention. We can all stare at an image and fail to see that it's green. Or clipped. Or over-saturated. Or just ruddy hideous!!! To combat this, I built certain rules into my workflow:
     1) Measure the background sky at regular intervals. Ps lets me see its brightness and its colour balance in RGB at a click. I want it between 20 and 23 and equal in R, G and B.
     2) Keep looking at the histogram. Is it clipped?
     3) Keep doing hard test stretches. You're not going to keep these stretches, but is there any faint stuff that you've failed to drag out?
     4) Take a break and look at other astrophotos you like, but not of the object you're working on at the moment. You are not trying to replicate existing images but to extract the best from your data.
     I'd also suggest looking at good astrophotos and asking yourself what's good about them. This will make you a better critical observer. Olly
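A minimal sketch of the background and clipping checks in rules 1 and 2 above, assuming a stretched image held as an 8-bit NumPy array (the function names, the patch size and the 1.5-level tolerance are illustrative, not from any particular package):

```python
import numpy as np

def measure_background(img, box=50, x=0, y=0):
    """Report mean R, G, B in a small patch of empty sky (illustrative only).

    img : HxWx3 uint8 array, already stretched (0-255 scale as quoted above).
    box : size of the sample square; x, y : its top-left corner.
    """
    patch = img[y:y + box, x:x + box].reshape(-1, 3).astype(float)
    r, g, b = patch.mean(axis=0)
    print(f"Background  R={r:.1f}  G={g:.1f}  B={b:.1f}")
    # Target quoted above: roughly 20-23 and equal in all three channels.
    if not (20 <= min(r, g, b) and max(r, g, b) <= 23):
        print("Background outside the 20-23 window - revisit the stretch.")
    if max(r, g, b) - min(r, g, b) > 1.5:      # tolerance is a guess
        print("Channels unequal - check colour balance.")

def clipped_fraction(img):
    """Rough histogram check: what fraction of pixels sit at the extremes."""
    flat = img.reshape(-1)
    return (flat == 0).mean(), (flat == 255).mean()
```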
  2. An interesting thought. It hadn't occurred to me that the CMOS camera might have altered the demographics of astrophotography but I think you might well be right. Olly
  3. This is never simple for my 70 year old eyes but, getting up for a call of nature at six this morning, I went outside to find a truly sensational sky. The stars were ablaze. I picked up a pair of 8x42 bins and had a cruise. The Pleiades nebulosity was easy, an unmistakable glow around and within the cluster. It was an inspiring little tour and unexpected. Recent weather has been astonishing, too, with clear blue skies and temperatures hitting 32C in the afternoons. Olly
  4. It's a rigmarole but, if you are going to de-star the main image, you could shoot a star layer quickly (short subs and not too many of them) using a much larger overlap which would allow you to discard the bad edge stars. There's not much noise in stars and you want to keep them small anyway, so total integration could be short. Joining them into a mosaic ought to be painless because the joints only show in the background signal and that won't be applied from a star layer. Warning: I suggest this without having tried it! Olly
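A rough sketch of how a separately shot star layer could be put back over a starless mosaic, using a standard screen blend in NumPy; the random arrays simply stand in for real frames, and applying this to the mosaic idea above is my own illustration, not something from the post:

```python
import numpy as np

def screen_blend(starless, stars):
    """Screen-blend a star layer over a starless image.

    Both inputs are float arrays scaled 0-1. Screen: 1 - (1-a)(1-b),
    so mosaic joints in the background stay untouched wherever the
    star layer is black.
    """
    return 1.0 - (1.0 - starless) * (1.0 - stars)

# Illustrative usage with random data standing in for real frames:
starless_mosaic = np.random.rand(100, 100, 3) * 0.1
star_layer = np.zeros_like(starless_mosaic)
combined = screen_blend(starless_mosaic, star_layer)
```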
  5. The discussion has become rather academic since the cooled CMOS chip arrived, but that's quite recent. Regarding CCD, though, I was a firm advocate of the long sub. I routinely shot 30-minute subs in Ha and, occasionally, in luminance. In practical comparisons I was entirely satisfied that, when looking for faint signal, the long subs were the winners. When looking for the outer glow around M31, I found it only when I switched to 30-minute luminance subs. I know several very experienced imagers who agree with this and some who don't. With an uncooled DSLR, the build-up of thermal noise over long subs is a variable affecting the decision. In any event, I would urge you to experiment since, had I not tried it for myself, I might have believed that 10 x 15 minutes in CCD equalled 5 x 30 minutes. I found that it didn't. The first gave a smoother result, the second a deeper one. Using CMOS in very fast optics we just use 3-minute subs across the board.
  6. Quite simply, this isn't commonly accepted and has been the subject of endless debate. The relationship between 'more and shorter' and 'fewer and longer' depends on the camera technology in question. What are now 'old technology' CCD cameras had significant read noise so you got one dose of this noise per exposure. This made 'fewer and longer' advantageous because you got fewer doses that way. Modern CMOS cameras have remarkably low read noise so the penalty of read noise per exposure is reduced to very little. In any event, signal must overwhelm noise and the 'zero noise' camera does not exist, so that gives us a bottom line. We must also remember that modern cameras have high pixel counts and that a serious image might need twelve hours. Now Alan, above, images with an F2 RASA - as do I. We can go deep in three hours. Turn that into twelve hours and ask yourself how many subs your computer can calibrate and stack. Olly
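A back-of-envelope model of the read-noise argument running through the two posts above: each sub contributes one dose of read noise, so splitting the same total time into more subs costs more with a noisy CCD than with a low-read-noise CMOS. The sky and object rates and the read-noise figures below are invented purely for illustration:

```python
import math

def stack_snr(signal_rate, sky_rate, read_noise, sub_len_s, n_subs):
    """Very simplified SNR for a stack: shot noise from object + sky,
    plus one dose of read noise per sub (dark current ignored)."""
    t = sub_len_s * n_subs                      # total integration, seconds
    signal = signal_rate * t
    noise = math.sqrt(signal + sky_rate * t + n_subs * read_noise ** 2)
    return signal / noise

# The same 150 minutes split two ways, with CCD-like read noise (~9 e-)
# versus CMOS-like (~1.5 e-). Rates are in electrons per second, made up.
for read_noise, label in [(9.0, "CCD"), (1.5, "CMOS")]:
    long_subs = stack_snr(0.02, 0.5, read_noise, 30 * 60, 5)    # 5 x 30 min
    short_subs = stack_snr(0.02, 0.5, read_noise, 15 * 60, 10)  # 10 x 15 min
    print(f"{label}: 5x30min SNR={long_subs:.2f}   10x15min SNR={short_subs:.2f}")
```

With the CCD-like figure the longer subs come out measurably ahead; with the CMOS-like figure the two splits are effectively identical, which is the point made above.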
  7. Imaging at this speed is simply a different world, as you'll see! Olly
  8. Why easier to balance? I think it's the opposite. You can put a guidescope on a sliding dovetail to move it fore and aft for fine tuning in Dec without having to struggle with the main scope. You can also position the guidescope off-centre as a way of getting dynamic balance right. The usual instructions on how to balance assume a system which is symmetrical in balance side-to-side but, with focus motors etc. these days, it won't be. I like being able to use guidescope position as a way of reaching balance. It can also be a way of moving your OTA up or down in the clamshell, in cases where hitting the tripod/ground or hitting the observatory roof is the issue. Olly
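The dynamic-balance point above comes down to simple moments about the optical axis: an off-centre guidescope can cancel the moment of an off-centre focus motor. The masses and offsets below are invented, purely to show the arithmetic:

```python
# Moment balance about the optical axis: mass x offset on each side
# (g appears on both sides and cancels).
motor_mass_kg = 0.6        # invented focus-motor mass
motor_offset_m = 0.05      # invented sideways offset of that mass
guidescope_mass_kg = 1.2   # invented guidescope + guide camera mass

# Offset needed for the guidescope on the opposite side:
required_offset_m = motor_mass_kg * motor_offset_m / guidescope_mass_kg
print(f"Shift the guidescope ~{required_offset_m * 100:.1f} cm off-centre")
```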
  9. I think there are good reasons not to do so... I say this because I've swapped out so many of the things on the robotics setups I host. Olly
  10. I'd only use an OAG if I had to, and that would be with a reflector and the likelihood of mirror flop. (This is a rather extreme term for small amounts of mirror movement.) Given that many very high-end setups at long FL run on direct drive mounts without autoguiding, flexure can't be that much of an issue. With a C11 I'd err towards an OAG. On small refractors, I'd consider an OAG to be bonkers, quite honestly. You disturb it every time you do anything to the imaging scope. On my refractor rigs I would say that I never touched my guidescope-guide cam once in ten years, other than to scrape spider webs off the lens - when I remembered. Olly
  11. It does suit some people and not others. The reason I'm more at ease in Ps is that I like layers. I can copy an image onto a new layer, modify it and then decide where I do and don't want to keep the modification. I don't have to struggle to make a mask that covers just what I want it to cover, I can just erase small areas of one of the images. Olly
  12. I'll just back up the praise for Photoshop for post-processing. If it's going out of fashion for others, it ain't going out of fashion for me. I find PixInsight wantonly obscure but, if you don't have it, you have a problem: it's the only platform (I think) which supports Russell Croman's BlurXTerminator, and that is a deconvolution routine like no other I've tried. My routine after stacking is: PixInsight: DBE or ABE, SCNR Green, BlurXTerminator. Photoshop: everything else. Be aware, there is a lot more to learn in processing than there is in capture. It should not take you long to capture data at the limit of your equipment, but you will never reach a point where your processing gets the very best out of it. Olly
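The SCNR Green step in the routine above has a well-known "average neutral" form: cap the green channel at the mean of red and blue. A NumPy sketch of that idea only (not PixInsight's actual code, and file handling is omitted):

```python
import numpy as np

def scnr_average_neutral(img):
    """Reduce a green cast: G' = min(G, (R + B) / 2), per pixel.

    img : float array, shape HxWx3, values 0-1, channels in R, G, B order.
    """
    out = img.copy()
    neutral = (img[..., 0] + img[..., 2]) / 2.0
    out[..., 1] = np.minimum(img[..., 1], neutral)
    return out
```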
  13. There's no point in using short exposures if you don't need them, and it's easy to see if you do. Take a test sub and read off the brightness value of the brightest part. If it's not saturated in linear (unstretched) form, there is no need for it to become saturated with careful stretching. Leaving yourself a bit of a margin would be helpful, though. With globulars, it's all in the stretch if you're not saturated. A handmade Curve with a heavy lift at the bottom, flattening early, is what's needed. Alternatively, you can do two stretches, one gentle to keep the core controlled, the other hard to bring in the outer stars, and blend them using a high dynamic range technique. Just mixing long and short in a single stack won't solve any problems. You do need an HDR blend. Photoshop has one built in, there are proprietary ones available, or you can use layer masking. I use this method in AP: https://www.astropix.com/html/processing/laymask.html Olly
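A sketch of both checks in the post above: reading off the brightest value of a linear test sub, and blending a gentle stretch with a hard one through a simple luminosity mask. This is in the spirit of the layer-masking link rather than a copy of it, and the full-well value, margin and gamma figures are invented:

```python
import numpy as np

def is_saturated(linear_sub, full_well=65535, margin=0.95):
    """True if the brightest pixel of the unstretched sub is near saturation."""
    return linear_sub.max() >= margin * full_well

def blend_stretches(linear, gentle_gamma=0.5, hard_gamma=0.2):
    """Blend a gentle stretch (protects the globular's core) with a hard one
    (brings up the faint outer stars) using a simple luminosity mask."""
    img = linear / linear.max()                 # normalise to 0-1
    gentle = img ** gentle_gamma
    hard = img ** hard_gamma
    mask = gentle                               # bright core -> mask near 1
    return mask * gentle + (1.0 - mask) * hard  # gentle where bright, hard where faint
```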
  14. I think it's simply that the spikes are reduced to vestigial extensions close to the star and don't extend into full spikes. We just see the four spike 'bases,' if you like, and these tend to form a square. I haven't knowingly seen any Sharpstar images but I've seen the phenomenon on Tak Epsilon images and also on images from the Vixen Cassegrain which had thick vanes and no corrector plate. I'm absolutely not a pixel peeper and found them pretty obvious. Olly
  15. Obviously a dew magnet? Certainly not. The RASA 8 is the least dew-prone imaging optic I've ever used. We have never, ever, had any dewing and we have no dew heater. With a short dew shield (I made this one) the camera warms the air around it and its fan also keeps it circulating. The fact that it is a zero-dew rig is a selling point and, for me, a real luxury. This is not an SCT! At a given focal length, F2 is a hell of a lot faster than F2.8: 4 minutes versus 7.84 minutes. Nearly twice as fast. That's a lot. I don't agree on resolution. The RASA 8 is not diffraction limited, and speed tends to work against resolution anyway. It does not resolve at the level of a slower 8 inch instrument. However, I think the relevant and meaningful comparison is how it resolves against other possibilities offering the same focal length and, in this comparison, it's fine by me. I maintain that it does better on non-stellar detail than stellar. Vlad insists that this is impossible. Hey-ho. I think the OP can decide by looking at RASA images and seeing if he'd be happy with that resolution. Diffraction spikes at a FL of 400mm means... a lot of diffraction spikes. Tomato is right to flag up the QC issue. I would only buy from a retailer who acknowledged this risk and would offer no-quibble returns. Tilt is fixable, the stars could be better, but I've never had as much fun as I'm having with the RASA, despite having at least 250 clear nights a year. Olly
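The f/2 versus f/2.8 figures above are just the square of the focal-ratio change (exposure scales with the focal ratio squared at a fixed focal length). A one-liner to check the arithmetic:

```python
# Exposure time scales with the square of the focal ratio (same focal length).
f_fast, f_slow, t_fast_min = 2.0, 2.8, 4.0
t_slow_min = t_fast_min * (f_slow / f_fast) ** 2
print(f"{t_fast_min} min at f/{f_fast} ~= {t_slow_min:.2f} min at f/{f_slow}")
# -> 7.84 minutes, i.e. nearly twice as long.
```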
  16. I don't know about the time balance (though it looks fine to me) but your total integration times are the real thing, and so are the images. All the images have a real 'three colour dimensions' feel to them, the Soul nebula above all. That's a wonderfully broad gamut. Olly
  17. Lovely image. Do you have a slight channel alignment issue? Stars seem to have a blue upper edge and a red lower one. More precisely, it seems to be on the clock face axis running 11.00 - 5.00. Original: Red channel moved up by 0.5 pixel and left by 0.2 pixel: Olly
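The fix described above, nudging the red channel by fractions of a pixel, can be done with a sub-pixel shift. A sketch using scipy.ndimage.shift (a real SciPy function); the 0.5 and 0.2 values are the ones quoted in the post, while the sign convention depends on the image's orientation:

```python
import numpy as np
from scipy.ndimage import shift

def nudge_red(img, dy=-0.5, dx=-0.2):
    """Shift only the red channel by a sub-pixel amount.

    img : float HxWx3 array, channels in R, G, B order.
    dy, dx : shift in pixels; with row 0 at the top, negative dy moves
    the channel up and negative dx moves it left.
    """
    out = img.copy()
    out[..., 0] = shift(img[..., 0], (dy, dx), order=3, mode="nearest")
    return out
```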
  18. I'm terminally old school and would always prefer to ditch all hubs, mini PCs etc., and run a large number of direct USBs into a desktop PC with enough USB ports. Olly
  19. More Milky Way adventures with Paul Kummer, Peter Woods and the Samyang 135. Paul did the capture, stacking and stitching, taking this as a final crop from 7 panels. Just an hour per panel, wide open at F2. Anyone doubting the existence of Nessie can see her strolling along just above the Coathanger. Olly
  20. Nicely spooky, as it should be. Olly
  21. That looks well resolved to me. Olly