Everything posted by ollypenrice

  1. I'm not entirely in agreement with your conclusions on priority. Mine would be mount-camera-scope, but not to worry. The 120 is neither better nor worse than the 100. If you want a wider FOV, the 100 is better. If you want higher resolution, the 120 is better. But, of course, it isn't that simple! (It never is.) If you went for the 120, what would your sampling rate be in arcsecs per pixel? If it were, say, 1 arcsec per pixel, can you guide with an RMS of 0.5 arcsecs? If you can't, there is no point in having the longer focal length with the given camera. Your guide RMS in arcsecs needs to be no more than half your image scale in arcsecs per pixel. If you want a simple answer: the wider FOV of the 100 is likely to be more productive than the higher resolution of the 120, but the key variables are guiding accuracy and pixel size. (A quick sketch of the arithmetic follows below.) Olly
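For anyone who wants to run those numbers, a minimal Python sketch of the sampling arithmetic (the focal lengths and pixel size below are made-up examples, not figures from the post):

```python
def image_scale(pixel_um: float, focal_mm: float) -> float:
    """Arcseconds per pixel = 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# Hypothetical focal lengths for a 100 and a 120, with a 3.76 um pixel camera:
for name, focal_mm in [("100", 900.0), ("120", 1000.0)]:
    scale = image_scale(3.76, focal_mm)
    print(f'{name}: {scale:.2f} arcsec/px -> guide RMS should be <= {scale / 2:.2f} arcsec')
```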
  2. And you'd be right. Something is not well here. Was the cooler active? Olly
  3. SCTs take a monumental hit on the used market because many people buy them (or bought them), expecting to use them for deep sky imaging. Rightly or wrongly (rightly in my view) they gave up on them for this purpose, so the market is saturated. The mounts are not well made, not very reliable over time and entirely useless for DS imaging. So I think that the "60% of new" baseline does not operate here. 25% would be very optimistic. It gives me no pleasure to say this. I've sold 8 inch and 10 inch Meade SCTs at great loss and have a 14 inch LX200 here at the moment. I like it very much indeed. It's a very enjoyable visual instrument but the visual market is very limited compared with the imaging market... Olly
  4. The results below give an insight into the imaging process. The final image had about 30 hours total exposure in Ha LRGB. If we start with the luminance only, this is what a simple log stretch on its own delivered. (This is a crop of just the central region since you're interested in sharp detail.) As you can see, it is quite soft and fuzzy. However, the final image ended up with a core like this: It's a lot sharper and more contrasty. So, as others have said, a great deal of the final detail is teased out in post-processing. It's not 'invented' in post-processing. We can be sure of this because the same features are revealed time and again in high resolution images of this target. The aperture used for this was larger than yours at 140mm but was still a modest aperture. But here's the rub: you can only sharpen, stretch and otherwise aggressively process deep data with a good signal to noise ratio. If you over-process a weak data set it soon breaks down into noise. If you simply had a lot more of what you have already you could take it much further (see the sketch below for why). HEQ5, small apo or imaging Newt, cooled CMOS astro camera. This could come in on budget. (On SGL, being 'on budget' always means about 25% over the stated budget. 🤣 But we mean well...) Olly
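A quick illustration of why 'a lot more of what you have' works: stacking N comparable subs improves the signal to noise ratio by roughly the square root of N, assuming uncorrelated noise. A tiny sketch with a made-up single-sub SNR:

```python
import math

single_sub_snr = 5.0  # hypothetical SNR of one sub
for n in [1, 4, 16, 64]:
    # SNR grows ~sqrt(N): quadrupling the total data doubles the SNR
    print(f"{n:3d} subs -> SNR ~ {single_sub_snr * math.sqrt(n):.1f}")
```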
  5. Is there a mono version? I don't know. In comparing my NAN and Trunk-to-Bat HaLRGB images with Yves' result I noticed two things: his Ha has actually gone deeper than mine and there is something about the way the Ha emerges from the gas and dust which is more progressive and natural. His camera has given a 'look' which I like very much indeed. And in 90 minutes per panel. Olly
  6. Carole, I've just spent a week working through 32 panels of Yves' capture of the Cepheus/Cygnus region. This was with his QHY full frame CMOS OSC camera. My feeling is that this camera has changed the game - big time. His data bears no resemblance to any OSC data I've ever worked with before and my finger is twitching over the 'spend' button... https://www.astrobin.com/full/g82xf7/B/?real=&mod= The Squid has OIII data added. The rest of the image is OSC, pure and simple. No added Ha. The times they are a'changin'... Olly
  7. The quick Photoshop answer is to add Ha to red in blend mode lighten. Don't add it as luminance at anything but a very small percentage and be careful with the colour underneath it. Olly
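For anyone who prefers to see the 'lighten' blend from the post above as arithmetic, a minimal sketch (assuming the Ha and red channels are already registered and normalised 0-1 float arrays, rather than Photoshop layers):

```python
import numpy as np

def blend_lighten(red: np.ndarray, ha: np.ndarray) -> np.ndarray:
    # Lighten keeps the brighter of the two values per pixel,
    # so Ha only contributes where it exceeds the existing red.
    return np.maximum(red, ha)
```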
  8. Sure. Whether or not 'one flat fits all' is, I'm sure, specific to the setup. One test you can do is to apply one flat to another. (I do this in tutorials to demonstrate what flats actually do.) The result, of course, should be perfect flatness. (A rough sketch of that test follows below.) Olly
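A rough sketch of the flat-on-flat test (assuming two master flats loaded as float arrays; the function name is mine, not from any package):

```python
import numpy as np

def flat_test(flat_a: np.ndarray, flat_b: np.ndarray) -> float:
    ratio = flat_a / np.maximum(flat_b, 1e-6)  # avoid divide-by-zero
    ratio /= np.median(ratio)                  # normalise around 1.0
    return float(np.std(ratio))                # ~0 means perfectly flat

# A small residual standard deviation suggests one flat really does fit all.
```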
  9. I routinely use L flats for everything, without issue, ninety-odd percent of the time. I don't find bunnies are generated by my filters. I also find they can last a lot longer than a month. However, my setups are permanent and all sealed. What about calibration? For CCD you can use a master bias as a dark for flats. With CMOS you will need 'same settings' darks for flats. There's a danger of over-correction if you don't calibrate them. Newt users might find issues with sky flats when light enters the rear of the scope. (A sketch of the calibration arithmetic follows below.) Olly
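A minimal sketch of that calibration arithmetic (illustrative only; here 'flat_cal' stands for a master bias with CCD, or a 'same settings' master dark with CMOS):

```python
import numpy as np

def master_flat(flats: np.ndarray, flat_cal: np.ndarray) -> np.ndarray:
    calibrated = flats - flat_cal            # remove the offset/dark signal
    stacked = np.median(calibrated, axis=0)  # combine the flat subs
    return stacked / np.mean(stacked)        # normalise around 1.0
```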
  10. Let's start here: "Additionally, my current processing relies mostly on DeepSkyStacker and Photoshop. I have heard some horror stories about how much harder it is to process images from mono+filter cameras vs 'one-shot' DSLR imaging. I like to think I'm a reasonably smart guy, with a scientific background, but how much steeper is the learning curve?? I don't want to spend the money on the great kit and then find out I can't actually use any of the data!" With minimal knowledge of how to use a computer I went straight into mono-with-filters imaging and got results on the first night. I do my stacking and calibrating in AstroArt 5 (cheap, simple, fast and effective). This is the procedure: stack your reds, your greens and your blues separately. Open them. Click to 'align all images.' Open Colour-Trichromy, tick White Balance, put the red file in the red box, the blue in the blue box etc. and say go. That's your RGB. How can that be difficult? You then stack your luminance. My next two steps are in Pixinsight but they can be done in Ps if you buy the plug-in Gradient Xterminator (which you should anyway if you don't have PI). The idea is to balance colours and remove gradients, which you have to do with OSC images as well. Now you process the RGB and luminance separately. This makes life much easier and is often done by OSC imagers who extract a synthetic luminance from their OSC file. The colour does not need detail. Really it needs strong colour, low noise and little else, so it's a dead simple process. No sharpening, no need to worry about careless noise reduction, no need to extract fine details. Next the luminance. Because this was captured with all the light hitting all the pixels it is very strong in signal to noise, so it is easy to extract faint signal and easy to sharpen the bright. Paste the L onto the RGB in blend mode luminance and that's it. (A rough sketch of the idea follows below.) In my experience it is easier to get a good image from LRGB than from OSC. Caveat: I just processed 32 panels for a mosaic from a new generation QHY CMOS OSC camera and this was nothing like any OSC camera I've ever used before. The data were sensational. However, if you are fighting LP and want to do pure narrowband then mono remains best. Regarding upgrades, a cooled astro camera will beat a DSLR. If you want to stay with OSC you can, with a CMOS chip. If going for CCD I remain convinced that mono wins quite easily. Olly
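As a very rough illustration of the 'paste L as luminance' step (a crude numerical stand-in for Photoshop's luminance blend, not its actual algorithm; inputs are assumed to be registered, stretched, normalised float arrays):

```python
import numpy as np

def lrgb_combine(l, r, g, b):
    rgb = np.stack([r, g, b], axis=-1)
    lum_proxy = rgb.mean(axis=-1, keepdims=True)  # crude luminance of the RGB
    # Rescale each pixel's colour so its brightness follows the L frame,
    # keeping the colour ratios (the 'strong colour, low noise' part) intact.
    return rgb * (l[..., None] / np.maximum(lum_proxy, 1e-6))
```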
  11. It's also possible to spray dense foam onto surfaces. A friend did this to a panel van he was converting into a motor home. It gave a coating a few mm thick, from memory. When I did a camper van conversion I glued camping mat to the inside of the van's panels and that was also quite effective (though there was more insulation after that.) I remember that the cost of the necessary glue was rather alarming! Olly
  12. Sorry but I can't download this file. Maybe too big for rural French internet! Olly
  13. Not only is it not cheating, it's the whole point of this game in my view! The art of image processing can be summed up as the management of the data's brightness range. Olly
  14. Vlaiv's point above hits the nail on the head. Forget about how much sky you have around your target galaxy. This has absolutely nothing to do with the 'natural size' of the galaxy's image. By its natural size I mean its size on screen at 1:1. That's one camera pixel being given one screen pixel. If you take two cameras with the same sized pixels in the same scope, one camera having a huge full frame chip and the other a small astro-camera chip, the galaxy will be exactly the same size and have the same resolution in both images. You'll just have to crop out the spare sky if you don't want it there! Another way to think about it is to ask yourself how many pixels you are going to put under the galaxy's projected image from the scope. In order to keep it simple and accurate, think strictly in terms of arcseconds per pixel. This is the real unit of resolution and arises from focal length and pixel size. Your guiding, your seeing and your optical resolution will conspire to set a limit on this. You need some idea of what that limit might be, though it varies considerably with the seeing. The FWHM value (essentially a measure of star sizes on the chip) can easily double on a bad night. Personally I find that I'm usually over and out at roughly an arcsecond per pixel, though I actually sample at 0.9" per pixel. That gives a satisfying scale on galaxies. Your guide RMS needs to be about 0.5 arcsecs for this. Good HEQ5 mounts can deliver this with careful setting up. I think your M101 is good. Stars are tight and round. With a DSLR I would suggest a large dither, about 12 pixels, to combat noise. This, combined with a large number of subs stacked using a sigma reject algorithm, will give you very nice data to work with. (A rough sketch of sigma rejection follows below.) Olly
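For the curious, a minimal sketch of sigma rejection (illustrative, not any particular stacker's implementation; 'subs' is a stack of registered frames as a 3-D float array):

```python
import numpy as np

def sigma_reject_stack(subs: np.ndarray, kappa: float = 2.5) -> np.ndarray:
    mean = subs.mean(axis=0)
    std = subs.std(axis=0)
    keep = np.abs(subs - mean) <= kappa * std          # survivors per pixel
    kept_sum = np.where(keep, subs, 0.0).sum(axis=0)
    return kept_sum / np.maximum(keep.sum(axis=0), 1)  # mean of survivors
```

Dithering matters here: it moves fixed-pattern defects to different pixels in each sub, so the rejection can see them as outliers.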
  15. Heh heh, the core of M31 is a barrel of laughs for the imager! I think that, in the end, all who try end up in about the same position. Welcome to the club! Further penetration is going to need either new cameras or a radically different approach in processing - or both. What you have is good. Olly
  16. You've massively black clipped the data so we don't have a view of everything captured by the camera but this is very strange. What's needed is a link to the linear stack. (Stacked with no processing.) Olly
  17. This is a problem with lightweight EQ mounts and partly explains why so many visual observers prefer the Dobsonian mount. It should be possible to fit a motorized focuser controlled by a small handset but this might be out of proportion to the setup cost-wise. Olly
  18. Yves felt it would take more and it did. A medium-sized one is here: https://www.astrobin.com/full/g82xf7/B/?nc=user Olly PS: at no point in the processing did I apply any noise reduction. There was no need, even working with the gargantuan original dataset.
  19. No no no, until recently this road to Damascus hadn't been built. The old road was long, potholed, rutted and plagued by dripping tunnels full of noisome goats. The new road sweeps smoothly on velvet tarmac six lanes wide... 😁lly
  20. Tom's already done this. In fact here he is looking (rightly) rather pleased with himself: https://astrophotography.ie/aboutme.htm 😁lly
  21. After finishing the processing of this project initiated and captured by Yves Van den Broek I thought I'd write down some notes in case they might be helpful. What I received from Yves:
      - 32 separate OSC panels stacked and calibrated in APP.
      - A 32 panel mosaic constructed in APP. This was a good template for a final image but was not at all satisfactory as an image, with gradients and joints visible and many of the panels badly black clipped. So...
      - The captures were done robotically using plate solving, so the APP mosaic was almost oblong, barring the sawtooth edges. I used lens correction in Ps to make it perfectly oblong so as to reduce cropping at the end. (The Crescent was in danger of exclusion!) This would be my template for aligning the subs in Registar.
      - Pixinsight: I edge cropped each sub and did a quick DBE on all of them, with a touch of SCNR green on some. Typically I used no more than 4 to 6 background markers. All the subs flattened easily.
      - Photoshop CS3: I recorded an initial modest stretch (a levels-curves routine) as an action and applied it to all subs. This makes them consistent and warns you of any problems they might have - though these were excellent. It's also super-fast. (The odd sub doesn't come out like the others and needs a levels tweak, but the system proved sound.)
      - Registar: The mosaic was 4 panels wide and 8 panels deep, so I worked from the top down making strips of 4 across, but as well as saving the combined strip of 4 I saved the 3 registered calibrated subs separately because...
      ...I took each strip of 4 into Ps to check it. Use the Image-Adjustments-Equalize function for these checks. This is a top tip: joints, corners and bad blends stand out when equalized. If you have two joined panels called Overlap and Underlap, there are two possible joint problems, the edge of Overlap and the edge of Underlap. If you have a line where Overlap ends, open the registered calibrated Underlap and paste it on top. Using a feathered eraser, remove it from everywhere except the line you're wanting to hide. (A rough sketch of this feathered repair follows below.)
      Fortunately my super PC could do post processing routines from here onwards on the whole mosaic, but I also recorded the second and final stretch as an action. That meant that if a problem appeared in the mosaic we could take a repair panel and give it exactly the same stretch to speed up making a repair patch.
      And there you have it. If there's a program into which you can chuck 32 panels and expect it to knock out a seamless mosaic with well matched levels, colours and joints... I haven't found it! On mega mosaics you have to roll up your sleeves and help them. Olly
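A rough code-form sketch of that feathered join repair (assuming two registered panels as float arrays and a vertical seam; the function name and feather width are illustrative, not from any of the tools mentioned):

```python
import numpy as np

def feather_join(overlap, underlap, seam_col, feather_px=50):
    h, w = overlap.shape[:2]
    # Triangular mask: 1.0 on the seam column, fading to 0 within
    # 'feather_px' columns either side - like the feathered eraser,
    # the pasted underlap shows only along the line being hidden.
    ramp = np.clip(1.0 - np.abs(np.arange(w) - seam_col) / feather_px, 0.0, 1.0)
    mask = np.broadcast_to(ramp, (h, w))
    if overlap.ndim == 3:                 # colour images: add a channel axis
        mask = mask[..., None]
    return overlap * (1.0 - mask) + underlap * mask
```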
  22. These 32 panels from a full frame camera were captured by Yves Van den Broek (VdB on SGL) using his remote rig based here at my place. He stacked the frames and put them into APP, which made a good job of the mosaic geometry but not of the blending. He then gave me the files so I slightly altered the geometry of the APP mosaic to reduce the need for final cropping and then processed the 32 subs. I used the modified APP mosaic as a template in Registar, where I combined the subs over a few days. Further post processing was done in Ps with some existing data, including my Squid Nebula, added for good measure. Yves' rig: Tak FSQ106EDX with focal reducer, full frame QHY CMOS one shot colour camera and Mesu 200 in the Per Freyjvall observatory. A loud thanks to Yves for asking me to get involved with this. Cepheus is an astonishing constellation for the imager and discovering all its hidden secrets while working on this was fascinating. The scale of the dust structures beggars belief. Note that there is no filtered Ha in this image. This is like no OSC camera I've ever tried before. I want one! It's been a while since the days of regular collaborations with Yves and it's been fun. Olly
  23. That's a roaring success. Lovely, seamless job and a flowing composition made three dimensional by the Crescent. (By chance I've just been processing a mega mosaic myself with the Crescent just in the bottom corner. It's OSC data shot by Yves Van den Broek.) Olly
  24. Here's my position for the record: I do usually use a lum flat for everything and it nearly always works. On the rare occasions it doesn't I take flats per filter but that's very rare. This shows that dust bunnies are not generated by my filters but elsewhere - flattener or chip window. I can't speak for anyone else's setups but this seems to be the case for ours. Also the lum layer will illuminate the rest of an LRGB image by definition, so a fully flattened lum layer will go a long way towards flattening the RGB anyway. This implies that narrowband imagers might want to do flats per filter because they're not necessarily going to use a luminance layer - though some do use Ha as luminance. Also NB data tends to take a harder stretch and have higher contrasts. For all that, I usually flatten my NB data with a lum flat but I don't use it to make NB images so much as NB-enhanced LRGB. There are times when we cannot process all the data taken in one night here. A nice problem to have - but harmless shortcuts do have their place! Olly
  25. Do yourself a simple favour and orientate your camera along RA and Dec each night. Get the camera parallel with the C/W bar or dovetail by eye and then do a 10 second sub while slewing slowly in one axis. The resulting trails show the current camera angle. Get the trails horizontal or vertical. This will save you much unhappiness! Olly
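A tiny sketch of reading the camera angle off that slew-trail sub (the coordinates are made-up examples; measure the two endpoints of one star trail in pixel coordinates):

```python
import math

x1, y1 = 420.0, 310.0  # trail start in pixels (illustrative)
x2, y2 = 980.0, 335.0  # trail end in pixels (illustrative)
angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
print(f"Trail angle: {angle:.1f} deg - rotate the camera until this is ~0 or ~90")
```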