Everything posted by ollypenrice

  1. In your focally reduced scope the 450D is going to be working at about 0.7 arcseconds per pixel. You can't bin because it's one shot colour. This means you'll be massively oversampled and each pixel will be getting very little light, so your signal to noise ratio will be poor. And you have no hope of resolving real detail at 0.7"PP. Nothing, literally nothing, is going to make a DSLR the right camera to use at your focal length. It's important to get your head round why this is.

But... ...but you can start with a DSLR, of course. It will be a very inefficient imaging system. The focal length will require very long exposure times and will crop your field of view without giving you any more real detail than a shorter focal length. You'll still get pictures, though.

If I sound very negative it's because I just want to suggest that you ask yourself how much money you want to throw at a system which will never be a good one. A cheap DSLR to get you started, why not? But I wouldn't get drawn into buying an expensive one. What your long focal length needs is big pixels, not small ones. Big pixel cameras are going out of fashion, but long focal length owners like you can go down the monochrome route, when the time comes, and bin their pixels 2x2 to make them, in effect, four times as big and more efficient.

I don't think anyone so far has recommended Steve Richards' book: https://www.firstlightoptics.com/books/making-every-photon-count-steve-richards.html If most of what I've said sounds like gibberish (you wouldn't be the first to say so!) you'll find it very helpful.

Olly
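To put a number on that sampling claim, here is a minimal sketch. The 450D's roughly 5.2 µm pixel pitch and the ~1500 mm example focal length are my assumptions, not figures from the post:

```python
# Pixel scale from pixel size and focal length: 206.265 * um / mm.
# Assumed values: ~5.2 um pitch for the 450D, ~1500 mm reduced focal length.

def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

print(pixel_scale_arcsec(5.2, 1500))      # ~0.71 "/pixel -- heavily oversampled
print(pixel_scale_arcsec(5.2 * 2, 1500))  # ~1.43 "/pixel -- what 2x2 binning would give
```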
  2. Beware of internal reflections. One of our guests asked Yuri Petrunin if he was going to introduce a focal reducer for the 140. He replied, 'No, buy another telescope.' You have been warned!!! Olly
  3. Vlaiv's 'Aperture at resolution' seems to be aiming at the same information as my 'flux per pixel', I'd have thought, but I may be wrong. I suppose 'flux per pixel' would be incredibly hard to ascertain from theory, since things like vignetting and image circle will come into it and be all but impossible to quantify. Could we not just use a formula linking area of aperture with area of pixel? This would give a rough guide to pixel illumination. If we knew the light fall-off heading outwards from the centre of the cone we could make it more precise. What a great website: Vlaiv's pixel illumination calculator. 👋 I've always been full of good ideas for other people. Olly
  4. One interesting test any of us could do would be to pre-process and post-process an image we already have but using only half the data. My prediction would be this: the half-data image, when shown at some fraction of full size, will look pretty similar to the full data image. It will break down into noise as we take it closer to full size. This would give us a practical feeling for the role of resizing in influencing perceived image quality. Olly
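A toy numerical version of that test, as a sketch. It assumes pure random noise with no gradients or fixed pattern, which is my simplification, but it shows the resize effect the post predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
SIGNAL = 100.0

def stack(n_subs: int) -> np.ndarray:
    """Mean-combine n simulated subs: flat signal plus sky noise of sigma 10."""
    return SIGNAL + rng.normal(0, 10, (n_subs, 512, 512)).mean(axis=0)

def snr(img: np.ndarray) -> float:
    return SIGNAL / (img - SIGNAL).std()

def bin2(img: np.ndarray) -> np.ndarray:
    """Show at half size: average each 2x2 block into one screen pixel."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img.reshape(h, 2, w, 2).mean(axis=(1, 3))

full, half = stack(40), stack(20)
print(snr(full), snr(half))              # half the data: SNR down by ~sqrt(2)
print(snr(bin2(full)), snr(bin2(half)))  # shown at half size, both roughly double
```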
  5. Yes, that's what I'm saying. In fact I said it already way back in the thread! For me, '100%,' meaning one camera pixel for one screen pixel, is as large as I will ever present an image and even getting to that level is time consuming. Olly
  6. I think it does address the point, because the reducer applies the light from the same aperture to fewer pixels, an increase in flux per pixel. Regarding your M1 example, this is pure 'F ratio myth' territory whether you subscribe to the myth or reject it. My contention is that you don't need the reducer. You could just work in Bin 2 or resample the native image downwards for a comparable result without the expense of the reducer, its more critical focus and its tendency, in some cases, to plague you with internal reflections, spacer distances, tilt, etc. The extreme example would be the Hyperstar. Some of the wording on their website implies that exposure times go from hours to seconds. So they might, but not of the same target! You can hardly compare M51 at F10 and F2 from the same aperture because at F2 it would be tiny and could clearly not be resampled upwards to the size of the F10 image.* Olly *Edit: Cannot be usefully resampled upwards, as Vlaiv says below.
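The arithmetic behind that contention, with an illustrative 0.7x reducer (the value is mine, not from the thread):

```python
# Same aperture, so the same total light; only the pixels it lands on change.
reducer = 0.7
print(1 / reducer ** 2)  # ~2.0x flux per pixel from a 0.7x focal reducer
print(2 ** 2)            # 4.0x flux per effective pixel from 2x2 binning
```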
  7. Adam and Vlaiv in particular, how about this caveat for the diagrams I posted? These diagrams assume workable sampling rates for the camera in all cases. If the longer focal length introduces over-sampling it will add no resolution and the imager would benefit from the reducer since it will add speed without reducing resolution. An alternative would be to bin the data when oversampled. I'm left wondering if we don't need a new unit. Arcseconds per pixel is fine for resolution but don't we really need something indicating flux per pixel? This might be indicated by something like square mm of aperture per pixel, no? Olly
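One way to make that proposed unit concrete, as a sketch (the function name and the example figures are mine; the 140 mm / 980 mm numbers are purely illustrative):

```python
import math

def flux_per_pixel_index(aperture_mm: float, pixel_um: float, focal_mm: float) -> float:
    """Aperture area (mm^2) times the patch of sky one pixel sees (arcsec^2).
    For extended objects this tracks photons per pixel, ignoring losses."""
    area_mm2 = math.pi * (aperture_mm / 2) ** 2
    scale = 206.265 * pixel_um / focal_mm      # arcsec per pixel
    return area_mm2 * scale ** 2

# Same aperture with and without a 0.7x reducer: the index roughly doubles,
# which is the flux-per-pixel gain, not any extra light through the aperture.
print(flux_per_pixel_index(140, 5.4, 980))
print(flux_per_pixel_index(140, 5.4, 980 * 0.7))
```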
  8. True! However, I run imaging courses and often work with beginners or relative beginners, and I don't agree with the idea that you need to start with a DSLR or OSC. I started with monochrome CCD on the advice of the very expert Ian King, now at FLO, and I think it was great advice. I pass it on. Also, a fair number of our guests have told me that they think their time with DSLRs was time wasted. I know this is a minority opinion but I hold it none the less. Olly
  9. A curve which straightens early will protect your stars (but perhaps at the expense of contrast in brighter nebulosity). Olly
  10. The Atik 460 tends to have noise in the form of overly dark pixels in the background sky and its star colour is - how can I say - not reliable or convincing and needs working on. It is also hard to stretch up the background sky without blowing out the rest. I end up settling for a darker sky than I prefer. The Moravian 8300 produces a colour balance at parity of exposure which needs considerable adjustment in post processing and the colour data seems curiously weak. I haven't been using it for all that long so it may be my fault, but I've done darks and flats absolutely by the book several times. It's also possible that the filters are not as they should be but they are in a size of which I have no others to swap round. Olly
  11. I think that the best thing - and it's a very important thing - is that you've critically appraised the image and doubted that you have the nebulosity in it. The most important thing any imager needs is the ability to study an image to see what's good and bad about it, at every step of the way. You've done that. Olly
  12. ...which looks great, for sure, but it has to work for real, with no unpredicted difficulties not contained in the numbers. That's the perennial bugbear. Both of the other cameras I use throw up things which make processing harder. I wish I knew why... Olly
  13. Possibly not, because some of my preference may arise from the fact that I learned my processing on the Atik 4000 and 11000 which are very similar beasts. Maybe I've settled into habits which are closely related to these cameras. How would I know? Olly Edit. 0.6 reducer for me? No, 3.5"PP is my limit.
  14. Yes, I'm not competent in all the ways of resampling because I don't do it, other than to compensate for JPEG artefacts when posting, but I really don't believe that there will be a serious difference between them. A slight difference, perhaps. We really need examples. And, yes, if you are oversampling you might as well bin. (Another reason for not using one shot colour!) My graphic efforts did not allow for this distinction but I prepared them for use in a discussion about the optical side of imaging.

This might be a time to consider a few practical realities as well:

- Some CCDs bin not at all, or bin badly. Beware.
- Many experienced imagers do not find that Bin 2 gives anything like a 4x reduction in exposure time in reality.
- Very fast optics will usually gain time only after they have cost the imager many nights of wasted time.
- Large pixel cameras are not available to the amateur with sensitivity comparable to small pixel cameras.
- Mysteries. Why do I greatly prefer processing my Atik 11000 data over data from an Atik 460 and a Moravian 8300? Because I'm not fighting the data, particularly on colour. But why not? I don't know. I just know that it isn't up for sale or replacement.

Olly
  15. You can use the reducer and crop, yes. Use it, crop and enlarge? No. Certainly not in my view. One might use a focal extender to improve resolution if aperture isn't the limiting factor in resolution, as it might not be in a fast, short FL scope. Or you can use a reducer to get a wider FOV. (You'll get a satisfactory S/N ratio faster on that wider FOV image as well, but I argue that this is only worthwhile if you actually want all that FOV - as often you might.) I find that there isn't much I can't do with two scopes and two cameras, short of camera lens style widefield. My FLs are 530 and 1015mm. At 530mm and 3.5"PP I can do widefield. At 1015mm I can do high res imaging at 0.9"PP with one camera or semi-widefield, decent resolution imaging at 1.8"PP. We discussed the implications of 3.5"PP in a recent thread, Rodd. This is a crop from my Witch Head shown at full size. I can live with this sampling rate. Obviously I'd like four times as many pixels but I've got what I've got! Olly
  16. I don't think this is difficult. A key term here will be 'empty resolution.' If you upsize an image you make it bigger without increasing its resolution. (A close double not quite split will remain not quite split however large the image is presented, because the data does not contain the split.) The increase in size produces 'empty resolution.' So upsizing an image will always produce empty resolution. Imaging the same object at a longer focal length may or may not produce empty resolution. If the seeing and guiding don't support the image scale then the larger image will contain empty resolution. But if the seeing and guiding do support the longer focal length then the resolution will be genuine and will never be matched by the upsampled image. However, the S/N on the object of interest will be better in the smaller image. If upsampled, how would it compare with the normally sized image at its normal size? I can't say I greatly care, because it's hard enough getting an image to hold up at full size anyway, though many of my TEC140 images are presented close to that. I would never post an upsampled image. Olly
  17. This has always been my position on F ratio as well. A good while ago I posted this, which I think makes precisely the same point: With it I posted this: This ignores the effect of object flux per pixel and really needs updating to include it. However, I stand by the optical point it makes and, in practice, I still think that resampling the left hand object image (M33) down to the size of the M33 in the reduced image will produce results only trivially different from each other. Adam also said, 'There is no f-ratio myth, just people who don't understand optics.' I would argue (with a smile!) that the myth arises from failing to understand optics. (In one of my other areas of interest I'd also upbraid myself for the use of 'myth' when the right term would be 'fallacy' but the term 'F ratio myth' was out there so I used it.) I got involved in this debate because we very often see reducers advocated as providing benefits they do not, in fact, produce. As Adam says in the quotation with which I began, we must only compare different apertures at the same focal length. The fact that F stop and aperture are used as synonyms in the camera world is made possible because there is no change in FL. I think there is a myth and this is where it started. Olly
  18. Much better background. The gradient is almost gone but you could hit it still harder with an aggressive DBE, I think. I know you want the outer glow of the Ring but not the brighter stars. You could try a custom stretch in Curves. Give the bottom a bigger lift than the top: lift the bottom, but bring the curve to a straight line from quite low down. The thing is that I can see nothing in the image which needs such a hard stretch in the upper brightnesses. Only the outer nebula and the little galaxy really need to be stretched hard. In PI the idea would be to mask the rest. No idea how to do that, though. In Ps you can just use layers as outlined above. Olly
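Roughly what that curve does, expressed as a sketch (the knee and lift values are illustrative choices of mine; the image is assumed normalised to 0..1):

```python
import numpy as np

def shadow_lift_stretch(img: np.ndarray, knee: float = 0.15, lift: float = 2.5) -> np.ndarray:
    """Steep lift below the knee for the faint outer glow, then a straight,
    gentler line up to white so the brighter stars are protected."""
    y_knee = knee * lift                           # where the steep section ends
    upper_slope = (1.0 - y_knee) / (1.0 - knee)    # straight line up to (1, 1)
    return np.where(img < knee, img * lift, y_knee + (img - knee) * upper_slope)
```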
  19. DBE. Simple as that. Unfortunately it has to be applied to the linear image, so it involves going back to square one. After an edge crop it's the first thing I do in post processing.

I follow Rogelio Bernal Andreo and Harry Page in applying a small number of carefully placed background markers, scrupulously avoiding anything but pure background sky. Give the linear data a very aggressive screen stretch (called STF in PI) so as not to put any markers on fuzzies, the glow around stars, or outlying nebulosity - particularly when processing galaxies, which get bigger as you stretch. The advantage of fewer markers is that they let the software estimate broad changes in gradient rather than more local ones. Don't use DBE to correct star flare, for instance. When the gradient map appears, look at it before applying it to ensure that it really is a broad gradient map and has no local details in it whatever.

In your image I'd try one marker in each corner and one half way along each side at the edges. Maybe two more, one each side of the ring but half way to the edge in each case. You don't want it to mess with that faint outer glow or to read that little spiral. I can't promise that this will be right but it would be my own first guess processing this data.

Screen stretch the result image after the DBE map has been applied using subtraction. If it still doesn't have a neutral background, increase the 'Tolerance' value and try again. Once you get to a stage where there's maybe just a bit of green gradient left, stop. Switch to SCNR green at a low value on the slider and increase it till you get rid of the green gradient. You want to apply it as lightly as you can. Remember that you don't need to get a perfectly neutral background if your screen stretch is far more extreme than your real stretch will be.

Final head-banging stretches for the faintest detail I do in Ps using layers. I make a copy layer and work on the bottom, top invisible. In Curves, pin the background sky where it is (cursor on the sky, Ctrl-click) and fix it just below that with a point, then lift the curve just above the background marker. This will stretch everything above the background, including stars, but you then go to the top layer and erase it over the stretched items you want to see in the final image (galaxies, nebulosity etc.). But the top layer will not have the stretched stars in it. This is a method unsuitable for images with widespread nebulosity but it's good for galaxies, PNs and images with lots of starfield.

Olly
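The idea behind that marker-based gradient map, as a minimal stand-in of my own (this is not PixInsight's actual DBE algorithm): fit a low-order surface through a few pure-background samples and subtract it.

```python
import numpy as np

def subtract_background(img: np.ndarray, samples: list[tuple[int, int]], order: int = 1) -> np.ndarray:
    """Fit a low-order polynomial surface through (x, y) background samples
    and subtract it, keeping the overall sky level. order=1 fits a plane,
    matching the 'broad gradient, no local detail' advice above."""
    exps = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    xs = np.array([x for x, y in samples], dtype=float)
    ys = np.array([y for x, y in samples], dtype=float)
    vals = np.array([img[y, x] for x, y in samples], dtype=float)
    A = np.column_stack([xs**i * ys**j for i, j in exps])
    coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    model = sum(c * xx**i * yy**j for c, (i, j) in zip(coef, exps))
    return img - model + np.median(vals)
```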
  20. I think Vlaiv's point is that it's the same in terms of light collected per 'effective' pixel (an effective pixel being, say, a two by two binned array of four working as one.) Olly
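What an 'effective pixel' means in software terms, as a sketch (hardware binning sums charge on the chip itself; this is the software analogue):

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block into one effective pixel: four times the area,
    four times the light collected per output pixel."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```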
  21. My FSQ85 was optically stunning, though it wouldn't cover a full frame chip. I had no quibbles with it at all and it wasn't too sensitive to focus/temperature variation. I also very much like my fluorite FSQ106N. It gives good star shapes on full frame but it does have quite a fall-off in illumination at the corners, so flats are obligatory. They sort it out fine, though. It is much less temperature sensitive than the newer ED106, which is why I prefer it. I don't use robotic focus. (I don't like computers/software or extra USB leads and I don't fancy paying for three motor focus installations! One of my setups is a dual rig, hence the three focusers.) Both the FSQs create the familiar Tak 'inverted lighthouse beam' effect on some bright stars but this has never bothered me. It does upset some people. Olly
  22. The data's clearly good and will support lots of processing routines and attempts. It would be a great data set to use for experiments in processing or on a processing course. I'd begin by looking at the colour gradient, roughly right-left but a little different in distribution in the two renditions. In Photoshop it would take seconds to measure the background sky values in R, G and B at various points. I don't know how that's done in PI, but a colour-neutral background sky is my first building block. I never take a second step until I have that. In the first image a screen grab shows blue too low on the left hand side and way too low on the right. This can be hard to judge by eye so I always measure. Olly
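A sketch of that measuring step for anyone outside Photoshop (the patch size and the function itself are my own illustrative choices):

```python
import numpy as np

def sample_background(img_rgb: np.ndarray, points: list[tuple[int, int]], r: int = 5) -> None:
    """Print median R, G, B in a small patch around each pure-background (x, y)
    point -- a neutral sky should show the three channels roughly equal."""
    for x, y in points:
        patch = img_rgb[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
        med = [float(np.median(patch[..., c])) for c in range(3)]
        print(f"({x},{y}): R={med[0]:.1f} G={med[1]:.1f} B={med[2]:.1f}")
```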
  23. I wouldn't assume Takahashi would be better. I like Taks, having had the FSQ85 and now using an older FSQ106N Fluorite, but they are not perfect. Whatever the spot diagrams say, in practice the TEC140 with TEC flattener beats the Taks in the challenging blue channel. I've never had tighter stars in OIII than Ha in any scope. Always the opposite. Then again, I've never had a good OIII filter. My Baader is so-so and my Astronomik is poor. Surely your issue is troubling you more out of curiosity than because it impacts on your final images? Olly
  24. Thanks Rodd. All is well and home tomorrow. Olly