Everything posted by ollypenrice

  1. I think a fixed workflow can work for the first few steps. In my case that would be edge crop, DBE or ABE and, sometimes, SCNR green. From my site that's all I need for a decent background sky and colour calibration, but from a polluted site I suppose I'd need Background Neutralization and Colour Calibration as well. (Yes, all this is in Pixinsight! 😁)

     After that, I don't think one workflow fits all, which is where the ability to see what the picture needs comes in. Say I'm ready to start stretching an Ha image. If it's going to be a standalone Ha I'll use a gentler stretch for a more natural look. If the Ha is going to go into the red channel of an LRGB, however, I'll use a different and far more aggressive stretch with a huge initial lift. This will give me extreme contrasts which will look better once 'diluted' by the softer look of the red channel. A softer Ha stretch would be washed out and a bit lame when added to red.

     Another early decision also concerns the initial stretch: is this stretch going to cover the full dynamic range of the image, or will I need separate stretches for the brightest parts, to be blended later? This affects the stretch to be used.

     Yet another early decision: are the stars going to be a problem? If we have a dense field of bright stars we'll need to plan ahead and either try to mask the stars slightly (always difficult) or remove them altogether with a view to replacing them later. Etc!!!! Olly
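A minimal sketch (in Python/NumPy, not Olly's PixInsight workflow) of the 'same data, two stretches' idea in the post above. The asinh stretch and the stretch_factor values are illustrative assumptions, not recommendations.

```python
import numpy as np

def asinh_stretch(linear, stretch_factor):
    """Nonlinear stretch: a larger stretch_factor lifts faint signal harder."""
    return np.arcsinh(linear * stretch_factor) / np.arcsinh(stretch_factor)

ha = np.random.rand(100, 100) ** 4     # stand-in for a linear Ha image, 0..1

standalone_ha = asinh_stretch(ha, 50)  # gentler stretch, more natural look
ha_for_red = asinh_stretch(ha, 500)    # aggressive stretch with a big initial
                                       # lift, to survive 'dilution' in the red channel
```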
  2. Might it be worth putting some time into understanding the principles of the nonlinear tools rather than experimenting with them practically? I know there's a temptation to open an image and push sliders this way and that to see if they can give us something we like, but this can't be the way to do it. (Lots of U-tube gurus seem to think it is, of course. 'Flounder along with me,' seems to be their motto.) One thing I would say about processing, though, is that with only a few exceptions it is best to take small steps. Big improvements, for me, are made from lots of tiny improvements. Olly
  3. I think there are two things to master in processing, whatever you use to do it, one of them pretty obvious and the other less so. 1 - Our processing software contains a bag of pixel-modifying tools. We need to know what they are and how to use them. That's the obvious bit, though thinking about the program that way might help to bring understanding. 2 - Much less obviously, we need to become skilled in looking at our pictures. This ability is one of the big things we learn as we gain experience. As beginners we'll post images with defects that we haven't fixed, not because we couldn't have fixed them but because we haven't seen them. At one time I had a checklist to help me catch the things I hadn't seen. (Background sky colour? Background sky brightness? Star colour? Green cast? Faintest signal fully exploited? Histogram goes all the way from black point to white point? Can the data be pushed still further?) Olly
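Part of that checklist (the histogram items) can be roughed out in code. A sketch in Python/NumPy; the percentile choices and the 0.9 headroom threshold are illustrative assumptions, not Olly's numbers.

```python
import numpy as np

def histogram_report(img):
    """img: float array scaled 0..1. Reports black/white point usage."""
    lo, hi = np.percentile(img, [0.1, 99.9])
    print(f"0.1th percentile:  {lo:.4f} (near 0.0 means the black point is used)")
    print(f"99.9th percentile: {hi:.4f} (near 1.0 means the white point is reached)")
    if hi < 0.9:
        print("Headroom remains; the data could probably be pushed further.")
```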
  4. 🤣 Get excited by a telescope you've looked through and you know you like. Beware of preconceptions when internet dating!!!!!!!!!!!!!!!!!!!!! Olly
  5. In Ps I'm offered the chance to expand a selection by x pixels and then feather it by whatever I like, x or another value. I don't need to be a mathematical genius to understand that, if I expand it by 100 pixels and then feather it by 100 pixels the selection will be expanded and faded by that amount. What is more, when I ask Ps to expand by 100 pixels the selection's expansion is shown on the image in real time so I can see what it means. That's perfect: if it's too big I just reduce it, or vice versa. The big numbers and small numbers you mention are presented visually on the screen so I can see what they mean. Photoshop's genius lies in its communicative power. I think that PI's user interface could be edited by someone who understood communication but, as it stands, it is a communication disaster. It is inexcusably bad and, frankly, strikes me as being proud of the fact. PI's explanations are absolutely perfect for those who already understand what it is doing and don't need it explaining. In this respect it closely resembles the kind of explanations which abound in the autistic world of IT. Olly
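For the curious, the expand-then-feather operation described in the post above can be approximated in a few lines of Python with SciPy. This is a sketch of the general technique, not Photoshop's actual implementation; using feather_px directly as the Gaussian sigma is a rough assumption.

```python
import numpy as np
from scipy import ndimage

def expand_and_feather(selection, expand_px, feather_px):
    """selection: boolean mask. Returns a soft-edged float mask in 0..1."""
    grown = ndimage.binary_dilation(selection, iterations=expand_px)         # expand
    soft = ndimage.gaussian_filter(grown.astype(float), sigma=feather_px)    # feather
    return np.clip(soft, 0.0, 1.0)
```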
  6. My advice would be, 'Don't get too excited till you have looked through one.' Telescopes are full of words; apochromatic, doublet, triplet, Petzval, fluorite... Great things, words (says a former English teacher) but you can't look through them... Olly
  7. 🤣 We could make a film called Carry on Processing. Barbara Windsor minces in and Kenneth Williams says, Oooo, my spline's interpolating now and my continuity order's popped right off the scale. And don't mention my smoothing parameter! Olly
  8. I'm always intrigued by these comparisons between the relative complexity of Photoshop and Pixinsight and there is no right or wrong answer since, if you find one easier than the other, then that's it, you just do.

     I knew nothing whatever about image processing or any kind of digital photography when I started taking astrophotos but, back then, post processing really was an almost universally 'Photoshop activity.' Nearly all the available tutorials were for Ps, as were the bought-in actions. I think the reason for the irresistible rise of Photoshop (which is now a commonly used verb in anglophone countries) derives from its user interface, which is largely based on metaphors drawn from film photography and printing. Unsharp masking, dodge and burn were darkroom techniques, layers come from printing, the eraser from draughtsmanship and so on. These metaphors clicked with me, intellectually, and made me feel at home - though a little overwhelmed at first. However, the consistency of the underlying logic was reassuring.

     Compare that with this randomly chosen bit of Pixinsight menu: What does this mean? Continuity order of 2? If you understand the mathematics behind image manipulation then fine, this will speak to you. But how many imagers are in this position? So to whom do the authors of this menu think they are talking? If they are well entrenched on the Asperger's continuum they won't care... However, terms like opacity, feather, erase, select, minimize, maximize, etc etc, though used metaphorically to describe mathematical manipulations, make intuitive sense to me and create an analogue processing experience. Olly
  9. And there we have the crux of the Pixinsight-Photoshop debate. Masking versus layers and selection. Give me layers and selection any day, but that's quite possibly because I know how to get what I want out of them. I wonder how much the movement away from Photoshop stems from their changed business model, making rental compulsory. Pixinsight did try this on at one stage but I suspect that their legal advisers reminded them that their initial contracts promised free updates for life. For what it's worth I have both Ps CS3 and the cloud rental version and know of nothing I need for AP that isn't there in both. My main reason for having the rental is to have Lightroom, which I don't use for AP. Olly
  10. Yes, I can understand not wanting automated multi-intervention routines which take several steps at once. However intelligently they guess at what you need, they still guess, and I never want that. Then again, I like processing and am never in a hurry with it. One step at a time, for me. Olly
  11. To my mind the key issue in NR is not how you blur pixels but where and by how much. To my eye your noise reduced images do look noise reduced, meaning they've been given too much for my taste, but only by a bit. Clearly the background is much improved. Even once we look away from the background to the fainter nebulosity in the original, we see that it has no need of NR and has suffered from its application. This is why I like Photoshop and layers; I can NR a bottom layer and then use a variety of selection tools to pick out the bits which need a lot, a little and none and use the eraser at various opacities to grade the application accordingly. Olly
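The layers-and-eraser grading described in the post above amounts to a per-pixel weighted blend. A minimal Python/NumPy sketch, with a Gaussian blur standing in for whatever NR tool you prefer, and 'weight' playing the role of the graded eraser work.

```python
import numpy as np
from scipy import ndimage

def graded_nr(img, weight):
    """img, weight: float arrays in 0..1, same shape.
    weight grades the NR: 1 = full NR (background), 0 = none (nebulosity)."""
    denoised = ndimage.gaussian_filter(img, sigma=2)  # the NR'd bottom layer
    return weight * denoised + (1.0 - weight) * img   # graded blend of the two layers
```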
  12. I'm amazed that you'd find Pixinsight easier than...well...anything. However, those who become very expert in its use can match the results from Photoshop. Olly
  13. Are these pictures you have taken or have you been sent them or found them online? I ask because the 'Orion LX200' text really does look, to me, as if it's been photoshopped from a flat surface onto the curved surface of the tube. It looks very dodgy to me but I'm happy to be corrected. Olly
  14. Be very careful with trailers. Many of them bounce around like crazy and are phenomenally destructive of contents, particularly if the sprung to unsprung weight ratio is small, as it almost certainly will be with a light trailer and light payload. In the States people do use trailers for big Dobs but my experience of small trailers (which is fairly extensive) would tell me to beware... Olly
  15. I once bought a refractor at a service station on the M1! I took along the bits needed to set up the classic 'illuminated ball bearing' artificial star test. The test was good so I bought the scope. It was fine - though it wasn't a premium instrument at a premium price. Olly
  16. Without having tried this I wouldn't like to say, but I can see your point. The trouble is, though, that you're not (I don't think) going to get much broadband blue data, meaning reflection components which appear in many nebulae will be blocked. I'd be more inclined to try two colour cameras, one with a multi band filter and one without. I'd then explore different ways of combining them, maybe doing a roughly 50-50 average as a first pass then adding the filtered data at full power but in blend mode lighten. Without hands on experience this remains a (moderately) educated guess. 😄 Olly
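A sketch of the two-camera combination suggested in the post above, assuming two registered, background-matched stacks scaled 0..1. The 50-50 weighting and the lighten blend come from the post; the names and everything else are assumptions.

```python
import numpy as np

def combine_two_cameras(filtered, unfiltered):
    """filtered: stack shot through the multi-band filter; unfiltered: without."""
    first_pass = 0.5 * filtered + 0.5 * unfiltered  # rough 50-50 average
    # 'Blend mode lighten' keeps whichever layer is brighter per pixel,
    # here applied with the filtered data at full power:
    return np.maximum(first_pass, filtered)
```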
  17. As Steve says, you can use Ha or a blend as a luminance channel. However, the LRGB scenario and the NB false colour scenario are not equivalent.

     The L channel catches the same information as the RGB but without the colour differentiation. That's to say, the L channel is made of red+green+blue, which means it has the same components as RGB. It has no effect on the colour balance when done properly but obtains more signal in less time.

     In NB imaging the three channels contain different information, which is the whole point. There is no filter (yet!) which can isolate and capture Ha, OIII and SII at the same time but without colour differentiation. If there were, it would be the NB equivalent of the luminance filter and would save time in the same way.

     In scientific false colour imaging you would not use the strongest channel (usually Ha) for luminance because it would distort the colour balance and, therefore, corrupt the information about gas abundances in the image. (The equivalent objection applies to using Ha as luminance in broadband imaging, which is why I apply Ha to the red channel.)

     What you can do honestly in NB is process a copy layer of the correctly weighted and blended three channels as a luminance layer and apply it over the false colour layer. However, the difference is that you have no more signal to play with in this synthetic lum layer than you do in the false colour layer. In LRGB you do have more signal in L, which is why it's so effective. Olly
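The synthetic NB luminance described above, as a minimal Python/NumPy sketch. The weights are placeholders: in practice they would mirror how the three channels were balanced in the false colour image. Note the point of the post: this layer contains no signal beyond what the false colour image already has.

```python
import numpy as np

def synthetic_lum(ha, oiii, sii, weights=(0.5, 0.3, 0.2)):
    """Channels: float arrays in 0..1. Returns a blend to process as L."""
    lum = weights[0] * ha + weights[1] * oiii + weights[2] * sii
    return lum / lum.max()  # renormalise to 0..1
```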
  18. Because the primary goal of any William Optics telescope is to look pretty on the outside and cost as little as possible? Regarding the seller, I wonder if a solicitor's letter would help to focus his mind? Olly
  19. Not good. I live in an area of high fire risk and it remains a possibility that a holidaymaker, unaware of this, might launch one of these things. However, the risk is trivial next to battalions of halfwits playing with their telephones when they are supposed to be driving. Olly
  20. I was showing some non-astronomical friends a collection of deep sky images last week and it struck me just how astonishing our technology really is. They were amazed that these came from small amateur instruments. Severe trails vanish in Astro Art sigma stacking with as few as a dozen subs. They are a non-problem with CMOS-quantities of short subs. Olly
  21. Chinese lanterns (paper hot air balloons) are often offered as an explanation. Being large and light they respond quickly to changes in wind direction, drafts etc. Launching an uncontrollable burning object into the sky; what could possibly go wrong? Olly
  22. You can make them by using a compass cutter (from graphics outlets) to cut a 'washer' from a piece of black card. You can then make the aperture anything you like. This has worked fine for me. Alternatively, camera retailers do sell step down rings. https://www.bhphotovideo.com/explora/photography/hands-review/wisdom-step-and-step-down-rings Olly
  23. Ah right, I thought that you were finding a sudden change in LP after the flip. If it's a gradual change as the object moves west then, sure, eastern LP will diminish. Olly
  24. You're not trying to take a pretty flat! The idea is that the flat captures the irregularities in the illumination of your system. What you are showing here is not a remarkably irregular flat but a remarkably regular one. If you keep getting similar irregularities, that just means that they are there, which is what you want your flat to deal with. Over-correction is a problem sometimes and not always easy to fix. Are you dark-subtracting your flats as you stack them? (Use darks with the same settings as used to make the flats but with no light getting in.) Blocking all light can be very difficult, so you could try using a master bias as a dark for your flats instead. Olly
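A minimal sketch of the flat calibration described above, in Python/NumPy, assuming float frames already loaded. Per the post, 'flat_dark' can be a master dark shot at the flat settings or, failing that, a master bias standing in for it.

```python
import numpy as np

def make_master_flat(flats, flat_dark):
    """flats: list of float arrays; flat_dark: master dark (or master bias)."""
    calibrated = [f - flat_dark for f in flats]  # dark-subtract each flat
    master = np.median(calibrated, axis=0)       # stack the calibrated flats
    return master / np.mean(master)              # normalise to unity mean
```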