
Posts posted by ollypenrice

  1. 9 hours ago, AKB said:

    Ah yes, but not a lot bigger than the 25.4mm which you originally quoted for the 533. 

    I don't have an axe to grind; I was just trying to correct the diameter stated for that camera. Sorry to the OP for going off topic!

    Tony

    Hi Tony, I haven't found anywhere which gives that as the diagonal. Here, for example:

    https://www.astroshop.eu/astronomical-cameras/zwo-camera-asi-533-mc-pro-color/p,64475#specifications

    Had the chip been the size you thought, it would indeed have been only insignificantly smaller than the APS-C.

    Three loud cheers for Astroshop EU in the link above, because they give the chip size in mm on both the x and y axes. Why on earth is this vital information not stated clearly on every spec sheet? It is infuriating; I've been banging on about it for years. You need these dimensions for plugging into planetaria to simulate the field of view, which is certainly something the OP should do before choosing any camera.
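    For anyone who would rather do the sums directly than via a planetarium program, the field of view follows from the chip dimension and the focal length. A minimal sketch in Python (the 11.31 x 11.31 mm figure is the commonly quoted chip size for the ASI533; the 530 mm focal length is purely illustrative):

        import math

        def fov_degrees(sensor_mm, focal_length_mm):
            # Field of view along one axis of the chip, in degrees.
            return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

        # Illustrative values: an ASI533-class chip (11.31 x 11.31 mm) at 530 mm.
        print(round(fov_degrees(11.31, 530.0), 2), "degrees on each axis")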

    Olly

  2. 2 hours ago, AKB said:

    The diagonal measurement seems to be 15.97mm (according to FLO) so about 0.6 of an inch.

    Your point still stands that bigger is better, of course, but one inch is nearly as big as an ASI2600?

    Tony

    The diagonal of the 2600 is 28.3mm which, for me, seems an awful lot bigger than 15.97mm...
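    For anyone checking the figures, the diagonal is just Pythagoras on the published x and y chip dimensions. A quick sketch using the commonly quoted sizes (11.31 x 11.31 mm for the ASI533, 23.5 x 15.7 mm for the APS-C ASI2600; verify against the spec sheet):

        import math

        # Chip diagonals from the published x and y dimensions, in mm.
        print(round(math.hypot(11.31, 11.31), 1))  # ASI533: ~16.0 mm
        print(round(math.hypot(23.5, 15.7), 1))    # ASI2600: ~28.3 mm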

    I agree with the thrust of Adam J's analysis of budget and priorities.

    Regarding tying yourself to one manufacturer or system, I would say: don't. Leave yourself with as many options as possible, for when software starts acting the goat and for when a new option appears on the market. Although it may not be an option for you, I can tell you, as someone who hosts a number of remotely operated rigs, that a basic desktop near the mount, with a lot of USB ports and no hubs or astro-dedicated computing hardware, is very, very reliable.

    Olly

     

     

  3. 20 minutes ago, Laurin Dave said:

    Hi Olly...  I'm sure you know what's just off the image.... but just in case ;) ..   When can we expect this?  (no need to wait for Mars and a comet)

    Dave

    [attached image: Orion_Auriga_Taurus widefield]

    Oh no! We have a widefield Flaming Star and a huge California-to-M45 already... Madness! But Paul is mulling over something an order of magnitude madder than that...

    😁lly

  4. 58 minutes ago, vlaiv said:

    I would like to point out a few things on this topic.

    - A fixed set of intensities mimics the way we see. We are capable of seeing a larger intensity difference than our computer screens can show - but only because of the anatomy of our eyes: we have an iris that opens and closes depending on the amount of light. If we fix the size of our pupil, there is only a small range of brightness that we can perceive at the same time.

    Having a fixed range of brightness in processing is not a limitation of image processing / display systems - it is, for the most part, a limitation of our physiology.

    - The problem is not showing a very large intensity range in a single image. The problem is with detail / structure. To see detail and structure in something, we need to distinguish enough intensity levels to compose that detail and structure. Since we have a limited "budget" of intensities, we can choose which part of the large range to show in fine detail / granularity. We can opt to show the high intensity range fine grained, or the medium range, or the low range. We can also opt to show, say, the low and medium ranges but with less granularity - or maybe everything with the least granularity. This is a processing choice.

    - There are tools that perform "high dynamic range" - but I would caution against using them. They will show both low intensity and high intensity regions in high fidelity - or rather, the tool will attempt to do this by using a certain technique. Unfortunately this technique is unnatural and, in my opinion, should be avoided.

    It relies on using a non-monotonic stretch of intensities - something our brain easily recognizes in everyday life as unnatural.

    Our brain does not feel cheated if we keep one thing in our processing: the order of intensities. If something is brighter in real life and we keep it brighter in the image, all is good; but if we reverse intensities, our brain will start to rebel and say: this is unnatural.

    This translates into the slope of the curve in processing software - if we keep it monotonically rising from left to right, the image will look natural (it may look forcefully stretched and bad in a different sort of way, but it will look ok in the domain I'm discussing) - so:

    [image: curve rising monotonically from left to right]

    that is ok - we have no immediate objection to the image - but this:

    [image: curve that rises, then falls, then rises again]

    feels unnatural - note how the curve first rises, then falls, then rises again; it is not constantly rising from left to right. This in turn produces some very unnatural looking regions in the image - strange clouds, or parts of buildings that lack detail and look flat / gray.

    The slope of the curve is the level of detail in a given intensity region - the steeper the slope, the more detail; if it's flat, less detail.

    A basic stretch in astronomy images looks like this:

    [image: typical astro stretch - steep at the left, flat at the right]

    and this is for a reason: we want to show detail in the faint regions, at the left of the curves diagram, so we give that part a steep slope, and we don't care about detail in the high intensity regions, so we leave the right part flat.

    Here you can see that, in order to have two regions with a steep slope, we need to go back down and change the direction of the slope - which is no good, as it reverses intensities and confuses our brain (in effect it creates a negative image superimposed on a positive image, and we find a negative image unnatural).

    You might be applying the same technique without realizing it - if you create two layers that you stretch differently and then selectively blend them, you might end up in the above situation, with a reversal in slope.

    I'll try to find a classic example of this in high dynamic range processing and post a screenshot.

    [image: HDR-processed M42 with intensity reversal]

    Here it is - bringing out the central part of M42 with all its detail while still maintaining detail in the faint outer reaches. This is intensity inversion.
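    vlaiv's monotonicity rule is easy to check numerically: sample the overall tone curve and make sure it never decreases. A minimal sketch of that check (the function names are mine, and the asinh stretch is just one example of a curve that stays monotonic):

        import numpy as np

        def is_monotonic(curve):
            # True if the tone curve never decreases, i.e. never reverses intensities.
            return bool(np.all(np.diff(curve) >= 0))

        def asinh_stretch(x, a=0.05):
            # A typical astro stretch: steep in the faint end, gentle in the bright end.
            return np.arcsinh(x / a) / np.arcsinh(1.0 / a)

        x = np.linspace(0.0, 1.0, 256)
        print(is_monotonic(asinh_stretch(x)))       # True: no reversals, looks natural
        print(is_monotonic(np.sin(3 * np.pi * x)))  # False: rises, falls, rises - the HDR failure mode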

     

     

    I agree that HDR techniques usually produce an unnatural look, and I only use them in astronomy in cases like M42, where the alternative is, to my eye, worse. If the camera itself cannot span the dynamic range, there is no way to present it with any curve.

    In daytime photography I sometimes enjoy using HDR precisely because it does give an unnatural look. It makes the picture look like a picture, so to speak. It presents an object or view in a way that we don't normally see it and this can stimulate the eye-brain to look again and think again. I like to get it to the level where you think, 'There is something slightly odd about this...'

    [two example daytime HDR photographs]

    Of course, you may not like it at all!

    Olly

  5. Just thinking aloud...

    One of the interesting challenges of widefield image processing is that the imager usually has a higher dynamic range in the target than is found in restricted fields of view. We are likely to find patches of obscuring dust which lie well below the background brightness; we have the clear background itself; then nuanced levels of reflecting dust and the full range of emission nebulosity. The dynamic range of the software does not, unfortunately, expand to allow for this. What is more, the imager will want to bring out very large-scale structures in the faint, usually dusty, signal, and doing so consumes more of the available range. This is borne out by the fact that, when I add high-res close-up data, the background of the original processing is almost always much darker than in the widefield.
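    To put a rough number on that budget: with an 8-bit display there are only 256 output levels, and the slope of the stretch decides how many of them each part of the input range receives. A sketch of the accounting (the asinh stretch and the 'faintest 1%' cut-off are illustrative choices of mine):

        import numpy as np

        x = np.linspace(0.0, 1.0, 65536)                           # 16-bit linear input
        stretched = np.arcsinh(x / 0.02) / np.arcsinh(1.0 / 0.02)  # a typical monotonic stretch
        levels = np.round(stretched * 255)                         # 8-bit display output

        # How many of the 256 display levels are spent on the faintest 1% of the input?
        print(np.unique(levels[x <= 0.01]).size, "of 256 levels on the faintest 1% of the signal")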

    Olly

  6. 3 minutes ago, Mr H in Yorkshire said:

    Life is hard for perfectionists, as I was once counselled by a manager. Do what I have done: get in touch with your inner slob and relax. 😉

    What bugs me is the inherent pretentiousness, though. Method. Two syllables. Methodology.  Five syllables. Three of these syllables are redundant and incorrect but hey, man, five syllables have more gravitas than three, right? And, get this, if someone calls you out on it, you can slag them off as a pedant. Win-win!

    Believe me, my inner slob has better things to do!!!

    :grin:lly

  7. (I can't reveal too much about our methodology as it is against the rules of the competition to display method or results prior to the exhibition :( )

    Of course you can't reveal too much about your methodology, because the phrase doesn't make sense! Does a biology student reveal their 'biology'? What you are not allowed to reveal is your method. It shouldn't get to me, but this use of 'methodology' to mean 'method' drives me raving nuts!!!

    :grin:lly

     

  8. 34 minutes ago, glafnazur said:

    Truly an amazing image, Olly. What I really like about it is that it puts everything into perspective: it shows where objects that we know so well are in relation to each other, and their sizes. It always amazes me how big some of these objects are.

    That's the fun of it, I think. You do need to bear in mind, though, that there are distortions of proportion inherent in rendering a curved reality onto a flat picture. Don't take relative sizes as gospel but as approximations.

    Olly

  9. 8 hours ago, Meluhanz said:

    There is plenty to learn. Frankly, all the reading did not teach me what I learnt in the last ~7 days: focal length, pixels, back focus, FOV, ...

    My first ever amateurish picture of M43, with an entirely mismatched tool set. I know it is far from perfect, but I was happy to see it come to life.

    [attached image: first picture of M43]

    Watching those first images come in is truly amazing.

    Olly

  10. 2 hours ago, tomato said:

    Just on the question of the Sy135's (and the RASA8's) amazing ability to capture this much signal in 2 hrs: I'm wondering just how much your dark sky contributes to this. I've just watched a video from the AIC about remote imaging in Chile; the sky at the site records an SQM of 22. From memory, your location is not far from this?

    We hit SQM 22 several times a year but it's not our default, so to speak. We regard anything less than 21.5 as substandard, but workable down to about 21, depending on the project.

    The only place I've ever imaged is here, so I really don't know how our site compares. I do think it would be hard to beat in mainland Europe. Two guys from the professional observatory to our south said they wished they could swap sites with me. :grin: They still managed to discover the first exoplanet, though.
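    For a sense of scale, SQM readings are magnitudes per square arcsecond, so a small difference means a surprisingly large change in sky brightness. A quick sketch of the conversion:

        def sky_brightness_ratio(sqm_darker, sqm_brighter):
            # How many times brighter the sky is at the lower (worse) SQM reading.
            return 10 ** (0.4 * (sqm_darker - sqm_brighter))

        print(round(sky_brightness_ratio(22.0, 21.5), 2))  # ~1.58x brighter at SQM 21.5
        print(round(sky_brightness_ratio(22.0, 21.0), 2))  # ~2.51x brighter at SQM 21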

    Olly

  11. 20 hours ago, tomato said:

    Phew, relieved to see you didn't pick up the Spaghetti Nebula in all its glory, or else I would be selling up now. 😉

    A wondrous image, as always.

    Confession time: I hadn't realized that the frame included the Spaghetti.  On reading your comment, I tracked down an old horse-drawn computer on which I had 33x15 minutes of Canon 200L Ha data. Just the job! Here we go:

    [image: reprocessed widefield with the Spaghetti Nebula blended in]

    Big one here: https://ollypenrice.smugmug.com/Other/Emission-Nebulae/i-Tqb6kBP/A

    Olly

  12. 9 hours ago, Steve Ward said:

    Another stunner sir ... :happy7:

     

    What's the dark nebula with 'eyes' in the top-right corner? It reminded me immediately of the Black Rabbit from Watership Down.

    [image: crop showing the dark nebula with 'eyes']

    Paul spotted your question, Steve, and emailed me with this information:

    The nebula at the top right of O2M is a combination of several LDNs - 1534, 1532, 1528, 1527 and the lower "eye" is IC 2087.

    Olly

     

     

  13. 7 hours ago, Meluhanz said:

    I so want to use the ASI 183 camera that I bought.

    The Star Adventurer Pro manual says a max payload of 11 pounds, so my thought was that it could carry even my Nexstar 8SE, to try it out along with the Orion StarMax 102mm.

    My last resort will be to use my Canon EOS 600D with my 18-55mm lens.

    Unfortunately the 'go to' priority in internet mount discussions tends to be payload. It shouldn't be; the priority should be accuracy. This is what underlies Elp's post above.

    To understand this you need to understand the following: an imaging system has a resolution defined by how many arcseconds of sky it places on one pixel. Plenty of online calculators will give you this:

    https://astronomy.tools/calculators/ccd

    You can increase resolution either by using a longer focal length or by using smaller pixels. The unit is arcseconds per pixel ("PP).* The instability of the atmosphere (the 'seeing') usually means that going below 1"PP is a waste of time.

    Because pixels are getting smaller, this limit is reached at increasingly short focal lengths.

    Now for what this has to do with your mount: in order to capture the resolution of the system in the real world, your mount needs to track with an error about half as large as your pixel scale. That is a very small error. If your imaging system is working at 2"PP, you need to track with no more than 1" of error. The native error of an unguided mount like the EQ6 is between 20 and 30 times larger than that. Ouch! That is why we use autoguiders, which will bring an EQ6 down to an error of between 0.5" and 1".

    If you image at a scale finer than your mount's accuracy will allow, you might as well image with a shorter focal length: you will capture the same details and have a wider field of view. The Star Adventurer is designed for low resolution imaging. You may be within its payload limit but you are vastly beyond its tracking precision.
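    The underlying arithmetic is simple enough to sketch (the 206.265 constant converts the µm/mm ratio into arcseconds; the 2.4 µm pixel and 800 mm focal length are illustrative, roughly an ASI183-class chip at a reduced-SCT focal length):

        def pixel_scale(pixel_um, focal_length_mm):
            # Image scale in arcseconds per pixel.
            return 206.265 * pixel_um / focal_length_mm

        scale = pixel_scale(2.4, 800.0)  # ~0.62"/pixel - well below the ~1"PP seeing limit
        print(round(scale, 2), 'arcsec per pixel')
        print(round(scale / 2, 2), 'arcsec max tracking error (half the pixel scale)')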

    Olly

    * In casual daytime photography discussions, 'resolution' is often used with sloppy inaccuracy to mean the pixel count on the chip. This is nonsense.

     

  14. The Star Adventurer is a light, portable mount intended to carry cameras and lenses. Even if your reducer works, your focal length will be about 800mm, a thoroughly telescopic focal length requiring a thoroughly telescopic mount. Why not buy a used prime camera lens and an adapter to fit your camera, and try that on the Star Adventurer? You cannot expect it to track at 800mm unguided; my £6000 Mesu mount could not do that.

    Olly

  15. 22 minutes ago, Laurin Dave said:

    Marvellous image Olly.. I think you need to get it printed up and placed on the side wall of Les Granges! Great detail at full res... spotted a little Crab, a trace of Spaghetti, plus countless more objects which this image places in context.

    Dave

    More pasta would be nice but, unfortunately, it really doesn't show well in OSC. You never can tell: some Ha targets come out in a blaze of light, but a few play hard to get, and we don't have an NB option at present.

    I was curious about the substantial nebula below the Monkey Head. Large and bright, yet I had no idea what it was and had to ask Paul. It's Sh2-261 and it gets next to no attention. I found this nicely done archived image on here and, again, the comments express the same surprise and unfamiliarity. I certainly think we should do it in the RASA and blend it in here, too.

    Olly

  16. I have used two Mesu friction drive mounts and consider them perfect. Whatever my budget, I would never choose another make, and I host two of the most obvious rivals in my robotic sheds.

    The Mesu has a two-stage reduction and I think the metallurgy is very important.

    If Mesus slip, I haven't noticed. My number one mount lost not one single sub to guiding error in over ten years of commercial use. I know that this is hard to believe but there it is.

    Olly
