Everything posted by vlaiv

  1. What sort of composition of Ha and OIII did you use? I can see two issues with the right image that don't "appear" on the left one. The first is related to how the human eye sees color - it is a matter of perception. Not all colors, when saturated to maximum, have the same "brightness" (we don't see them as equally bright). Here is an image that shows this: if you don't pay attention to the numbers and just look at the two sets of circles, you will notice that the left set looks equally bright as the right one (only missing the color - the left set is what the right set looks like in monochrome). The numbers are percentages of full brightness. So red carries about half the perceived brightness of white of the same intensity. When you stretch your data in mono / black & white (Ha data, for example) and then apply red color to it, it will lose some of the perceived brightness because you are in fact just coloring it. That is the reason the left image looks more pronounced than the right, although the respective intensities of white and red are equal. Another thing that does not look right in the right image is color gradients. When you have a mono image that is full of nebulosity, it is hard to tell that there is a gradient - our brain just interprets it as part of the nebulosity. Once you add color, gradients become obvious as "out of place" color - that is another thing happening in the right image. You need to remove those gradients to get a better looking image.
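To put a rough number on that "red carries about half the brightness of white" point, here is a minimal Python sketch - my own illustration, using CIE L* lightness computed from sRGB as a stand-in for perceived brightness (not a reproduction of the comparison image from the post):

```python
# Rough illustration (assumption: CIE L* lightness as a proxy for perceived
# brightness) of why fully saturated red looks roughly half as bright as white.

def srgb_to_linear(c):
    """Undo the sRGB gamma for a channel value in 0..1."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r, g, b):
    """Rec. 709 luminance of a linear-light RGB triple."""
    rl, gl, bl = (srgb_to_linear(x) for x in (r, g, b))
    return 0.2126 * rl + 0.7152 * gl + 0.0722 * bl

def lightness(y):
    """CIE L* (0..100) from relative luminance (0..1)."""
    return 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y

white = lightness(relative_luminance(1.0, 1.0, 1.0))  # ~100
red = lightness(relative_luminance(1.0, 0.0, 0.0))    # ~53
print(f"white L* = {white:.0f}, red L* = {red:.0f}")
```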
  2. Ah, forgot to add: depth of focus layering is also used in macro photography with fast lenses - maybe look that up online to see if there is any free software you can use? It is often called focus stacking as well. Here is what a quick search gives on this topic: https://www.cambridgeincolour.com/tutorials/focus-stacking.htm
  3. It seems that the above link has enough info on the topic I wanted to touch on - mind you, I know next to nothing about how it works with microscopes, so I'll just talk about resolution. In my initial answer I mentioned that there are two ways of seeing an image on the screen (there is actually a whole range in between, but those two are significant "points" on this scale) - 1:1 and screen size. If you view an image at "screen size", then resolved detail and the size of objects will be related to two things - the size of your screen (and its pixel count) and the physical size of your sensor. The number of pixels on the sensor will not matter, as the image will be either enlarged or reduced to fit the screen. The "zoom factor" is therefore a function of the device's screen size. With resolved detail it's a bit more complicated and depends on the pixel resolution of the screen and the camera. If the camera has a lower pixel count (number of megapixels) than the screen, it will be the limiting factor; if it has more, then the screen pixel count becomes the limiting factor. This is all provided that the microscope's resolving power is greater than or equal to what the sensor can record (sampling rate - also mentioned above). Huh, this is becoming really complicated really fast. I was hoping to explain it in simple terms, but I'm not sure it can be done. I think we need to approach this from the other end. I'll again make a number of points - it's sort of easier for me that way.
     1. Target size and sensor size. With telescopes it is about mapping angles on the sky to the sensor - the target is at "infinity". With a microscope, the target is at a finite distance and has some physical size. The sensor has a finite physical size as well. Magnification in this case is the ratio of two things - the physical size of the target (expressed in mm or µm, for example) and the size of its image on the sensor, or in the focal plane (again expressed in mm, µm or whatever units of length). Let's say that you observe an object that is 50µm in size and you do that at x100 magnification. The image of that object in the focal plane will be 50µm * 100 = 5000µm = 5mm. It will be 5 millimeters long (diameter, side, whatever dimension we are talking about). If the sensor is large enough (some sensors are, for example, 7.4mm x 5mm - that would be the ZWO ASI178 sensor), it will capture the whole image.
     2. Pixel count of the sensor. Let's imagine that the object is 50µm long and we observe it at x100 - it will then cover 5mm on our 7.4mm wide sensor, so there is plenty of room. How many pixels long will that image be? You can answer this question in two ways. The first is to use the camera's pixel count (megapixels, or rather the width x height spec); the other is to use the pixel size (sometimes called pixel pitch). Using the first approach - the ASI178 has 3000 x 2000 pixels, so we can use a proportion to get how many pixels the object will take up: 3000 : 7.4mm = X : 5mm (sensor pixel count : sensor width = target pixel count : target width) => X = 3000 * 5 / 7.4 = ~2027 pixels wide. The other approach is to use the pixel size. This camera has 2.4µm pixels, so the size of the target in pixels will be 5mm / 2.4µm = 2083.333. These two differ a bit, so which one is correct? In fact both are probably slightly off, as manufacturers never give you the precise sensor size or pixel size - this sensor might really be 7.3982mm wide and the pixel size could be 2.3925µm or something like that. They always round things, but it is good enough for a rough estimate. We can say that the image of our object will be roughly 2000 pixels across (give or take).
     3. Actual resolution / detail in the image. This is the tricky one, and I have no clue how to go about it with microscopes. It is related to the sharpness of the objective and other things I know nothing about with microscopes, but it limits how much detail you can have. You can't zoom to infinity and get a sharp image - at some point there simply won't be any more detail in the image even if you zoom in more (same as with telescopes - there is a maximum magnification that can usefully be used). The given link, as far as I can tell, provides some guidelines on this. It will limit the pixel size that you should use - any smaller pixel will just make the object larger in pixels (in number of pixels across - see point 2), but the image will look blurry.
     4. Now that we have our image with a certain number of pixels - in our example roughly 2000 pixels - we need to look at it on a screen. How large will it be? That depends on how the image is presented on the screen (the viewing application). Let's say that we use a tablet with a 1280 x 720 resolution. If you use screen size and put the whole image on screen, then the object will take up the same relative amount of space as it took up on the sensor. The sensor width was 7.4mm and the object image was 5mm (or the sensor pixel count was 3000 and the object roughly 2000 pixels), so it took up about 2/3 of the sensor width. It will take about 2/3 of the screen width as well (this is why we call it screen size - the sensor image is just mapped to the screen, so object sizes in the image are relative to the size of the screen). The actual object size in pixels will be 1280 * 2/3 = ~853px. Although you have a 6MP camera and used quite a bit of zoom, you ended up observing the object with only ~853 pixels across in this case. What if we use the same tablet but our viewing application allows 100% zoom, or 1:1 pixel mapping? Then you won't be able to fit the whole object onto the screen, as it is ~2000px across while the screen shows only 1280. You will only see a piece of the object at any one time and will have to pan / scroll around to see all parts of it (but never the whole thing at once). This shows you the best "resolution", or the most detail (provided the system could resolve the target to that level). There is a range of zooms between these two points (and actually further out on both sides) - you can zoom out beyond screen size, where the image becomes smaller and applications usually fill the rest with a black frame or similar, and you can zoom in beyond 100%, where stuff on screen gets larger but there is no additional detail and things just get blurry. Best is to keep the zoom between 100% and screen size - that way you can see things at best resolution, fit the whole object of interest on screen, and also observe the whole "field of view" (at screen size). Again, I hope the above is understandable.
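For reference, here is the arithmetic from points 1, 2 and 4 as a short Python sketch, using the numbers from the example above (a 50µm object at x100, an ASI178-like 7.4mm / 3000px wide sensor, a 1280 x 720 tablet screen); the exact ratios come out slightly different from the rounded figures in the text:

```python
# Sketch of the worked example above: object size in the focal plane,
# in sensor pixels, and on screen at "screen size".

object_size_um = 50.0
magnification = 100.0
sensor_width_mm = 7.4
sensor_width_px = 3000
screen_width_px = 1280

# 1. Size of the image in the focal plane
image_size_mm = object_size_um * magnification / 1000.0            # 5.0 mm

# 2. Size of the image on the sensor, in pixels
image_size_px = image_size_mm / sensor_width_mm * sensor_width_px  # ~2027 px

# 4. Size on screen at "screen size" (whole sensor width fitted to screen width)
on_screen_px = image_size_mm / sensor_width_mm * screen_width_px   # ~865 px
# (the ~853 px quoted above comes from rounding 5 / 7.4 to 2/3)

print(f"focal plane: {image_size_mm:.1f} mm, on sensor: {image_size_px:.0f} px, "
      f"on screen (fit to width): {on_screen_px:.0f} px")
```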
  4. Not sure what the question is, so I'll just state some facts and maybe those will cover your question and give you an answer. If not, we can expand on that.
     1. MP count is related to the number of pixels on the sensor. 1.3MP stands for roughly 1,300,000 pixels. 1280 x 1024 means the count of pixels in each row and the number of rows. Multiply those two figures and you get the total number of pixels: 1280 * 1024 = 1,310,720, or roughly 1.3MP. On the other hand, 1600 x 1200 is in no way a 5MP camera, as 1600 * 1200 = 1,920,000 - a 1.9MP camera. For a camera to be 5MP you need something like 2500 x 1900 or a similar "resolution".
     2. Resolution is such a broad term and is used in so many contexts that it sometimes leads to confusion. One usage relates to the pixel count of the sensor - more pixels on the sensor means more "resolution". It is used in a similar sense for computer screens - higher resolution means more display pixels / dots on the screen (HD-ready resolution is lower than full HD, 1280x720 vs 1920x1080, and that is lower still than 4K resolution, 4096 x 2160). Another usage of the word resolution is how much detail there is in the picture in the first place. A blurry image has low resolution regardless of the number of pixels used to represent it. That can lead to funny constructs like: "This low resolution image is recorded in high resolution" (the first usage is level of detail, the second is pixel count). In astronomy (or specifically astrophotography) we have additional meanings of the word (which could apply to microscopy as well) - the resolution of the telescope / system (how much detail it can potentially reproduce in the focal plane) and the sampling resolution, which gives the ratio of sky angle to pixels on the sensor (after projection), expressed in arc seconds per pixel.
     3. An image can be viewed on a computer screen in a few different ways. One of those is 1:1, or 100% zoom (sometimes referred to as native resolution - yet another usage of the term resolution - by the way, native resolution can mean something else entirely). This means that one image pixel corresponds to one screen pixel. The size of the portion of the image that can be shown like this is determined by the screen resolution. If you view your image on a 1920 x 1080 screen but the image is 1280 x 1024, it will not take up the whole screen. If the image is, on the other hand, something like 3000 x 2000 and viewed 1:1 on 1920 x 1080, you will only see a 1920 x 1080 portion of the image and will be able to "pan around". You can also view the image in "another mode", often referred to as "screen size". That is the image adjusted so it fits as much of the display screen as it can. If the image is smaller (in pixel count) than the screen resolution (again pixel count), it will be enlarged; if it is larger, it will be reduced in size. In either case the displayed size will match the screen resolution (both uses of resolution meaning pixel count). Hope the above is not too confusing and that it answers your question.
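A quick check of the megapixel arithmetic in point 1 above (2592 x 1944 is just one commonly used 5MP resolution, close to the "2500 x 1900 or similar" figure, given for illustration):

```python
# Megapixel count is simply width * height.
def megapixels(width, height):
    return width * height / 1_000_000

print(megapixels(1280, 1024))  # ~1.31 -> a "1.3MP" camera
print(megapixels(1600, 1200))  # 1.92  -> ~1.9MP, not 5MP
print(megapixels(2592, 1944))  # ~5.04 -> a typical true 5MP resolution
```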
  5. HSO for me! Although I have nothing against the green of SHO, the second one is more appealing to me, and I have a sense it is better processed as well.
  6. Can't help much there, except to somewhat confirm your experience. Two months ago I made my first purchase from FLO. Got a tracking code and everything, and a month later the item arrived (how's that for speedy service) - of course, it was sent to Bosnia first and then returned to the UK, only to be sent to Serbia after that (the proper destination). That was the one and only time I've purchased something from a UK retailer, but not the only time I've had that happen. My TS80 APO went on a trip to Romania once, and that gave me quite a scare, as such an item can go out of collimation in transport, but luckily everything was fine and it reached me after three weeks. I've purchased a number of items from TS in Germany and that was the only time a shipment made a detour.
  7. OK, it looks like I'm on to something in solving this puzzle. For a moment I thought there couldn't be any other reasonable explanation except that maybe the generated numbers are not random in the true sense. It occurred to me that computers use pseudo random number generators (PRNGs) - and these tend to be cyclic in nature (although for most purposes they are indeed random). Maybe, by pure chance, I tested my algorithm on a configuration of data that somehow exploits the cyclic nature of the PRNG. To check, I devised an experiment - generate a random noise image of 900x900 pixels and produce the same data set in two ways: one would be fractional binning followed by splitting the image into 4 subsets and measuring one of them, while the other would be to first split the 900x900 image into 9 subsets (3x3 pattern) and then use subsets (0,0), (0,1), (1,0) and (1,1) for the weighted summation. These should produce two identical images (which can be verified by subtracting them and looking at the difference - it should be an exactly zero-filled image if the two are the same), and in no way should they have different standard deviations. By examining the difference I was hoping to get an idea of what might be happening. I started off by generating a 900x900 Gaussian noise image and did a x1.5 bin on it. To verify that it produces a x2 SNR improvement I ran the stats on it and - the result was a stddev of 0.555! That is not what happened before! I repeated the test with another size just to make sure, and yes - again the result was a stddev of 0.555. How on earth did I get a stddev of 0.5 the last time I tried this? Then I remembered my hypothesis about the cyclic nature of the random generator, and sure enough, I recalled that I had used a 512x512 test image in the previous case where I got a stddev of 0.5. So I tried it again - the result was also 0.5 - and then I realized an important fact: a 512-pixel side aligns rather well with a 16-bit value (a WORD in computing terms), and also with a 32-bit value. If the PRNG were working on 16-bit or 32-bit boundaries, it could lead to cycling and the values would not act as true random noise. To check the findings, here are measurements for 100x100, 200x200, 256x256 (again base 2 - the 16-bit case), 400x400, 512x512, 800x800 and 1024x1024, all binned x1.5 and measured for noise: not quite what I expected, but I think it brings me closer to the solution. The 100, 400 and 1024 sides "bin properly", while 200, 512 and 800 don't. A 900 side also bins properly. It looks like images whose side has a remainder of 2 when divided by 3 return the wrong result. This points to an implementation quirk rather than the PRNG issue I described above. Issue solved! It is due to a "feature" of the algorithm I implemented that the tests show this behavior. Because the binned result will not always match the original sides, I implemented an "offset" - which I totally forgot about. If, for example, we want to bin 7 pixels by x1.5, the resulting image can have 4 pixels. One way to do it is to use pixels 1, 2, 3 to produce the first two output pixels, pixels 4, 5, 6 to produce the next two, and disregard pixel 7 as extra. Or one could choose not to start at a pixel boundary but part-way in (starting at 1/3 of the first pixel and finishing at 2/3 of the last pixel) so that the binned image is better centered on the original and no pixels are wasted. It also means that the reasoning above with aligned pixels is only valid in some cases and not in others (where that offset is present).
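For anyone who wants to reproduce the 0.555 figure, here is a minimal numpy sketch of a naive x1.5 fractional bin - my own simplified version without the centering offset described above, not the original implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (900, 900))  # unit-variance Gaussian noise

def bin_x1_5(a):
    """Naive x1.5 fractional bin (no centering offset): every 3x3 block of
    input pixels becomes a 2x2 block of output pixels, each output pixel being
    the area-weighted average of a 1.5 x 1.5 pixel patch."""
    h, w = (a.shape[0] // 3) * 3, (a.shape[1] // 3) * 3
    b = a[:h, :w].reshape(h // 3, 3, w // 3, 3).transpose(0, 2, 1, 3)  # 3x3 blocks
    out = np.empty((h // 3, w // 3, 2, 2))
    c, e, m = 1.0, 0.5, 0.25  # weights: full pixel, half pixel, quarter pixel
    out[..., 0, 0] = c*b[..., 0, 0] + e*b[..., 0, 1] + e*b[..., 1, 0] + m*b[..., 1, 1]
    out[..., 0, 1] = c*b[..., 0, 2] + e*b[..., 0, 1] + e*b[..., 1, 2] + m*b[..., 1, 1]
    out[..., 1, 0] = c*b[..., 2, 0] + e*b[..., 2, 1] + e*b[..., 1, 0] + m*b[..., 1, 1]
    out[..., 1, 1] = c*b[..., 2, 2] + e*b[..., 2, 1] + e*b[..., 1, 2] + m*b[..., 1, 1]
    out /= 2.25  # normalize by covered area so a uniform signal keeps its level
    return out.transpose(0, 2, 1, 3).reshape(h // 3 * 2, w // 3 * 2)

print(np.std(bin_x1_5(img)))  # ~0.555, i.e. ~x1.8 SNR improvement, not x2
```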
  8. I've seen this effect a number of times before and still haven't figured out why it happens. I know it is related to read noise, but I'm not sure in which way. You say that each of your subs on its own looks fine when stretched? How about the aligned subs before stacking - do they show the pattern? It could be that this pattern is created from read noise by the alignment process (if bilinear interpolation or something like that is used)?
  9. That one really bugs me. I simply can't figure out the difference. When we do fractional binning and split the image, this should happen: we take only the red squares (not perfectly aligned to the edges - it should be seen as taking the whole top-left pixel and the respective areas of the other 3 pixels). That is exactly the same as taking 4 subs and adding them with weights 1 : 1/2 : 1/2 : 1/4. Yet the two approaches differ in resulting standard deviation.
  10. No, I have not - I've done something similar to what you've done, only with images - created 4 images with Gaussian noise and weight-added them the same way one would create a fractionally binned pixel - and yes, that produces a x1.8 SNR increase. I'm still unable to figure out why there is a x2 SNR increase once I fractionally bin and split the result so there is no correlation between pixel values.
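That x1.8 figure can be reproduced in a few lines (a sketch of the weighted addition with 1 : 1/2 : 1/2 : 1/4 ratios on unit-noise images, not the original test code):

```python
import numpy as np

rng = np.random.default_rng(1)
subs = [rng.normal(0.0, 1.0, (512, 512)) for _ in range(4)]  # unit-noise "subs"
weights = [1.0, 0.5, 0.5, 0.25]

stacked = sum(w * s for w, s in zip(weights, subs))
signal_gain = sum(weights)        # uniform signal scales by 2.25
noise_gain = np.std(stacked)      # ~sqrt(1 + 0.25 + 0.25 + 0.0625) = 1.25
print(signal_gain / noise_gain)   # ~1.8x SNR improvement
```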
  11. Just wanted to point out that dynamic range is not really something people should concern themselves with. In fact, neither is read noise, provided it is in a reasonable range. Of course, it is better to have a camera with lower read noise, but that matters much more in planetary imaging, for example, where exposure lengths are very short. With DSO imaging there will be a small difference between a camera with 1.3e read noise and one with 2e read noise, as the total impact can be controlled with sub duration. Let's say that you want to image for a total of 4h. If you use 2 minute exposures on the 1.3e noise camera, what should the exposure length be on the 2e noise camera to get an equal result (everything else being equal)? That is rather easy to calculate. With 2 minute exposures you will have a total of 120 exposures, so the total read noise will be sqrt(120) * 1.3 = ~14.24e. If you want the same total read noise from the 2e noise camera, then you need to shoot (14.24 / 2)^2 = 50.7 exposures - let's round that up to 51 - so your exposure length needs to be 240 / 51 = ~4.7 minutes (this is the same as scaling the sub length by (2 / 1.3)^2, the ratio of the read noises squared). Due to the nature of the objects we shoot and the light levels involved, we should not base our sub duration on the saturation point of the sensor, and yes, for most targets one might want to take a few short filler exposures to avoid clipping star cores - that occurs almost always and not only on bright targets like M42 - I don't see anything wrong with that (it is not a shortcoming of the sensor but rather the adoption of a certain workflow). Hence, dynamic range is not really significant in DSO imaging. It is important in daytime photography, where we deal with single shots and more dynamic range means more options in post processing (exposure correction, recovering things hidden in shadows, etc.). In any case, I agree with you that this is an interesting sensor, but I would not view it as a competitor to the ASI183 - rather as a complement. It allows people to choose between sampling rates / resolutions for their particular setup without much of a price difference for the same FOV.
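The sub-length calculation above, as a small helper function (a sketch of the same arithmetic, not a general exposure calculator):

```python
import math

def equivalent_sub_minutes(total_minutes, sub_minutes, read_noise_1, read_noise_2):
    """Sub length needed with read_noise_2 to match the total read noise you get
    with read_noise_1 and sub_minutes-long subs over the same total time."""
    n_subs_1 = total_minutes / sub_minutes                 # 120 subs in the example
    total_rn_1 = math.sqrt(n_subs_1) * read_noise_1        # ~14.24e
    n_subs_2 = (total_rn_1 / read_noise_2) ** 2            # ~50.7 subs
    return total_minutes / n_subs_2                        # ~4.7 minutes

print(equivalent_sub_minutes(240, 2, 1.3, 2.0))
# equivalently: 2 * (2.0 / 1.3) ** 2  ~= 4.7
```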
  12. Not sure if your calcs are right. At 200 gain, e/ADU is about 0.25, right (maybe closer to 0.3)? With a 14-bit ADC you can then only get about 5000e worth of signal, so not really an 8K full well. Dynamic range is in fact about 12 stops (~5000 / 1.3 ≈ 3800, log of that with base two ≈ 11.9), but I'm not sure that is meaningful in AP, since you get a much larger dynamic range by stacking subs (and hence you can target whatever dynamic range you need).
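For the dynamic range figure, a two-line check (assuming ~0.3 e/ADU at that gain, per the estimate above):

```python
import math

full_well_e = 16384 * 0.3   # 14-bit ADC at ~0.3 e/ADU -> roughly 5000e of usable signal
read_noise_e = 1.3
print(math.log2(full_well_e / read_noise_e))  # ~11.9 stops of dynamic range
```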
  13. I'm not sure; it's an interplay between the scope's field curvature and the "expected" curvature of the flattener. It usually manifests itself in the corners of the image and can't be seen as you move towards the center - this is on sensors with a diagonal of about 20-30mm, so the affected area is usually 10+ mm from the optical axis.
  14. In any case, I think we can use your comparison as a good indicator of edge performance of the eyepieces - all the issues discussed above will probably have a rather small impact at these magnifications.
  15. That is not going to help much - it's a 12mm eyepiece, which means the field stop is about 17.1mm, so the field edge is only about 8.6mm from the axis - about a third of the distance from the field center compared to the edge of a 46mm field. Even if you introduce a large distance error for the flattener, that part of the field will not be significantly distorted.
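The field stop numbers follow from the usual small-angle estimate. Here is a sketch of it, assuming an 82-degree AFOV for the 12mm eyepiece (the AFOV is my assumption - it is roughly what makes a ~17.1mm field stop come out; real field stops differ a bit due to eyepiece distortion):

```python
import math

def approx_field_stop_mm(focal_length_mm, afov_deg):
    """Rough field stop estimate: eyepiece focal length times AFOV in radians."""
    return focal_length_mm * math.radians(afov_deg)

fs = approx_field_stop_mm(12, 82)   # assumed 82-degree, 12mm eyepiece
print(fs, fs / 2)                   # ~17.2mm field stop, edge ~8.6mm off axis
print((fs / 2) / (46 / 2))          # ~0.37 - about a third of the way out in a 46mm field
```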
  16. Not sure why that would be so. If you place the camera lens at the exit pupil, all light from the EP will certainly hit the camera lens without any issues.
  17. I don't think that distance will be much of an issue. It can only contribute a bit of spherical aberration, but with most of these eyepieces and their focal lengths the impact is too small to show in images. The field stop will be in focus - close focusing just moves the focal plane further out but does not change the focal plane of the eyepiece (which is ideally at the field stop to keep it sharp). On the other hand, use of a field flattener that has an exact operating distance could be an issue for edge performance with some of the eyepieces. You did get roughly the right spacing for it - it should sit about 128mm from the focal plane, and the GSO diagonal adds 110mm of optical path + 15mm of optical path for the SCT/M48 adapter, which makes about 125mm total - let's call that close enough. This would generally be OK if all eyepieces had their focal point right at the shoulder, but eyepieces vary quite a bit and some may need up to 10mm of adjustment either way - that would make the field flattener introduce significant astigmatism, which would show up as apparent EP astigmatism.
  18. I was under the impression that you were using a DSLR and a lens. In case you have one, we can easily calculate the focal length of the lens you would need to capture all eyepieces up to the field stop. Let's take 100 degrees as the maximum AFOV currently available. If you use an APS-C sized chip (most likely in a consumer DSLR), that is about 28mm across. You would need something like a 12mm lens to cover it. By the way, what scope did you use it with? And what was the distance to the rulers?
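The ~12mm figure follows from simple geometry (a sketch assuming a rectilinear lens and measuring across the 28mm sensor width):

```python
import math

def lens_fl_for_afov(sensor_width_mm, afov_deg):
    """Focal length whose field of view across sensor_width_mm covers an
    apparent field of afov_deg degrees."""
    return (sensor_width_mm / 2) / math.tan(math.radians(afov_deg / 2))

print(lens_fl_for_afov(28, 100))  # ~11.7mm -> "something like a 12mm lens"
```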
  19. I did not find anything wrong with the GSO 32mm in an F/6 8" dob, but then again I did not look for it either. It served me fine for a long time; I still have it, although I don't use it anymore because I replaced it with the 28mm 68 degree ES. The latter gives me a better view because of the higher magnification (makes the sky darker in LP) and of course the nicer 68 degree AFOV vs the ~50 of the GSO. I also thought about getting the largest possible FOV for another scope (an F/10 1000mm one), so I examined all available options. My initial idea was to use a x0.67 reducer with the F/10 scope and a 28mm EP to get maximum TFOV - but that proved rather difficult as the distance needed is 85mm - small enough for 2" diagonals, larger than 1.25" (and the reducer needs 2" to be able to show that much sky). In any case, getting the largest possible TFOV requires a rather large field stop - around 46mm. Most eyepieces are reported soft around the edges, but that is no wonder because most scopes are not well corrected over such a large circle - most perform well up to 30mm or so. That makes me wonder whether it is at all feasible to get a large FOV at long focal length, or whether it is better to simply use another scope with a short focal length in the first place.
  20. In fact, Surplus Shed has loads of different lenses rather cheap.
  21. How about this lens then - still very cheap: https://www.surplusshed.com/pages/item/L3465.html or maybe this: https://www.surplusshed.com/pages/item/L3710.html
  22. A quite cheap lens for collimation can be purchased here (not sure about the quality, but it looks like a lens for finders): https://tavcso.hu/en/productgroup/kieg_lencsek - 50mm, FL 182.8mm
  23. I think that is what was meant above. A UHC filter is a very nice observing tool - it helps when observing certain types of nebulae and I think it's worth having. A barlow is also a nice tool, but having had a couple of them, in the end I decided that I like short focal length eyepieces more than a barlow + EP combination. If you do end up getting a barlow or a telecentric amplifier lens, then you probably won't need anything below 10mm in your EP collection unless you plan to use that particular eyepiece without the barlow. A short focal length EP + barlow will give too much magnification. I'm guilty of using it like that - just as a "let's see what can be done" gimmick, but never for regular observing. If you are looking for cheaper comfortable eyepieces in short focal lengths, then do have a look at these: https://www.firstlightoptics.com/skywatcher-eyepieces/skywatcher-uwa-planetary-eyepieces.html - I had one of those, and while I was not particularly impressed with the optical quality, it was indeed better than the stock eyepieces and served me well until I tried better / more expensive eyepieces. Later, in discussion with other members, I came to the conclusion that I might have had a rather poor sample. Mine was the 7mm one. For a short focal length EP without a barlow, I recommend that you stay above 5mm for the time being.
  24. Nice image. I would personally move the white point a bit closer to the nebulosity - it will be a bit brighter with a bit more dynamics, and stars will be sharper and pop out more. Here is an example of what I'm talking about, to be clear (a quick touch-up on the image you posted, cropped - hope you don't mind):