Everything posted by vlaiv

  1. You do realize that you are referring to this same thread?
  2. I'll hazard a guess that it is more accurate than the eyeball Mark I. In principle, camera + lens + processor is quite enough for an SQM meter, and iPhones tend to be rather uniform, as there is only a small number of models (unlike, for example, Android phones) - which means that calibration information is easily bundled with the app. I think it is pretty accurate - at least to one decimal place, like SQM 21.1, if not two.
  3. It certainly is, but it tends to be true only for faint(er) objects. When the object and the sky are equally bright, their shot noise is comparable, but when the object is much brighter or much darker than the sky, the shot noise of either the object or the sky tends to dominate over the other noise sources (noises add in quadrature). This creates two distinct regimes - if the object is brighter than the sky, a change in sky brightness makes no major difference. That is the reason why bright objects like M31, M42 and some globulars can be imaged from heavy LP in a reasonable amount of time - but try to capture something really faint like IFN and you'll need weeks of exposure.
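To put some toy numbers on those two regimes (the electron counts below are made up, purely for illustration):

```python
import math

def snr(target_e, sky_e, read_noise_e=2.0):
    """SNR of a target sitting on a sky background.

    Inputs are electrons collected per pixel in one exposure.
    Target shot noise, sky shot noise and read noise add in quadrature.
    """
    total_noise = math.sqrt(target_e + sky_e + read_noise_e ** 2)
    return target_e / total_noise

# Bright target: sky brightness barely matters
print(snr(target_e=5000, sky_e=100))    # ~70
print(snr(target_e=5000, sky_e=2000))   # ~60 - sky x20 brighter, SNR drops only ~15%

# Faint target: sky shot noise dominates completely
print(snr(target_e=10, sky_e=100))      # ~0.94
print(snr(target_e=10, sky_e=2000))     # ~0.22 - roughly x18 the exposure needed for same SNR
```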
  4. I'm not sure that an SQM needs to be that expensive. For example, it can be done with an Arduino (just a light sensor, an Arduino Nano and a small 3-4 digit LCD display are needed). https://sourceforge.net/projects/arduinomysqmskyqualitymeter/ Sure, the original thing is expensive, but that is mostly the cost of labor rather than materials.
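For a sense of the conversion such a DIY meter performs, here is a hedged Python sketch turning a measured sky luminance into mag/arcsec² (the 108000 cd/m² zero point is the commonly quoted value; the sensor-to-luminance calibration itself is device specific and not shown):

```python
import math

def luminance_to_mpsas(luminance_cd_m2):
    """Convert sky luminance (cd/m^2) to magnitudes per square arcsecond.

    Uses the commonly quoted zero point of ~108000 cd/m^2 for 0 mag/arcsec^2.
    """
    return -2.5 * math.log10(luminance_cd_m2 / 108000.0)

print(luminance_to_mpsas(0.0004))   # ~21.1 - a fairly dark site
print(luminance_to_mpsas(0.004))    # ~18.6 - suburban light pollution
```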
  5. AzGTI? As long as the mount can track reasonably well and keep the planet in the FOV, you can image with it. The only drawback of using an alt-az mount over an EQ one is imaging the Moon in multiple panels. With an alt-az mount the panels will end up slightly rotated relative to one another, but if your stitching app can handle that, then all is fine. You do have to be careful when planning the mosaic so you don't end up with "cracks" between panels.
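If you want a feel for how much the panels will rotate between shots, here is a rough Python sketch using the commonly quoted alt-az field rotation formula (latitude, azimuth from north and altitude below are made-up example values):

```python
import math

def field_rotation_rate_deg_per_min(lat_deg, az_deg, alt_deg):
    """Approximate field rotation rate on an alt-az mount.

    Commonly quoted formula: 15.04 deg/hr * cos(lat) * cos(az) / cos(alt),
    with azimuth measured from north. Result in degrees per minute;
    the sign only indicates direction of rotation.
    """
    lat, az, alt = map(math.radians, (lat_deg, az_deg, alt_deg))
    return (15.04 / 60.0) * math.cos(lat) * math.cos(az) / math.cos(alt)

# Moon due south at 40 deg altitude, observer at 45 deg latitude:
rate = field_rotation_rate_deg_per_min(45, 180, 40)
print(rate)   # ~ -0.23 deg/min, so panels shot 5 minutes apart differ by ~1.2 deg
```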
  6. I know of this comparison - maybe it is not as easy to read? http://www.darkskiesawareness.org/img/sky-brightness-nomogram.gif (SGL no longer allows embedding of http images so I'll post a screenshot as well): However, lightpollutionmap.info is similar to the wiki - only 21.99 is shown as class 1.
  7. I guess the easiest way to tell is to blink images that are further apart in time (not two adjacent subs, but say the first and the fifth that show the feature).
  8. It really looks like a star spike - for a moment I thought it was a bright star just outside the frame. The Owl is in the Big Dipper and for a moment I thought it was Merak or some other bright star - but no, you are right, there is nothing major in that position in the sky. The next idea was an aircraft flyby at low altitude with blinking lights - there are two spikes, by the way, a larger one and a smaller one - so I thought main light / tail light or those lights on the wing tips - but no, you have it in multiple subs. However, it is much wider than the other star spikes in the image, and that can mean two things: 1. the object is not stellar / pinpoint - it has dimensions; 2. it did not move together with the stars - it stayed in roughly the same place, but not exactly the same place. Have you tried to look at the frame-to-frame difference in this phenomenon? We might get a clue if it moves or changes from frame to frame.
  9. In astrophotography, you said it yourself - we don't have an illuminant in the cosmic scene - we only have the light reaching us. Illuminants are only important as far as defining the standard / transform from XYZ and the viewing conditions - they are not important to the recorded signal. Once you have the recorded signal, you just use the defined standard to encode it in sRGB, because it will be viewed on a device that is supposed to work to the sRGB standard (or, if it is calibrated differently, the color transform will be handled by its profile).
You are under the impression that we are supposed to do white balance. We are not. White balance is only needed if you want to match the perceptual side of things - which we don't, since there is no match for the original scene. Imagine you have a very light gray wall and all the other colors in the scene that you are viewing (not imaging) are rich, saturated colors. We will perceive the wall as being white - it will become our perceptual white point. Now take a piece of white paper and put it against the wall - your perception will instantly change: the white piece of paper is whiter than the wall, it will become your white point, and the perception of the wall will turn light gray. Anyone can do the above experiment. Did we change the physical color associated with the wall? No. If we record it and later "replay" it, it will be the same power spectrum - nothing changed about the physical nature of that light.
I'm not trying to capture perceptual color - we can't do that, as we have no profile for "floating in outer space with robot eyes capable of collecting vast amounts of light and looking at that galaxy" - I'm just trying to capture the physical nature of the light. If we capture starlight in a box, let it shine so we can view that starlight, set it on a desk next to our computer screen, and produce the same color on the screen via the method I'm describing - people looking at those two will say: these two have the same color. That is the goal of all of this - color matching, not trying to record a perceptual thing.
Both the forward and backward RGB to XYZ transforms are mathematically related as inverses of one another, so the same rules certainly apply to both. As for the actual definition: the only requirement is that XYZ values are scaled so that D65 has Y of 1. This is because sRGB is a relative color space (all intensities are measured against the white and black points as percentages rather than absolute, boundless values). Other than that, there is no mention of an illuminant in the forward transform - nor, similarly, is there a mention of an illuminant in the backward transform.
That is correct - we are not capturing what the observer experienced. No one can do that, except for the observer to write an essay on the topic, or maybe some strange neurological electrode probe thingy (what can be captured in our brain is really beyond me) - and that is not what documentary photography is all about. In fact, a quick online search on the definition of documentary photography gives this: If you put two people in a room, you might easily get two different experiences of the scene - which one is accurate? However, if you put two cameras in the same room and process the data in the proper way, you should get the same result (or very similar, with maybe some colors being slightly off, depending on each sensor's characteristics / the gamut of the camera).
Oh, it is 100% true - and easily proven by math. 
Here we have the expression for XYZ that depends on the matching functions x_bar, y_bar, z_bar and L - the spectrum of the recorded light (regardless of whether it is emitted from a star or reflected off an object). If we have a sensor whose three raw components have 100% the same spectral response as x_bar, y_bar and z_bar, then both XYZ and RAW will produce the same values for the same spectrum L. You can't have the same expressions with the same numbers produce different results. Ok?
Since we now have a bunch of XYZ triplets on one side and RAW triplets on the other side that are identical (we have different XYZ values for different spectra and different RAW values for different spectra, but each XYZ equals its corresponding RAW), there is only one "number" that has the property: 1 * N = 1, 34 * N = 34, 10001 * N = 10001. I put "number" in quotes because here we are talking about a transform matrix rather than a number. In any case, the solution is always the identity matrix - regardless of which set of spectra (L) you use to derive the matrix.
I made this point to show that any set of spectra can be used to derive the transform matrix - the only difference between set 1 and set 2 will be in the error they produce. If the sensor QE matches the XYZ standard observer matching functions, the error will be 0 in any case. The more the sensor matches the XYZ curves, the less you have to worry about the choice of spectral set used to derive the transform.
Again, that is wrong on the part of that author. There is no illuminant involved in deriving the XYZ color of a star - look at the previous screenshot - XYZ values are integrals of the matching functions times the power spectrum of the light that we want to record. That is it - no illuminant is ever mentioned. XYZ is an absolute color space used to record the color of light. People often confuse this because another description is given: which simply means that we take a light source and shine it on an object - or through a filter - and the resulting spectrum is given by S * I. This is basic physics and has nothing to do with XYZ calculations - it simply states that the reflected or transmitted spectrum will have the original frequencies attenuated by the reflectance / transmittance coefficients of the object / filter. When recording XYZ, it does not matter which light source was used to shine upon the object - the process is the same. The XYZ values will be different - because the light that reaches the sensor will be different - but the process is the same and does not depend on an "illuminant" in any way.
While that is admirable, I must say it's not worth much if one does not do proper color calibration first. What good is all that if the color you are working with is not correct?
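The screenshot referred to above contains the standard CIE integrals, X = ∫ L(λ) x̄(λ) dλ, Y = ∫ L(λ) ȳ(λ) dλ, Z = ∫ L(λ) z̄(λ) dλ. As a toy illustration of the matrix argument (synthetic Gaussian "matching functions" and random spectra, purely for demonstration), a least-squares fit returns the identity matrix whenever the sensor curves equal the matching functions, no matter which spectra are used:

```python
import numpy as np

wl = np.linspace(380, 730, 351)  # wavelength grid in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Toy "matching functions" (stand-ins for x_bar, y_bar, z_bar)
cmf = np.stack([gauss(600, 40), gauss(550, 45), gauss(450, 30)])

# Case 1: sensor QE identical to the matching functions
sensor = cmf.copy()

# Any set of test spectra will do - here 50 random smooth spectra
rng = np.random.default_rng(0)
spectra = np.array([sum(rng.uniform(0, 1) * gauss(rng.uniform(400, 700), rng.uniform(30, 120))
                        for _ in range(5)) for _ in range(50)])

XYZ = spectra @ cmf.T      # "true" tristimulus values (50 x 3)
RAW = spectra @ sensor.T   # camera raw triplets        (50 x 3)

# Least-squares fit of matrix M such that RAW @ M ~= XYZ
M, *_ = np.linalg.lstsq(RAW, XYZ, rcond=None)
print(np.round(M, 6))      # identity matrix - the choice of spectra is irrelevant

# Case 2: if the sensor curves differ slightly, M deviates from identity and the
# residual error then depends on which spectra were used for the fit.
```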
  10. Really nice image and yes, I think you should see about that focal reducer and keep the practice of bin x2 as well.
  11. Hi and welcome to SGL. In principle, yes - but it could come in handy for some people. Goto is useful in a number of situations - when you don't want to search / star hop to your objects, for example. It is also a tracking mount, so it is handy for high magnification, as it tracks objects in the sky to counteract the Earth's rotation. That is mostly useful for planetary and high magnification observing, since at those magnifications targets drift out of view in less than a minute or so and you constantly need to adjust the scope's pointing. The ST80 is a low power scope. It is also a wide field scope. These two properties make manual use of it much more comfortable. A wide field of view makes it easier to find objects, as you look at a large portion of sky at once. Low magnification means that you can observe for several minutes before you need to adjust the scope's pointing. It is really down to preference - if you want your scope to find objects for you and you feel that tracking will be beneficial, then yes, get a goto mount. Something like the AzGTI will hold the ST80 very nicely and it will still be lightweight and portable. By the way, you can have a tracking mount without goto - for example an EQ mount with an RA motor - it lets you observe at high magnifications (like lunar and planetary) for longer periods of time without the need to adjust position - recommended for relaxed planetary observing at high power (again, not something the ST80 will need, given that you won't be pushing more than around x80 with it most of the time).
  12. Maybe consider an even smaller jump from binoculars - to a Maksutov of 102mm aperture. It is even lighter than the 127mm, thus more portable, and needs less cool-down time. In combination with the AzGTI it is light enough to be picked up and carried assembled in one hand. Still able to provide some amazing views of the Moon and planets.
  13. Indeed. I think it is a good idea to always put the focuser in the same position - like you do. That way you take one variable out of the equation (and it also helps with DEC balance).
  14. Ah, ok. So you have your camera in the "up/down" position - indeed that is easier to balance in DEC, as the camera is "parallel" to the CW shaft. However, I do believe that rotating the OTA rotates the view as well, and if you mount the scope so that the focuser is at 90° to the CW shaft, the camera FOV will also rotate by 90°.
  15. Does it matter how the tube is rotated? Do you have your camera in the up/down or left/right position - or maybe at an arbitrary angle?
  16. For planetary - sure. For that you need a camera with the following characteristics:
- high QE
- low read noise
- fast FPS rates
Pixel size is not as important, as you can aim for the optimum F/ratio by using a barlow lens. In that sense the ASI120 is a worse choice than the ASI178, as the latter has better read noise, higher QE and faster FPS. But again - there is the ASI224, which is better than both of these with respect to said parameters (and is one of the best planetary cameras out there). However, things like M51 will be out of reach for such cameras unless you spend very long hours and use a large scope. Here is what M51 looks like with an 8" scope and ASI185 - which is larger, more sensitive and less noisy than the ASI120. As a comparison, here is another M51 (luminance only) - with an 8" scope, taken in 2 hours of total integration I believe (certainly not longer than that) - but with a serious DSO camera, the ASI1600: From these two images you can see what difference the camera makes - and you want to use the worse of these cameras (for DSO imaging) and put it on a scope that gathers x4 less light. That simply won't work well. Before you decide to get a "serious" scope with 2000mm+ FL, do take time to understand sampling rate and how it affects image quality. Long focal length scopes can be used to produce really good images - the above lum-only image was taken with a 1600mm FL scope - but you need to pair such a scope with an appropriate camera and handle the data in a certain way. Bottom line - yes, get a small camera for planetary with the Mak - it's a lot of fun and a great learning experience. You can use the same camera on larger scopes afterwards, and you can use it for guiding later. You can even try to do DSO imaging with it - just don't expect much. This, for example, was taken with an ASI120-like camera (QHY5LIIc - same sensor) and an 8" scope: And again, the same aperture size - 8" - but a proper DSO camera:
  17. I'm sorry if my answer caused confusion. As far as the SkyMax 102 and 120mc combination is concerned, I think it will be a very good performer and I think that you can get images similar to the ones I took and linked to in threads. As far as the SkyMax 102 and 120mc are concerned in the role of DSO imaging - well, no. It will not work well. You will be heavily oversampled, and if you reduce the sampling rate by binning, you'll get tiny images. The ASI120 at 1300mm of FL gives you a sampling rate of 0.59"/px - which is fine for planetary, but at least 3 times finer than you need for DSO imaging with 100mm of aperture on an EQ5, where you can hope to achieve 1.8"/px. This means that you'll need to bin x3 and that will make your image a tiny 400x300 or something like that. You can certainly try. I've tried with an 8" F/6 and an ASI120 equivalent on an HEQ5 mount and I got something like this: But that is x4 the aperture at a similar focal length, it's not looking all that nice (too much sharpening and denoising) - and M57 is still tiny.
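The arithmetic behind those numbers, as a small sketch (3.75 µm pixel size and 1280x960 resolution assumed for the ASI120):

```python
def sampling_rate_arcsec_per_px(pixel_um, focal_length_mm):
    """Image scale in arcseconds per pixel: 206.265 * pixel size / focal length."""
    return 206.265 * pixel_um / focal_length_mm

native = sampling_rate_arcsec_per_px(3.75, 1300)    # ~0.59 "/px - fine for planetary
print(native)

target = 1.8                                        # realistic DSO resolution on an EQ5
bin_factor = round(target / native)                 # -> 3
print(bin_factor, 1280 // bin_factor, 960 // bin_factor)   # ~426 x 320 px image
```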
  18. There is no connection really. I think the easiest way to establish rotation is to take a test shot of a bright star and slew the scope while taking the shot. This will create a trail on the sensor that you can examine. Say you slew in RA - then the star will make a line that is the RA line in the sky. That can be your reference and you can rotate the camera with respect to it. I know how to orient the sensor in "straight thru" designs - like a refractor or RC/MCT/SCT (where the camera comes at the end of the scope and faces the same way as the aperture) - as you can easily orient the sensor with RA or DEC by just looking at the mount. With a newtonian you have more reflections and things are perpendicular. There is also the OTA/focuser rotation with respect to the mount / rings that can change - and that rotates the FOV, so I'm not sure there is an easy way to determine sensor orientation by just looking at it. Most people make the focuser either "horizontal" or "vertical" with respect to the mount (when the scope is in the home position) - I guess that makes things easier, but I'm still not sure I could figure out which way the camera is oriented without a piece of paper to trace the light.
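As a tiny illustration of reading the rotation angle off such a trail (the endpoint pixel coordinates below are hypothetical):

```python
import math

def trail_angle_deg(x1, y1, x2, y2):
    """Angle of a star trail relative to the sensor's horizontal (row) axis."""
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

# Example: trail starts at (120, 450) and ends at (900, 430) after an RA slew
print(trail_angle_deg(120, 450, 900, 430))   # ~ -1.5 deg -> sensor long axis nearly along RA
```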
  19. In all likelihood. Thanks for posting that - I did not know they made such kit.
  20. Will certainly do. I'm really interested to see the whole modification and how you managed to make it all.
  21. Could be - but the histogram of green shows a distinct signature of a stretch. The OP also images with an ASI183 and filters / LRGB, and with a DSLR as well. I don't think the error here is down to 8-bit or jpeg format at capture time. What could be an issue is DSS. I've experienced that: some of my images from a DSLR (so not sure if it is applicable here) were clipped in the red part of the spectrum. I think it has something to do with the way DSS decodes / handles raw files. My flats were not working properly because of that and I had the red channel completely clipped. As soon as I used other software (FitsWork) to extract the raw data in fits format and did all my processing in ImageJ, everything was fine. The fact that blue and red are similarly clipped here reminds me of the thing I encountered.
  22. If I'm not mistaken, the SkyMax 102 is a 102mm Maksutov from Sky-Watcher and is perfectly suitable for planetary imaging - but not as much for DSO imaging. Your DSLR is probably going to fare better with the Maksutov scope than any of the other cameras you listed. As I see it, you really have two options (not mutually exclusive). 1. Use the DSLR with the Maksutov and do rather long exposures - at least a couple of minutes each - and long total imaging times. Learn how to process DSLR images in a special way (which includes binning) and prepare for the fact that you won't be able to get large images of small objects. When imaging galaxies and planetary nebulae, it is seeing and mount performance that determine the actual resolution of the image. You can make a galaxy as large as you wish - but you can't get the detail needed, and it will always look blurry at that size. 2. Get a camera like the ASI178mc for use as a planetary and EEVA camera. You'll need some additional gear for EEVA to fully exploit that camera. The Mak is going to produce very nice images of the planets and the Moon with said camera. Check out these: For EEVA style you'll need a 32mm Plossl, a 12mm C-mount lens and a way to do eyepiece projection. I'm planning to do something like that with my Maksutov, but I have not yet got round to it. I did, however, do some tests on the topic, so you can read about them here: You can also find a couple of threads on EP projection in the EEVA section that might explain things a bit better.
  23. It has been suggested a number of times, but it is down to the OP to do so if he wishes / has the time. You are quite right that this highly technical and rather heated discussion might not be helping the OP much - but I don't think it is irrelevant. I apologize if the tone of the discussion caused discomfort to some readers - that certainly was not my intention, and I can only hope that it will indeed provide some value to people who are interested in the topic. In any case, I would note that if there is no proper color in astronomical images, the original inquiry by the OP does not really make much sense - there is no way to get the colors right, or in other words, we may as well keep the green in the image, because there is no accurate color in astro images anyway, right?
  24. Even with the belt mod there is a chance of backlash. If the worm gear is not sufficiently round, it won't have good contact with the worm in one part of the cycle. When that is the case - depending on where the worm adjustment is made along the worm gear circumference - two things can happen: you can have proper meshing on one part of the circle and backlash on the other, or proper meshing on one part and binding on the other. Binding is usually noticed first and the worm adjusted to avoid it - but that leaves some of the backlash. This happens because the worm is static against the worm gear, and it can be fixed if the worm is made floating instead. A floating worm needs some sort of tension to keep it pressed against the worm gear - and that is where springs or magnets come into the picture. It is called a spring (or magnet) loaded worm, and newer mount designs have it - like iOptron mounts and the little AzGTI by Sky-Watcher.
  25. Let's try to establish that. I like your experiment. I think it clearly captures where the disconnect is, and hopefully we will agree on that after I answer your questions.
The first thing that I want to point out is that there are no true colors of the bird. An object does not have a color. Light reflected off the object has color, and the color of that light depends on the properties of the light source and the properties of the object. Thus the bird has a property related to the EM spectrum - reflectance - but not color.
What the Macbeth-calibrated matrix for a DSLR does is help establish a mathematical relationship between two things: a measurement of the spectrum of light (irrespective of whether that spectrum comes from the light source directly or is reflected off an object) and a display standard. The way the matrix was acquired only determines the domain of accuracy. We can use any illuminant we like if we work with reflectivity, and we can choose one that will be more accurate in color calibration in the domain of interest - or we can use one that will provide us with relatively uniform accuracy across the gamut of interest. This accuracy depends on how much the spectral response of our sensor differs from the XYZ standard observer.
Imagine for a moment that we have a sensor that has the same spectral sensitivity as the XYZ standard observer. In that case you can use any illuminant and you will always derive the perfect matrix - the identity matrix, with zero color transform error. In this case it absolutely does not matter what illuminant you use or whether you calibrate against an emissive source - you will always derive the same matrix. When the sensor has slightly different response curves, that introduces a slight error in this conversion process (the already mentioned ill-posed problem, as it has no single solution), and here we can choose an illuminant that will reduce the error in one part of the gamut versus another part. Due to the way you designed your experiment, I'm confident that a D65-calibrated matrix will introduce minimal perceptual error in the transformed color information. So yes, I can just use the D65 (or D50, for that matter) acquired matrix. I will maintain that the colors are correct within the above-mentioned error, and certainly more correct than using no calibration at all.
If you mean white balance - then no. I'm not interested in how that particular bird will look if I view it under certain conditions. I'm only interested in recording what the bird looked like under the conditions that were present at the scene. It is very likely that the 10 images we capture in this experiment will all show different colors on the bird, due to the fact that the bird is flying around and passing lights of different temperature (btw - intensity does not determine the color of incandescent light, only temperature does).
We often use the term white balance in astrophotography, but that term should be avoided, as the correct term is color correction / calibration. White balance is a term related to the color transform that allows us to take an image captured under one type of illumination and make it look like it was taken under another type of illumination. In particular it is related to the fact that our vision system adapts to the type of lighting and we tend to see "white" objects as white under different illumination. No, as I said above - I'm interested in the color of the light reflected off the bird at any instant, rather than the (wrong) notion of "the color of the bird". 
I would color correct each of the 10 images in exactly the same way, and they are very likely to produce different sRGB values - or a different color to an observer watching the images on screen.
No. As said above, we won't be doing white balance. We will be doing RAW to XYZ color space conversion as a first step. XYZ is an absolute color space and it does not have a defined white point. In it we just record information about the captured light in a handy triplet of numbers that is enough for us to reproduce that color later on with acceptable accuracy (of course, the visible part of the spectrum has infinite degrees of freedom, but it turns out that we only need 3 numbers to record the information that is responsible for human perception of color with enough accuracy).
After that, if we produce an image that has no ICC color profile - like a regular image that will be posted online (jpg, png without profile) - we will convert the XYZ coordinates into the sRGB color space, which has a white point of D65 and an appropriate gamma function. We will not change anything in the numbers to account for the fact that sRGB uses D65 - that is just a feature of said color space and we don't need to white balance anything to D65. There is a well defined transform between XYZ and sRGB and we will use that.
I agree that it is one of an infinite set of possible transform matrices. However, we can easily answer whether that is consequential or not. XYZ is an absolute color space and we have a defined metric (deltaE) that will tell us if two colors in XYZ color space (or more specifically in xy chromaticity, so we can skip the intensity bit) are perceptually the same or not, or how close they are perceptually. We know what sort of colors we are likely to encounter in astronomical images. Since most stars have spectra very close to the Planckian locus, their colors will lie very close to the Planckian locus. We have emission lines. I believe that comprises 95% of all colors we are likely to encounter in astronomical images. The rest are reflection nebulae - star light reflected off dust. These are also very close to the Planckian locus, as the reflectivity of dust tends to be uniform across the spectrum (star light is either shifted towards the red or towards the blue when scattered off the dust). All we need to do is take a specially designed matrix that minimizes error for the above types of spectra, compare the results to the matrix from the DSLR manufacturer, and see how much perceptual error there is in the obtained XYZ coordinates with the first versus the second. The perceptual error is small - we will get a pretty good color match (probably below the detection threshold in most cases). In fact, one can do that easily themselves - by using two different matrices, one derived with say D65 and the other with another illuminant. One shoots a raw scene and then does the raw -> XYZ -> sRGB conversion with one matrix and then the other (no white balancing of any kind) - and compares the two resulting images.
You can't choose a D50 white point if you are working with the sRGB standard (you can, but then it is no longer the sRGB standard). If you follow the standard, you can't get different results. Say you want to determine the color of a star in the sRGB standard. Whether you use the real spectrum of the star, or a black body approximation if you don't have access to the actual spectrum, is not important - the method is the same. Take the spectrum and calculate the XYZ coordinates using the standard observer. Convert to the sRGB color space using the standard transform from XYZ to sRGB (a known matrix and gamma function - part of the sRGB standard). A sketch of this fixed step follows below. 
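For reference, here is what that fixed XYZ -> sRGB step looks like as a small Python sketch (matrix coefficients and piecewise gamma as published for sRGB, with XYZ scaled so that D65 white has Y = 1):

```python
import numpy as np

# Linear transform from CIE XYZ (D65, Y normalised to 1) to linear sRGB
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def srgb_gamma(linear):
    """Piecewise sRGB transfer function (gamma encoding)."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)

def xyz_to_srgb(xyz):
    """XYZ triplet -> gamma encoded sRGB triplet, both in the 0..1 range."""
    return srgb_gamma(XYZ_TO_SRGB @ np.asarray(xyz))

# D65 white (X=0.9505, Y=1.0, Z=1.0890) should come out as pure white
print(np.round(xyz_to_srgb([0.9505, 1.0, 1.0890]), 3))   # ~[1. 1. 1.]
```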
There is no room for changing parameters - they are defined in the standard. You can change things, but then you are producing values in some other color space and not sRGB, and the fact that you are getting different numbers should not surprise you. That does not mean that color is arbitrary - it just means that you are arbitrary in your adherence to the standard.
Again, that is nonsense - not what you have written (I don't mean to insult you in any way), but the idea that the sRGB standard somehow depends on the way you calibrate your display. sRGB is a standard and the white point of sRGB is D65. You can't change it. You can calibrate your monitor to some other white point value and that is fine - but you need to create a color profile for your calibration so that your operating system knows, when it displays an image, to do a color space conversion: it will convert sRGB back to XYZ and then from XYZ to the color space of your computer screen.
What feels like white has nothing to do with the recorded and reproduced color of light. I can record the temperature of some object and say: it is 18C. I can then take some other object and regulate its temperature to be 18C as well. You will touch this other object on a scorching summer day and notice that it is pleasantly chilly to the touch, but if you touch it on a very cold winter night, it will feel distinctly warm to you. What it feels like has nothing to do with the fact that you can record the temperature of an object and make another object be exactly the same temperature. Similarly, there is no absolute white - what color you see as white will depend on your viewing conditions. But you can record a "spectrum" and replay a "spectrum" (although we are using only three values for it rather than many more - enough for color reproduction) regardless of whether the viewer is viewing in the right conditions or not. I, as the astrophotographer, am responsible for recording and encoding the information properly. The viewer is responsible for having their monitor calibrated properly - there is nothing I can do to change their viewing conditions.
Yes, that is my bad - I did mean scientific colour constancy - and since it has been some time since I last used StarTools, I forgot the exact name. Furthermore, I was under the impression that it is somehow related to scientifically accurate color calibration versus artistic (which is mentioned in other options) - but again, that is my fault for not understanding the terminology properly.
Regardless of all of this, here is an idea that you might like to explore. Extract luminance and let that be stretched in any way desired (no need for a particular stretch). Find RGB ratios of the color calibrated data as r/max(r,g,b), g/max(r,g,b), b/max(r,g,b). In order to transfer color onto the luminance, use the following approach: srgb_gamma(inverse_srgb_gamma(lum) * rgb_ratio). That way you are treating stretched luminance not as luminance, but as light intensity that scales the RGB ratios. Visually the user will be happy with their processing of the luminance, but since it is displayed, the OS assumes sRGB values, so the real intensity is gamma encoded - you need to move it back to linear, apply the rgb ratio and then encode it to sRGB again for final presentation. R, G and B should be derived via raw -> XYZ -> linear sRGB using any available color correction matrix (raw -> XYZ), even if that means using the D65-derived one. You might not agree with me on the merit of doing so - but why not give it a try and make it possible for users of your software to do it?
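A minimal sketch of that luminance / RGB-ratio transfer, assuming lum is the stretched, gamma-encoded luminance in the 0..1 range and r, g, b are the linear sRGB channels obtained via raw -> XYZ -> linear sRGB (all names below are just for illustration):

```python
import numpy as np

def srgb_gamma(x):
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

def srgb_gamma_inverse(x):
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.04045, x / 12.92, np.power((x + 0.055) / 1.055, 2.4))

def transfer_color(lum, r, g, b, eps=1e-12):
    """Apply calibrated RGB ratios to an arbitrarily stretched luminance.

    lum is treated as gamma encoded display intensity: decode to linear,
    scale by each channel's ratio to the max channel, re-encode to sRGB.
    """
    max_rgb = np.maximum(np.maximum(r, g), b) + eps
    linear_lum = srgb_gamma_inverse(lum)
    return tuple(srgb_gamma(linear_lum * (c / max_rgb)) for c in (r, g, b))

# Example on a single "pixel": a fairly bright, reddish star
R, G, B = transfer_color(np.array([0.8]), np.array([1.0]), np.array([0.6]), np.array([0.3]))
print(R, G, B)   # red stays at ~0.8, green and blue are pulled down accordingly
```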