Everything posted by vlaiv

  1. I'm starting to feel that we are entering the philosophical domain here and are really asking: how can we tell if a measurement is authentic? Let's say that we agree, as a postulate, that measurement is authentic. Taking an image is actually a measurement. Both professional and amateur images start in the same way - there is not much difference there - it starts as a measurement. What happens next makes the distinction. Scientists know it is a measurement and they see it as data that needs to be handled carefully. For them the image is just a visualization of the data, and you need to preserve data integrity first. I guess amateur astrophotographers see that data as the basis for creating an image - like clay that will end up being a vase. I actually like to see myself as belonging to the scientists' camp, but in reality what happens is that I switch to the other side - just because I want to make a presentable image out of the data. When called for, I can perfectly well preserve data integrity and still make an image, and if you follow certain rules you can take data and make a simulation of what can be seen with the eyes (thus preserving authenticity on several levels, like we discussed above). This is because the rules for that are well defined. Maybe a good distinction would be: if you can take an image and still do, for example, a photometric measurement on it, then it is authentic on that level - it preserves intensities. If you can make a relatively accurate astrometric measurement, then it preserves shapes. If you can determine stellar class by examining its trichromatic value, then it is authentic in color on one level. If it is displayed on some display device and produces the same color as the actual star would produce to the human eye, then it is authentic in color on another level. The way you handle your data can ensure that authenticity is either preserved or destroyed. For example, I can do photometry on a stretched (non-linear) image - if I know the exact parameters used to stretch it.
Even a pretty image can be used for this, as long as the rule is written down with the image (as metadata or whatever). We could argue that such an image is authentic in that regard? But if you open Photoshop and just fiddle around with curves until you like the result - well, unless you save that action, even you can't repeat it 100%, let alone do the inverse of it.
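The point about a known, recorded stretch being invertible can be sketched in a few lines of Python. This is a minimal illustration only - the gamma stretch and the function names are my own example, not any particular tool's method:

```python
import numpy as np

# Hypothetical example: apply a known, documented gamma stretch,
# then invert it exactly to recover the linear data for photometry.
def stretch(linear, gamma=0.4):
    """Non-linear display stretch with a recorded parameter."""
    return np.clip(linear, 0.0, 1.0) ** gamma

def unstretch(stretched, gamma=0.4):
    """Exact inverse - possible only because gamma was written down."""
    return stretched ** (1.0 / gamma)

linear = np.array([0.01, 0.04, 0.25])   # linear pixel intensities
shown = stretch(linear)                  # what the viewer sees
recovered = unstretch(shown)             # linear data back, ready for photometry
print(np.allclose(recovered, linear))    # True
```

Fiddling with curves by hand in Photoshop has no such recorded parameter, so no `unstretch` exists - which is exactly the distinction made above.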
  2. It can be only slightly improved but not corrected - or at least I did not try hard enough? I recently started wearing reading glasses, and at my eye examination I pointed out that I have astigmatism and that it can't be easily corrected - the optician tried, but no luck. I wore glasses when I was a young lad but decided to stop wearing them, and since I was farsighted, things actually improved on their own: one eye corrected to excellent (very sharp) vision, and the brain just started primarily using that one. When I became a bit more interested in optics in relation to astronomy, I did some tests on the other eye - I examined the PSF of a star or a distant light, and this is what it looks like: a single star is smeared into three images - connected, but having distinct points of light. The Moon, for example, gives three images in that same order - well, the two top images overlapping, as the separation is not that great, and a bright patch between all three prominent images. I did try to view at high power in order to reduce astigmatism with a small exit pupil, and I have the feeling that things did improve somewhat - but I could never achieve the sharpness and clarity of my left eye, regardless of focus position and pupil size.
  3. I think that there are more levels of authenticity:
Authenticity of color - the image produces the same color as would be seen by eye if the light were strong enough.
Authenticity of shape - the spatial distribution of perceived light intensity matches that of the object under a certain projection.
Authenticity of ratios - the ratios of displayed light intensity match the ratios of light intensity of the object.
Authenticity of instrument response - the image contains differences from real life due to the instrument used.
Authenticity of information - information is captured that is beyond human senses, yet is displayed in a way that conveys information about the object (like narrowband images and images created from other parts of the EM spectrum).
Etc.
The astrophotography we are discussing is 99% of the time in "breach" of any number of the above. Arbitrary color balance and color saturation certainly distort authenticity of color, and the way we stretch images distorts ratios. But this is either a decision or a lack of knowledge - if one chooses to, they can minimize distortion of both. Most of the time authenticity of shape is preserved - except in wide field images, where it is impossible, as it is mathematically impossible to map a sphere onto a plane without distortion. Sometimes authenticity of instrument response is also distorted - people add star spikes / remove reflections and such. Some are mutually exclusive - for example authenticity of color and of instrument response: we do color correction and change the data to match human vision, and lose information about the camera+filter QE response in the process. By the way - two cameras will match recorded colors as long as the colors are within their gamuts, and will therefore create identical images consisting of those colors.
  4. Actually no - or at least not in the sense that the algorithm used is left to the designer's discretion; it is rather well defined. There is a standard developed to assign certain values based on the frequency response of recorded light: the CIE XYZ value of each color and its corresponding spectrum. Since the whole spectrum is compressed into three values, this is by definition a lossy transform, but it is closely modeled on human vision, and the human sensory response will be the same for all spectra that give the same XYZ values. Contrary to popular belief, each color does not have a unique spectrum - rather, an infinite set of different spectra can and do produce the same color in human vision. Knowing these three values and having light sources that produce each of these three spectra, you can produce an equivalent spectrum that will induce the same psychological response in a human being as the original spectrum. Although you don't capture the exact spectrum, you do capture the information needed to recreate the perception of the original spectrum. There is no freedom in this part so far. The matching functions above are well known and modeled on human vision. The first step in the creation of a faithful image is recording some values from the camera response (whatever the camera QE and filter responses are) and doing a linear transform (since light response is an element of a vector space) to the well defined CIE XYZ. This is what color correction in cameras really does (color balance does something different - but in exactly the same way). Once you have the CIE XYZ tristimulus value, you can use it to convert to the color space of your choice. In particular, computer and phone screens and internet devices all expect the sRGB color space - and that color space again is well defined. A well calibrated computer screen will take an sRGB tristimulus value, decode it, and again produce a spectrum (by mixing three light sources of certain spectra that we see as red, green and blue) that will create the same psychological response in a human observer.
In other words - if you have a red torch with a particular red cast that is within the gamut of both camera and sRGB monitor, and you take that camera, shoot the light of that torch at the faithful setting, save it in JPEG format (again using sRGB as the standard), display it on a properly calibrated computer screen and ask people "is that the same color as the torch light?" - you will get the answer: yes, that is the same color. It is important to understand that R, G and B are not arbitrary red, green and blue hues - they depend on the actual color space used. If you have something encoded in an "RGB" model but you don't have well defined hues for those three colors, you don't have color information. In the sRGB color space we have well defined R, G and B components - we know the XYZ values for all three of them, and that is the basis for our linear transform between spaces. There is also the CIE RGB space - that is also an RGB space, but its R, G and B are different red, green and blue colors than those of the sRGB color space. The same goes for Adobe RGB and other RGB spaces. Here is an overview of RGB-like color spaces and their primaries: If you follow the above rules and do proper color management, then there is really not much artistic or designer choice left in creating a documentary image. Everything is well defined, with the purpose of producing perceptually the same color / vision response in a human subject.
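The last, well defined step of that pipeline - CIE XYZ to sRGB - can be sketched as follows. The matrix and transfer curve are the standard sRGB (IEC 61966-2-1) values; the function name is my own:

```python
import numpy as np

# Standard linear transform from CIE XYZ (D65 white) to linear sRGB.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_srgb(xyz):
    """Convert an XYZ tristimulus value to display-ready sRGB in [0, 1]."""
    linear = XYZ_TO_SRGB @ np.asarray(xyz, dtype=float)
    linear = np.clip(linear, 0.0, 1.0)
    # piecewise sRGB gamma encoding from the standard
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

# Sanity check: the D65 white point should map to sRGB white (1, 1, 1)
print(np.round(xyz_to_srgb([0.9505, 1.0, 1.0890]), 2))
```

Note that there is no tunable parameter anywhere in this step - which is the point being made: once you are in XYZ, the rest is fixed by the standard.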
  5. Would you characterize the following scenario as "authentic" or "artistic": you take a DSLR camera, set it to faithful color reproduction, take an image of a scene and transfer it to a computer without any further processing.
  6. That is what I thought as well - hence the question on optical quality. There is another apo triplet - this one is 70mm, quite a bit more expensive (but again cheaper than two 70mm triplet objectives) and with 1.25" eyepiece connection. https://www.teleskop-express.de/shop/product_info.php/info/p3830_TS-Optics-70-mm-APO-Binoculars-with-1-25--Eyepieces-and-90--View.html That one is listed as good for astronomy as well. I wonder why anyone would go through the trouble of designing a triplet for binoculars if not to get good performance out of them? It looks like that first binocular is the same as the APM 20x80 ED APO that received favorable reviews. With 300mm FL it is F/3.75 - and that is really fast.
  7. My case of cloud sickness is still hitting hard and I can't get rid of these crazy ideas - always searching for a new project (although I have tons in the backlog waiting for better weather conditions). I'm sorry in advance for causing any grief among binocular lovers, I truly am. Part of the problem is that I have one good eye (the other is an astigmatic nightmare), so I observe with telescopes only and really have not developed a proper appreciation for binoculars. Here is the thing - how good is this binocular optically? https://www.teleskop-express.de/shop/product_info.php/language/en/info/p1421_TS-Optics-20x80-ED-APO-Binoculars-with-two-Triplet-Objectives-and-Tripod-Adapter.html I see that it is a triplet and it has ED and APO monikers in the title. It is 80mm of aperture and I'm guessing somewhere around 320mm to 360mm of focal length (like F/4 to F/4.5, possibly). Maybe even F/5 optics, given the size of the binoculars? I'm going to continue talking, and if you feel uncomfortable at any time - just don't read the rest. It costs 197€ without VAT. That is less than 100€ per what seems to be an 80mm F/4-F/5 apo triplet objective. See where I'm going with this? Sure, you would need to provide a tube and focuser and show some serious DIY skill - but hey, an 80mm fast triplet APO under 300€ (another 100€ for a focuser and the rest for other materials) - well, that is a dream for a lot of people. Maybe imagers won't be interested because of residual chromatic aberration - but for wide field visual and EEVA, I don't see why not. Maybe even a "transplant" would be possible - ST80 body and the above APO triplet lens. That would be even cheaper and need less DIY skill (although a worse focuser). An alternative to such a scope is maybe the SW Equinox or similar - but that is still F/5.25. Are these objectives diffraction limited? Do binoculars have a lens cell, or are the lenses housed in the binocular body?
  8. Find the e/ADU value for ISO800. Take one bias and one dark sub. Convert to electrons by multiplying with the e/ADU value for the given ISO. Subtract bias from dark and measure the mean value. Divide by the dark exposure length - that will give you dark current at the given temperature. Dark current depends on temperature and roughly doubles for every 6°C increase (and halves for every 6°C decrease). That gives you the means to calculate dark current for a given exposure time at a given ambient temperature. Dark current noise will be the square root of that signal. Light pollution will require that you take an exposure of the actual sky. It will vary depending on where you point your telescope - so point it to where you usually shoot targets (maybe do a few exposures in different directions). Again - convert to electrons, calibrate with darks, select the background sky in the center of the frame (that avoids the need to do flats) and get the median value. If you can, skip stars in the selection - if not, the median will deal with them quite well (as it ignores outliers). This will give you the LP signal per exposure - divide by the exposure length to get it per second, and again the square root (once you multiply by the wanted exposure length) will be the associated noise.
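The dark current part of the recipe above can be sketched in Python. This is a minimal illustration with synthetic numbers - the function names and the numpy-array inputs are my own, not from any particular capture tool:

```python
import numpy as np

def dark_current_e_per_s(bias, dark, e_per_adu, dark_exposure_s):
    """Estimate dark current in e-/s/pixel from one bias and one dark sub.

    bias, dark: 2D arrays of raw ADU values; e_per_adu: gain at the ISO used.
    """
    bias_e = bias.astype(np.float64) * e_per_adu
    dark_e = dark.astype(np.float64) * e_per_adu
    dark_signal = np.mean(dark_e - bias_e)   # mean dark signal in electrons
    return dark_signal / dark_exposure_s     # e-/s per pixel

def dark_noise_for_exposure(dc_e_per_s, exposure_s, ambient_delta_c=0.0):
    """Shot noise of dark current for a planned exposure.

    Dark current roughly doubles per +6 C, so scale by 2**(delta/6).
    """
    dc = dc_e_per_s * 2.0 ** (ambient_delta_c / 6.0)
    return np.sqrt(dc * exposure_s)          # noise = sqrt(signal)

# Synthetic data: 300 e- bias level and a 60 s dark with 0.2 e-/s dark current
bias = np.full((100, 100), 300.0)
dark = bias + 0.2 * 60
dc = dark_current_e_per_s(bias, dark, e_per_adu=1.0, dark_exposure_s=60)
print(round(dc, 2))                          # 0.2
noise = dark_noise_for_exposure(dc, 120)
print(round(noise, 2))                       # 4.9 (sqrt of 0.2 * 120 s)
```

The light pollution estimate works the same way, just with `np.median` over a star-free background selection instead of the mean.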
  9. Yes - either buy that one or DIY one. It is really just an FTDI chip (that costs something like $6-$7 on eBay) and an RJ45 cable connection. If you want to DIY, look at the EQMod project's EQDirect cable: http://eq-mod.sourceforge.net/eqdirect2.htm (USB version - note that for the HEQ5 you need 5V signaling, so take care to get the exact FTDI chip). You'll need to get one unless one is delivered with the scope. The TS website is a bit confusing with respect to that: they have an image showing what is included, and the dovetail bar is there. The last point of the item description remarks that you get "Generous equipment" (that is sort of true for the OTA-only version - usually one just gets rings with the OTA), including a dovetail, but the "In the Box" section does not list it for some reason. It does list a 2" 35mm extension used to reach focus visually - which is not mentioned nor depicted above. There is a good chance that you'll be missing something, but unfortunately you'll only know at the moment you realize you are missing it.
  10. I think it certainly will work. Most coma correctors work rather well with parabolic mirrors - the differences in the image are subtle, like edge star correction, or whether stars in the center of the frame are sharp enough (compared to without a coma corrector). From a quick search I did on the topic, I think that the Baader MPCC III is probably the best CC in that price range.
  11. I know very little about coma correctors from a practical point of view. I know some theory - like 2-element ones introducing spherical aberration and making stars a bit bloated in the center of the field, and 4-element ones being the best - but I can't really be certain. I'm not sure I would be able to select a coma corrector for myself if (or rather when) I needed one.
  12. Mosaic is your friend. It is a bit of a pain to process a large number of panels, but it is worth it - you can get very sharp large images (think poster-sized sharp Moon that you can print and give away as a present - not a bad idea).
  13. No, unless you want F/6 for a specific reason. We could say that a 150mm F/4 Newtonian is too fast for a beginner, and similarly we could say that a 150mm F/6 is too slow for a beginner. That telescope has 900mm of focal length, and that is a bit too much for general astrophotography. It is good for high resolution work, or if you know advanced processing techniques like binning and such. The issue with long focal length is that you are imaging at high sampling rates, which makes every error in mount performance much more obvious, as things are zoomed in. Depending on the camera used with the scope, you could well end up at resolutions not supported by your sky and mount. This reduces the signal to noise ratio in the image (light spread over more pixels) and increases blur (again, too zoomed in). I would recommend the F/6 version for someone wanting to split telescope use between visual and photographic, with more emphasis on planetary work and EEVA than on long exposure DSO. An F/6 scope will not need a coma corrector with a small sensor camera (like a planetary type camera), as it has a 4.2mm diameter coma-free field (beyond which coma slowly increases). You can even use a focal reducer with a small sensor. This image was recorded with an F/6 Newtonian (an 8" one) and a x0.5 reducer, with a camera very similar to the ASI120: But an F/6 scope has a smaller secondary and hence a smaller fully illuminated field, and is not as suitable for work with larger sensors like APS-C - and in any case you would still need a coma corrector for those.
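The "high sampling rate" point follows from simple arithmetic: image scale in arcseconds per pixel is 206.265 times the pixel size in microns divided by the focal length in mm. A quick sketch (the 3.75 µm pixel size is my example, typical of ASI120-class cameras):

```python
def sampling_rate_arcsec_per_px(pixel_size_um, focal_length_mm):
    """Image scale: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

# 150mm F/6 (900 mm FL) with a 3.75 um pixel planetary-type camera:
print(round(sampling_rate_arcsec_per_px(3.75, 900), 2))   # 0.86 "/px
```

At roughly 0.86"/px the setup is sampling finer than typical seeing and mount tracking support for long exposure work, which is exactly why the 900mm focal length demands a good mount or binning.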
  14. Creality Ender 3 V2 (simple enough, right? )
  15. Why not use Stellarium? Just set it up to show objects of interest, dial in your observing time, and see what is closest to the meridian (objects are highest in the sky when crossing the meridian). You can also use the Oculars plugin to see how a target will fit onto your sensor, or what magnification eyepiece to use on it. Make yourself a list of interesting objects, with star hop notes in case you have a manual setup.
  16. I'm not familiar with Siril, but I have had a discussion about PixInsight's Photometric Color Calibration. One must be careful to understand what the tool does. Sometimes when the term photometric color calibration is used, what it implies is "color" in the photometric sense - a quantity called the color index, such as the B-V color index (there are other color indices depending on the photometric system used): https://en.wikipedia.org/wiki/Color_index The same principle is used for regular color calibration - photometric measurement of stars in different channels - but this time color is represented in the sRGB color space, since most devices and the whole internet expect sRGB. The two are different (photometric color index versus sRGB color triplet), but the principle of calibration is similar. Just make sure that the tool you are using does what you expect it to do and not something else. If you want to display your image on a computer screen or publish it on the internet, use the sRGB color space. (For print, use the CMYK color space, or in the case of a digital printer use the supplied color profile. In that case it is worth using an extended color space like CIE RGB or Adobe RGB in processing - but then you need to calibrate your RGB channels for that color space prior to processing in Photoshop or Gimp or similar.) For reference, sRGB colors versus star class are given in the following chart: as you see, these differ from the B-V colors above and are proper visual colors (a G0V star never has a greenish appearance visually, like in the above chart).
  17. 99.9% of all astrophotos are made by the method of stacking. If you were to read how it is done in terms of the mathematics involved, you would certainly think it is overkill to know all that - and you would be right. For making a photo you don't need to know how to do any of that, as long as there is software that will do it for you. It is exactly the same with the method I've described above - it can literally be reduced to a single button with the caption "calibrate color". There is plate solving - that will help software identify each star. There are catalogs (like Gaia DR2) that contain temperature information for plenty of stars. Software can recognize stars - that is how the alignment part of stacking works. Once the approach is implemented in software, it is just the push of a button really - and you would not be thinking about it except at the level of "select color correction type: 1. earth bound, 2. solar system bound, 3. orbit of object".
  18. Probably the best planetary camera. There are three things that are essential when choosing a planetary camera: 1. low read noise 2. high quantum efficiency 3. fast FPS. The ASI224MC has it all, and it is ahead of most other cameras with respect to these three characteristics.
  19. But you can. Say you image the Rosette Nebula. There are stars embedded in that nebulosity, right? Stars have spectra, and from spectral analysis we can figure out what spectral class a particular star is. We know the temperature for each spectral class, and we can know what color a particular star has because of its temperature - Planck's law. All stars have one of these colors: here we see the sRGB triangle on the xy chromaticity diagram and the Planckian locus marked with corresponding temperatures. We can take a certain star in the image that we know is at the same distance as our object and check the expected and recorded color of that star - from that we can infer any interstellar reddening and devise an inverse transform. For galaxies we can do statistical analysis, because we have an idea what sort of colors we can get and we know the relationship between temperature and luminosity, so again we can do calibration on the whole galaxy - fit the data to what is expected and derive the transform by statistical means (it will not be 100% correct - but think of it as fitting a curve to a set of data points: it won't hit each point exactly, but it will be a good approximation to the set).
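The Planck's law link between temperature and color can be illustrated directly. This is a minimal sketch - it only computes the relative blackbody intensity at two wavelengths, which is what drives the blue/red balance of a star of known temperature:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T), W / (m^2 * sr * m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / (math.exp(b) - 1.0)

# A Sun-like (~5800 K) star: blue (450 nm) vs red (650 nm) intensity ratio.
blue = planck_radiance(450e-9, 5800)
red = planck_radiance(650e-9, 5800)
print(round(blue / red, 2))   # 1.13 - nearly balanced, hence whitish color
```

A hotter star gives a ratio well above 1 (blue-white), a cooler one well below 1 (orange-red) - and measuring a star's recorded ratio against the expected one for its class is exactly how reddening can be inferred.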
  20. Depends on what coma corrector you get - not all require the same distance. You'll need a T2 ring for your camera model (it is T2 to a particular lens mount, so T2-EOS or T2-Nikon or similar). Then check the coma corrector's working distance. The easiest ones are those designed to work at 55mm - you don't need any extension tubes for those, as the T2 ring is usually designed to bring the camera's flange focal distance up to 55mm (for Canon it is 11mm + 44mm of flange distance = 55mm total). Think about guiding, and also consider a flat panel that suits your scope so you can easily do flat exposures. If you don't mind DIY, flat panels are quite cheap to make - you need an LED strip, some electronics, a diffusing panel made out of acrylic plastic or something similar, and a housing. An alternative is something like this: https://www.teleskop-express.de/shop/product_info.php/info/p3512_Geoptik-30B304---Flatfield-adapter--D-210-mm--for-flat-field-imaging.html or this: https://www.teleskop-express.de/shop/product_info.php/info/p8240_Lacerta-LED-Flatfield-Box-with-185-mm-usable-Diameter.html (both are rather expensive). There are also other alternatives - like screens with tracing paper or similar - do a search on flat panels / boxes to get ideas.
  21. Yes, but not now. An F/4 scope is an excellent instrument, but one really needs quite a bit of experience imaging with Newtonian scopes in order not to go crazy / start pulling their hair out. The faster the scope, the more issues are likely. Things like tilt, collimation, focusing precision, illuminated and corrected field... all are ready to cause grief to even an experienced imager. Stick to F/5 for now, and maybe in some distant future switch to F/4.
  22. That one is a very good AP telescope - in fact made for AP, and similar to the 150PDS by SkyWatcher (maybe even better made).
  23. These two are quite different images - the one that you first posted and this other one. The picture is certainly not a fraud, and to be honest I don't really know what you mean by "fraud"? In any case, the processing is different in the two images, and they were not taken on the same date / time, as the illumination is different. You can see from these two images that sunlight is falling at a slightly different angle and thus illuminates the marked region in the "fraud" image a bit more than in the NASA one. Why would you suspect that one image is a fraud at all? As the Moon orbits the Earth, every 29.5 days or so we get an almost identically looking Moon - the illumination is the same and everything (there are small differences in size and orientation due to libration, but in principle things are the same). Two images taken months apart, but showing the same phase of the Moon, will be remarkably similar if scaled to the same size.
  24. Here is a good scope in that category (but it is expensive): https://www.bresser.de/en/Astronomy/Telescopes/EXPLORE-SCIENTIFIC-AR102-Air-Spaced-Doublet.html I would consider adding some more money and going for that TS 4" ED scope I linked above instead of this achromat.