Everything posted by vlaiv

  1. I'm not sure I'll be able to do much more than the original processing with this data. I managed to get this as luminance: Tried to apply synthetic flats, but I don't think I quite managed to do it - there is a zone below the galaxy that is still brighter than the rest of the background. The background is now nice and even - but I don't think I managed to get significantly more signal in the target itself.
  2. Yes - too high a resolution in arcseconds per pixel - or too zoomed in, if you will. It is a bit like using too high a magnification for the scope and conditions. You get a zoomed-in, fuzzy image that gets darker as you add magnification. After you finish stacking, you can export a 32-bit FITS file. I think that if you post that file (it might be a bit large) - people can help with processing steps to show you how you can get the most out of your data.
  3. Integer resampling https://pixinsight.com/doc/tools/IntegerResample/IntegerResample.html Factor of x2 - method average
  4. You can do it in ImageJ (Fiji - which is a distribution loaded with plugins) https://imagej.net/Fiji/Downloads File / import / image sequence ... Image / transform / bin ... File / save as / image sequence ... (in each step you'll get some options, but I think they are self-explanatory)
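     A minimal sketch of the same binning step outside of Fiji, in case someone prefers scripting it - assuming numpy and astropy are installed; the file names are just placeholders:

         import numpy as np
         from astropy.io import fits

         def bin2x2(img):
             """Average-bin a 2D image by a factor of 2 in each axis."""
             h, w = img.shape
             img = img[:h - h % 2, :w - w % 2]              # crop to even dimensions
             return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

         # hypothetical file names - loop over your own image sequence instead
         data = fits.getdata("sub_0001.fits").astype(np.float32)
         fits.writeto("sub_0001_bin2.fits", bin2x2(data), overwrite=True)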
  5. Maybe after cleaning and remounting of the lens - an ED version of the scope could be purchased, the lenses swapped, and the cost offset by selling the achromat lens in an OTA with the plain focuser?
  6. Not sure what you mean by spider vane, but if it looks like this: only red in color - then that is perfectly fine. Newtonian scopes show diffraction spikes on all bright point-like objects. Mars is probably too small at the magnification that you are using and it looks almost star-like, so you are not seeing its disk. If you turn your scope to any bright star and focus it - it will look like the image above (except for color - different stars will have different colors, this one is blue) - again, this is normal for a Newtonian scope and shows only on bright objects. It won't show on Mars when the planet is larger and you use more magnification (it will show - but it will be much fainter and it won't bother you at all).
  7. You've cropped that image considerably, right? With a DSLR camera and a very long focal length scope - you are way oversampling at 0.38"/px. In order to overcome the limitations of that, you'll need to employ a couple of techniques. First - use super pixel mode debayering. Second - you'll need to bin your data, maybe even 3x3. Ideally, you need long exposures and a lot of them. When oversampling like that - the worst thing is to go for 30s exposures as read noise becomes very dominant (signal is too faint per exposure due to very high "magnification"). Could you post the stacked linear data? I think quite a bit more can be "squeezed" out of it by doing the above tricks.
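     A quick sketch of where a figure like 0.38"/px comes from and how super pixel debayering plus binning relax it - the pixel size and focal length below are made-up illustration values, not the poster's actual setup:

         def sampling_rate(pixel_um, focal_mm):
             """Image scale in arcseconds per pixel."""
             return 206.265 * pixel_um / focal_mm

         native = sampling_rate(4.3, 2350)      # hypothetical DSLR pixel size and focal length
         print(round(native, 2))                # ~0.38 "/px at native resolution
         print(round(native * 2, 2))            # after super pixel debayering (x2 coarser)
         print(round(native * 2 * 3, 2))        # after an additional 3x3 software bin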
  8. I advocate doing that in the linear phase, after gradient elimination / background wipe and before any further processing.
  9. I don't think so, but can't be 100% sure. The problem with corrective optics like coma correctors is that they do fix coma further away from the optical axis - but they often "mess up" things on axis. The optics often end up not being diffraction limited. This is not an issue for DSO / long exposure astrophotography as star sizes are dominated by other factors like seeing and mount performance - but it will be detrimental for lucky imaging where you want the most sharpness that you can get. For example - a simple 2-element coma corrector is known to introduce spherical aberration. Even very expensive CCs don't have nice spot diagrams. Here is an example from a quite expensive SharpStar coma corrector: As you can see - the spot is smaller at 13 and 18mm away from the optical axis than on axis (both geometric and RMS radius of the spot diagram). You don't need a 2" barlow if you are going to use only the central portion of the sensor - you can use a 1.25" barlow. If for example you have a diffraction limited field of 3mm - then using an x2 barlow will turn that into 6mm and using an x3 barlow will turn that into 9mm. All well within the 20+ mm of clear aperture of a 1.25" barlow. In fact, if you want to utilize a larger field - you can get a special barlow. APM has a coma-correcting barlow and while not cheap - it is considered one of the best barlows. https://www.teleskop-express.de/shop/product_info.php/info/p5662_APM-Comacorrected-1-25--ED-Barlow-Element-2-7x---photo---visual.html However, that barlow will require some experimentation on your part. Barlow elements can be moved closer to and further away from the sensor to vary their magnification. This is very handy as you can dial in the exact F/ratio that you want to image at. With this barlow - there is a "working distance" as it works as a coma corrector. It is designed for an F/4 scope and x2.7 magnification - which means that the working distance is defined: If you change the working distance - you'll change the magnification factor, but you'll also change its coma correction. You'll have to experiment with the size of the field you want to image to see if this barlow will correct that field at your wanted magnification. Other than that - you can choose any simple barlow that has a removable element that you can then position at the wanted distance from the sensor with different extension tubes - and you can calculate (or measure) its magnification. For example: https://www.firstlightoptics.com/barlows/baader-classic-q-225x-barlow.html But if you want the best barlow, then no doubt it is this one: https://www.firstlightoptics.com/barlows/baader-vip-modular-2x-barlow-lens-125-and-2.html
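     For the "calculate (or measure) its magnification" part, a rough sketch of the usual thin-lens relation for a simple (non coma-correcting) barlow, M ≈ 1 + d / |f_barlow| - the element focal length below is an invented value, so substitute the real figure for your barlow or measure the result on stars:

         def barlow_magnification(distance_mm, element_fl_mm):
             """Approximate magnification for a barlow element at a given distance from the sensor."""
             return 1 + distance_mm / abs(element_fl_mm)

         for d in (50, 65, 80, 95):             # candidate extension tube lengths in mm
             print(d, "mm ->", round(barlow_magnification(d, -65), 2), "x")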
  10. If you shoot with broad band filters you can actually find scaling factors from an F2 class star in the image - it needs to be roughly white - so measure pixel values in such a star (it's best to measure the average value over a star selection in each channel) and derive scaling factors from that. Alternatively - what you are doing is also fine - fiddle around with values until you find what looks natural. The problem is that what we think is natural is often impacted by other images of the object that we've seen so far - and those might not have been properly color calibrated - so our sense of right might be wrong. With this technique you can get pretty good results - I did it as a step in the above example to demonstrate - and it was rather easy as I had gray patches in the image: reference again, and just a simple RGB balance: gray patches are gray - so that is fine - but colors don't match as well as they could. Overly saturated, some are darker and some have a noticeably different hue as well. Gamma is also not good - which can be seen in the gray row gradient - it is not uniform as in the reference image.
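      A minimal sketch of deriving those scaling factors, assuming you already measured the average R, G and B values over the star selection (the numbers here are invented for illustration):

          import numpy as np

          star_rgb = np.array([1520.0, 2310.0, 1190.0])   # measured channel means: R, G, B
          scale = star_rgb[1] / star_rgb                  # multipliers that make this star neutral (G stays 1.0)
          # image is assumed to be a linear (H, W, 3) array; apply as: balanced = image * scale
          print(scale)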
  11. For future reference - you can upload your image to nova.astrometry.net and get it plate solved - it will detect objects in the image and annotate it.
  12. One of the problems is that blue and red are completely compressed while green seems stretched: Trying to correct anything on the attached image is not going to work as red and blue are effectively destroyed by the 8-bit data being squeezed into the lower part of the histogram (effectively posterized to a handful of values). Indeed - we would need linear data in order to properly diagnose things.
  13. Because the split into emissive and reflective cases is artificial - you always work with the "emissive" case - you always work with emitted light. The reflective case is distinguished just because people think that objects have "inherent" color - they don't. When recording a scene with a camera - you are always working with light. Regardless of how that light reached you - directly from a source or reflected off something. Selecting distinct calibration spectra will only make your color calibration more accurate in one region of the human vision gamut and less accurate in another region of the human vision gamut. Using a Macbeth chart - either in reflective (using any of the standard illuminants) or in emissive mode (like displayed on a computer / phone screen) is just a selection of calibration spectra that will work well for the sRGB gamut. You can equally use something like this: It is just a matter of knowing what you are doing and having a good reference (calibrated computer/phone screen, standard illuminant or DSLR to calibrate against). I did watch the video. Of course you can measure brown/orange. You can measure the physical quantity / spectrum of the light. You can't measure the human response to it (maybe you can if you scan the brain or stick electrodes in it) - but you can reproduce the human response. Here we are talking about measuring physical quantities and then replaying those to people so that their visual system can do the rest. That is all. Don't confuse the psychological response to a stimulus with measurement and reproduction of the physical quantities that represent that stimulus. In any case - look at my previous post. The top left corner is brown. My phone showed it as brown, my camera recorded it, I color calibrated the data and now it shows on our screens as brown. Same goes for orange - one square below. I find it interesting that you implemented something in your software that you believe is wrong. Why did you do that? By the way - from what I've seen, it looks like your implementation is wrong. Most if not all images that I've seen processed with StarTools don't produce accurate star colors. Star colors fall in this range when viewed on an sRGB screen (Planckian locus): http://www.vendian.org/mncharity/dir3/starcolor/ Or from the wiki on stellar classification: https://en.wikipedia.org/wiki/Stellar_classification Although you sport "Scientific" color calibration in StarTools - it never seems to produce such results. I wonder why? By the way - if you treat luminance as intensity of light you can still maintain the proper RGB ratio and apply 2.2 sRGB gamma, regardless of the initial stretch on luminance. That way you get proper stellar color.
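      A small sketch of that last luminance idea - stretch luminance any way you like, keep the original R:G:B ratios, then gamma encode. The asinh stretch and the simple mean-of-channels luminance are just stand-ins, and rgb_linear is assumed to be color calibrated linear data scaled to 0..1:

          import numpy as np

          def stretch_keep_color(rgb_linear, eps=1e-9):
              L = rgb_linear.mean(axis=-1, keepdims=True)        # simple luminance proxy
              L_stretched = np.arcsinh(50 * L) / np.arcsinh(50)  # any stretch of your choosing
              out = np.clip(L_stretched * rgb_linear / (L + eps), 0, 1)  # R:G:B ratios preserved
              return out ** (1 / 2.2)                            # approximate sRGB gamma, as in the post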
  14. In any case, for all interested, here is an example that I did with the ASI178mc-cool camera. I took this image: (same one that I showed in my first response on the topic) and displayed it on my phone screen. Then I took the ASI178mc-cool with a lens and made a recording of that screen. Here is the raw data from that recording: Colors have a distinct green cast and don't have proper gamma. Any camera that has high sensitivity in green (and most do, since our vision is most sensitive in green, so there is no need to make different sensors) will have such a green cast if we take raw data. Exceptions are cameras that have high sensitivity in red, like the ASI224. I then applied the derived transform and properly encoded color in the sRGB color space and this is the result: The image is a bit brighter than the original due to the exposure used and colors are not 100% the same - because this is a recording of my phone's screen - I'm not sure how calibrated and accurate it is in color reproduction. In any case - this shows that you can reproduce colors fairly accurately - and those from an emission source - colors coming from screens are pure light - not reflected off objects under a certain illuminant.
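      A sketch of the "applied derived transform and properly encoded in sRGB" step, assuming raw_rgb is a linear, debayered (H, W, 3) array scaled to 0..1 and M is the 3x3 camera-to-linear-sRGB matrix derived beforehand (the matrix values here are placeholders, not the ASI178 matrix):

          import numpy as np

          M = np.array([[ 1.6, -0.4, -0.2],      # placeholder camera RGB -> linear sRGB matrix
                        [-0.3,  1.4, -0.1],
                        [ 0.0, -0.5,  1.5]])

          def to_srgb(raw_rgb):
              lin = np.einsum('ij,hwj->hwi', M, raw_rgb)   # apply the color matrix per pixel
              lin = np.clip(lin, 0, 1)
              # standard sRGB transfer function (fixes the "no proper gamma" look)
              return np.where(lin <= 0.0031308,
                              12.92 * lin,
                              1.055 * lin ** (1 / 2.4) - 0.055)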
  15. Again a condescending tone, but let me address your comment. I acknowledge that camera to CIE XYZ conversion is an ill-posed problem and that you can't map the complete space with 100% accuracy. However, what you can do is take the response curves of the actual sensor and the CIE XYZ standard observer, take a set of spectra that is a good representative of your working data and create a conversion matrix that will minimize, for example, deltaE - perceptual color error. In astronomy, the Planckian locus together with prominent emission lines would be a good candidate for deriving the transform matrix. The problem is that the sRGB color space that is most widely used on the internet (and any non color managed image is assumed to be encoded in sRGB) does not render any of the spectral colors, so an alternative would be to use sRGB colors that are closest (by deltaE metric) to the spectral colors of prominent emission lines. On the other hand - every camera manufacturer can do the same using a standard set of spectra and derive a basic raw to XYZ transform matrix that covers most of the raw gamut equally. In fact - they do that and you can extract XYZ color information for every DSLR. Here is part of the DCRAW command line utility: It knows how to extract XYZ color information from camera RAW because it has the required transform matrix. If color matching theory did not exist - we would not be able to take our DSLR cameras, shoot scenes and capture faithful color (not color balanced - but faithful - meaning not the color of the object, which does not exist on its own, but the color of the light). Further - the illuminant is irrelevant for astrophotography as astrophotography deals with the emissive case rather than the reflective case. We are not interested in the color of an object under a certain illuminant - we are interested in the color of the light reaching us. No - but a spectrum does have a color. In fact - there are infinitely many spectra that will produce the same non-spectral color to our vision system (spectral colors are the only colors that have a single spectrum that defines them). From that it follows that any color you pick has an associated spectrum - any of those spectra. This is the basis for color matching - we don't need to record the actual spectrum - we only need to record information that will allow us to generate a spectrum that will trigger the same color response in our visual system (eye/brain system). I don't see any problem in mixing wavelengths arbitrarily in a spectrum - again, such a spectrum is completely defined - and again, there will be a color behind that stimulus. Even brown, what is problematic with brown? The fact that it does not appear in a rainbow? Well, many colors don't appear in a rainbow - white is not in a rainbow. Can you see the brown? Can your computer screen produce brown? Well - then there is a spectrum that produces that stimulus in your brain. A camera can record that spectrum and a monitor can reproduce it. In this particular case - I don't feel the need to ask any questions - as I have a solid understanding of the topic. I gave an answer to the question at the beginning of the thread. It is you who started misleading others in your response.
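      A sketch of the "derive a conversion matrix from a set of calibration spectra" idea. For simplicity this minimizes plain least-squares error in XYZ rather than deltaE, and both arrays are random stand-ins: camera_rgb would hold the sensor's measured response to each calibration spectrum, target_xyz the CIE XYZ tristimulus values of the same spectra:

          import numpy as np

          camera_rgb = np.random.rand(24, 3)    # stand-in for N measured camera responses
          target_xyz = np.random.rand(24, 3)    # stand-in for the corresponding XYZ values

          # solve camera_rgb @ M.T ≈ target_xyz for the 3x3 transform M
          M, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
          M = M.T
          print(M)                              # maps a camera RGB column vector to XYZ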
  16. The AzGTI is a very portable imaging platform - but a very limited one as it can carry only small scopes for astrophotography. Most people limit the scope that they use for astrophotography to 70-80mm APO refractors on that mount. It will be for wide field photography only. On the other hand - it will let you get into astrophotography straight away if you already have a DSLR and a lens. All you need to do is upgrade the firmware to one that supports EQ mode of operation - add a wedge (which can be a simple ball or 2D photo head - or a dedicated wedge from a couple of vendors) and a counterweight and you are ready to go. The counterweight can be DIY for very little money. I made one from a piece of threaded rod, a bunch of washers and a couple of nuts: Here it is with a DSLR mounted on an L bracket - with a simple ball head used as a wedge: The only drawback of using a ball head as a wedge is polar alignment - it is messy as you don't control each axis separately - the ball head just moves all over the place. Some people have a much more elaborate setup with this mount: ( @david_taurus83 - for example, image taken from another thread on AzGTI payload). Other than that - the mount is good for tracking at high power. Some people like that it has goto capability - but I find the best use is that it tracks planets / the Moon nicely. This lets you relax while observing without the need for constant adjustment of the scope to compensate for Earth's rotation. It is also handy if you want to share the views with someone - with manual scopes there is always a chance that the target will drift out of view before you manage to swap - and the less experienced will have trouble finding it again (say you want to show something to your friend - by the time you explain how to look thru the scope and adjust focus - the target may have drifted outside the FOV of the eyepiece). Goto is sometimes nice - especially if you have trouble finding what you are looking for.
  17. It absolutely does not contradict that statement. It states that violet is a spectral color - meaning produced by a single wavelength of the spectrum, while purple is not - which means it consists of multiple wavelengths. However - both have a certain spectrum. The violet spectrum consists of a single wavelength, while the purple one consists of multiple wavelengths. Here is a quote from the article: You are confusing the terms spectral color - and color of a certain spectrum. The first one means that the color exists as a single wavelength (exists in the rainbow spectrum) and the second means that a certain spectrum - in this case an energy distribution over a range of wavelengths - defines the color that we see. Again it does not - I did not say that such a color transform matrix will be 100% accurate - in fact, I did mention that both the target color space and the camera have a gamut and that colors are matched between those. If a camera does not record wavelengths above, say, 600nm - it can't record spectral red for example - it lies outside the camera's gamut.
  18. Not necessarily. An EQ will have an advantage if you want simple tracking - single motor tracking is quite nice to have when observing planets. An EQ3 will certainly hold the SkyMax 127 - but so will cheaper AZ alternatives. I've got scopes similar to those that you are considering and I'm now rather sorry that I did not do a comparison between them - which could help you. Unfortunately, I don't do much active astronomy due to commitments, but hopefully that will change as soon as I move to a darker location. In any case - I've got both the Mak102 and the Evostar 102 - so basically 4" versions of the two scopes that you are looking at. I use the Mak102 on an AzGTI mount and I'm fairly happy with that mount. It holds the little Mak without any issues and is very simple to use. I upgraded the firmware so I can now use it in both AltAz and EQ mode (which is handy for wide field astrophotography with a lens and short focal length scopes - but requires a wedge and counterweights to be added). The only issue that I have with the AzGTI is that it uses a mobile phone / another Android device to operate it. You can connect a handset to it - but it is rather expensive to get one. The problem I'm having with a smartphone is the lack of tactile feedback. If I'm looking at the eyepiece and want to move the scope - either for observing or when doing alignment - I can't do it without taking my eye off the eyepiece - the phone screen is flat and I have no idea which "button" I'm pressing - or if I'm pressing any at all. Other than that - I was rather surprised with how sharp the image is in the little Mak. Many people say that they feel boxed in with a Mak - I don't feel that way. That little Mak has a focal length only 100mm longer than my main observing scope - an 8" F/6. You'll never hear that someone is feeling "boxed in" with an 8" F/6 dob and it is considered an exceptional scope for both beginner and advanced amateurs. It has 1200mm of focal length while the little Mak has 1300mm. The SkyMax 127 will have a little more focal length at 1500mm, but that is the same focal length as a C6 and a lower focal length than a C8 - popular Schmidt-Cassegrain scopes. I don't often hear that their owners feel boxed in by their scopes. From what I've written above - it would seem that I prefer the Mak - but that is just because I had more observing time with it. I really used the Evostar 102 only once or twice. I do however feel that the Mak is going to be better on planets in direct comparison - will need to check it though. The Evostar 102 has 1000mm of focal length and a 2" focuser (same as the Evostar 120 really) - so it can go quite a bit wider than the little Mak - it can show about x2.5 more sky if I'm not mistaken. Interesting link: https://www.firstlightoptics.com/maksutov/sky-watcher-skymax-127-az-gti.html
  19. But we have just seen that I can learn thru a zen moment like watching cows graze on pastures - or not? I think that it is mandatory for life to evolve predators. The chain is pretty straightforward I think, and in each step you gain a higher quality energy source: sun + inorganic -> organic (metabolism byproduct) -> organic / dead -> organic / live
  20. I'm happy to address said question as soon as I understand what predation really is.
  21. Any details on capture and processing for those lovely images?
  22. Certainly interesting - but more on the DSO side than on the planetary side. None of those objects will be resolved in any capacity and they require long exposure images. Some of them will require special handling when stacking - if they are moving fast enough with respect to the background stars. A bit like comet stacking.
  23. Not sure if I can. Maybe I'll put it like this: say you have subs containing a galaxy and background. The background does not contain any signal - it contains only noise, and the background should be stacked using one set of weights that depends on the noise in the background of each sub, in order to get the smoothest possible background from your stacking. The galaxy contains signal - it has an S/N ratio - and it needs to be stacked using a different set of weights. This is a basic scenario that shows that using a single weight per sub is not the optimal solution. In this case a better solution can be had by using two different weights for each sub - one relevant for the background and one for the galaxy. There are a number of noise sources in the image that vary during the night - one is the level of light pollution, as people turn lights on/off or the Moon rises and sets. There is transparency that limits how much signal you are getting, but there is also the atmosphere, and as the target moves across the sky - light needs to pass thru a different thickness of atmosphere and the attenuation of light from the target changes. You don't need to have two separate sessions in order to have subs with different levels of S/N ratio in the same regions - and S/N ratio varies from one part of the image to another (as we have seen in the galaxy / background example). All of this shows that stacking with a single weight is not the optimum solution and a better one can be used - not only for stacking multiple sessions under different circumstances - but also in a single session. I presented a way of doing that which I came up with. As far as I know - no software other than the plugin for ImageJ that I've written supports it, and since I did not publish a paper on it - it is unlikely that any software will unless they read this or similar discussions where I mention it. However, I've been writing about it previously and actually tested it on real data - I had a night of imaging when my RC was affected by dew (very poor conditions - and the single night that it happened - as that usually doesn't happen). Let me see if I can find that thread.
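      Not the ImageJ plugin mentioned above - just a minimal sketch of the core idea that the weight can vary across the frame instead of being a single number per sub: each sub gets a per-pixel weight map (derived, for example, from a local noise or S/N estimate), and the stack is a weighted average with those maps:

          import numpy as np

          def weighted_stack(subs, weight_maps, eps=1e-9):
              """subs and weight_maps are arrays of shape (n_subs, H, W), already aligned."""
              subs = np.asarray(subs, dtype=np.float64)
              w = np.asarray(weight_maps, dtype=np.float64)
              return (subs * w).sum(axis=0) / (w.sum(axis=0) + eps)

          # with constant weight maps this reduces to the usual single-weight-per-sub average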