Everything posted by vlaiv

  1. I will be honest and express my doubt about claims of extraordinary sharpness of 4" instruments and the ability to use over x300 magnification and such. You can certainly use x300 magnification, but what would be the point when you can already see all that detail at x100? You'll just get a blurrier, larger image. Also - an 8" Newtonian is going to walk all over a 4" instrument. That is my belief anyway, and the scientific side of things agrees as well.

     In fact, we can see this in practice rather easily. @CraigT82 recently did a comparison between two lunar images - one taken with a 4" instrument and another with an 8.75". While I'm certain that his image (8.75" instrument) is excellent, mine was also described as at least fairly good - so I have no doubt that it is representative of a decent 4" class instrument. The difference is more than obvious. By the way, the image to the left is like using x300 magnification. While you can use it, I see no point in doing so over a perfectly sharp and nice ~x150 for a 4" instrument:

     I also thought of a very interesting experiment for anyone wanting to compare the planetary performance of two different telescopes - one that excludes the impact of the atmosphere. You need ~60 meters of clear space, and one of these:

     Yep, it is a marble - about 10mm in diameter. Get a nice colorful one with a lot of detail to represent Planet X that you'll be observing. Observe during the day at a distance of about 60m. That distance will be enough for spherical aberration not to cause any issues on most scopes up to 8", and at that distance a 1cm marble will have an angular size close to Jupiter's - about 45". In fact, I just saw that you can get one of these:

     That might not be a bad way to get a regular planetary observing fix in times of prolonged poor weather. In any case, it will be a good indicator of what a particular instrument is capable of - without the influence of the atmosphere.
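     A quick small-angle sanity check of the marble's apparent size (just restating the numbers from the post in Python):

```python
# Small-angle approximation: angle [rad] = size / distance.
RAD_TO_ARCSEC = 206265

size_m = 0.01       # 10 mm marble
distance_m = 60.0   # ~60 m away

print(size_m / distance_m * RAD_TO_ARCSEC)  # ~34" - within Jupiter's ~30-50" range
```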
  2. I think you are right about doing the actual measurement - that is the best way to go about it, but here is my reasoning. Regardless of the level of LP, the difference between 7nm and 3nm filters remains a fixed ratio - the 7nm filter will let in roughly x2.33 more light pollution. In high-LP skies it will let in x2.33 more LP, and in dark skies with very low LP it will still let x2.33 more LP through.

     In high-LP skies, both the 7nm LP and the 3nm LP will be high compared to shot noise and a significant contribution to overall noise - so you can't stretch the signal very much and achieve good contrast. As you go to darker skies, there will be a signal strength for which the 7nm LP is still significant, so you can't stretch that signal and get good contrast, but for 3nm - with its x2.33 less LP - it starts to be less significant than shot noise, and you can stretch further.

     Now that I'm writing this, I realize that both LP strength and target strength play a part here - so it depends both on where you image and what you image (how faint it is and what your aimed total exposure is).
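     A toy shot-noise model makes the point; the electron counts below are made-up illustration numbers - only the ~x2.33 bandwidth ratio comes from the post:

```python
import numpy as np

def snr(target_e, lp_e, read_noise_e=0.0):
    """Shot-noise-limited SNR of a pixel given target and LP electron counts."""
    return target_e / np.sqrt(target_e + lp_e + read_noise_e**2)

target = 100.0              # faint target signal, e-
lp_3nm = 300.0              # assumed LP through the 3nm filter, e-
lp_7nm = lp_3nm * (7 / 3)   # ~x2.33 more LP through the 7nm filter

print(snr(target, lp_3nm))  # ~5.0
print(snr(target, lp_7nm))  # ~3.5
```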
  3. I had an ST102 before and I now have an Evostar 102 - which did not get much use for various reasons. I purchased the Evostar 102 for a variety of reasons, some among them:

     - as a lower chromatic aberration version of the ST102
     - to do various tests on this telescope
     - to check its suitability as an all-around instrument - meaning planetary and lunar observing and imaging, and DSO observing and imaging

     In the meantime I got myself a SkyMax102 as well. Here are my thoughts on the Evostar 102 so far: it is rather big and not very stable on the AZ4 mount, although I have the version with the steel tripod - this might change when I switch to the SkyTee2 at some point as my heavier AltAz mount. For planetary and lunar, as well as grab'n'go, I feel that the SkyMax102 is the better choice. I miss the ST102 as a wide field grab'n'go instrument. The Evostar 102 simply can't go that wide / low power (I do have a reducer that I'm going to try to make work with it, but so far it has not been simple plug and play due to the needed 85mm of spacing between reducer and EP). It is a rather long tube and I would not call it portable - not in the sense that the ST102 and SkyMax are (and the ST80 would be, I'm sure).

     Another reason I got this scope is that I was planning to get a Quark combo for solar Ha observing. That now seems far less likely given mounting evidence that DayStar simply has exceptionally poor QA and customer support. Once I'm done testing the Evostar for different purposes, I'll probably just pass it on to someone who wants it.

     If you want a portable, easy to set up and use, grab'n'go type instrument and prefer wide field observing - get the ST80 or ST102. If you on the other hand prefer lunar and planetary and occasional DSO - get the SkyMax102. I would not recommend the Evostar 102 as a scope that is portable, easy to set up and use, and grab'n'go. It can probably be the main instrument for a lot of people and has the potential to do a bit of everything - but it is not well suited for this secondary role.
  4. Actually, you can do that with open source software like ImageJ, which lets you do all sorts of things with images - you can even write your own plugins/macros if you have some programming skills.
  5. I would actually advise you to merge the two images while still in the linear stage, before you start processing. Take both images and remove the background on both (mean background ADU should be 0). Then multiply the short exposure stack by 8 (240/30 = 8) to equalize ADU values between them. Then do pixel math equal to this condition (a sketch follows below): if (pixel > 80% of max possible) use short_stack_pixel; else use long_stack_pixel. In other words - replace all the pixels in the long stack that have a value higher than 80% of the max value with short stack pixels. Ideally, you would replace only pixels that have the max value - but because of stacking and flat calibration, clipped pixels can end up being less than the max value - so you want to replace all of them that are "close" to the max value (where close is arbitrarily chosen to be 80% and above - but you might as well use 85% or 87%; it won't make much difference).
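     A minimal numpy sketch of that pixel math, assuming two background-subtracted linear stacks as float arrays (function and variable names are mine, not from any particular package):

```python
import numpy as np

def merge_stacks(long_stack, short_stack, exposure_ratio=240 / 30, clip_frac=0.8):
    """Replace near-clipped long-stack pixels with scaled short-stack pixels."""
    short_scaled = short_stack * exposure_ratio   # equalize ADU values
    threshold = clip_frac * long_stack.max()      # "close to max" cut-off
    return np.where(long_stack > threshold, short_scaled, long_stack)
```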
  6. I think you'll find that the perceived difference between Baader 7nm and Astrodon 3nm depends on where you compare them. If you compare them in relatively high light pollution, you'll notice that they are more alike than different - they both successfully remove a lot of light pollution, making a drastic difference between using and not using them. This makes their relative difference appear small. On the other hand, if you compare them in relatively low light pollution, their difference becomes obvious. In low light pollution you are able to go deeper in a given amount of time compared to high LP, and the deeper you go, the more obvious the difference between the two filters becomes.
  7. In reality, sampling rate does not have that much effect on captured data - even if you miss by a factor of 2, it will not create a huge difference in the captured data as is. However, it does make a difference in the next step of the process - frequency restoration. We end up sharpening our stack, and here you can only sharpen:

     1) what is there in the first place
     2) what has been captured

     The first point tells us that there is no point in oversampling - we can't sharpen up detail that is not there, and if we oversample we are not losing much, except that our image will still look blurry after sharpening - we don't have that finest detail to sharpen up, as it is simply lost due to the physics of light.

     The second point tells us - yes, you can sample at lower resolution, but you lose any detail that was possibly there for good; you can't sharpen up what you did not capture.

     For these two reasons I advocate trying to be close to the optimum sampling rate. Btw, if you want to see the difference a twice lower sampling rate makes when sharpening is not involved, look at this:
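     As a toy 1-D illustration of the second point (all numbers made up): detail above the Nyquist limit of a coarser sampling doesn't just blur - it aliases into a false low frequency and is gone for good.

```python
import numpy as np

# Finely sampled "optical" signal: genuine detail at 50 Hz and 430 Hz.
t = np.arange(1000) / 1000.0    # 1000 samples/s for 1 s
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 430 * t)

coarse = signal[::5]            # resample at 200 samples/s (Nyquist = 100 Hz)
freqs = np.fft.rfftfreq(len(coarse), d=5 / 1000.0)
spectrum = np.abs(np.fft.rfft(coarse))

print(freqs[spectrum > 10])     # [30. 50.] - the 430 Hz detail now masquerades as 30 Hz
```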
  8. Yep, lambda is the wavelength of light, and I used 500nm or 0.5µm. If you image with a narrowband filter like Ha, you can use that wavelength. In theory we should do it for 400nm, as that wavelength of the visible spectrum represents the truly highest possible frequency, but in reality the blue part of the spectrum is the most affected by the atmosphere, and going for 400nm would make us oversample quite a bit in the red part of the spectrum. There is almost a 2:1 relation between the highest and lowest frequency of the visible part of the spectrum (or shortest and longest wavelength): 700:400 = 1.75:1 - almost 2:1 - which in turn means that the optimum F/ratio for the red part is significantly lower than the optimum F/ratio for the blue part of the spectrum. Going for 500nm is just a good compromise for full spectrum planetary critical sampling; see the example below.

     This is completely without the influence of the atmosphere. The atmosphere only lowers the available detail, but we are doing lucky imaging and hope that we will minimize atmospheric impact. As such, we need to image so as to capture any possible detail that the aperture provides. Whether we will manage to do that on a particular occasion depends on the luck part of the lucky imaging approach.
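     Putting numbers on that, with an assumed 3.75µm pixel as an example (critical F/ratio = 2 * pixel_size / lambda):

```python
# Critical F/ratio per wavelength for an assumed 3.75 µm pixel.
for wavelength_um in (0.40, 0.50, 0.656, 0.70):
    print(f"{wavelength_um} µm -> F/{2 * 3.75 / wavelength_um:.1f}")
# 0.40 µm -> F/18.8, 0.50 µm -> F/15.0, 0.656 µm -> F/11.4, 0.70 µm -> F/10.7
```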
  9. That calculator is trying to give you the best sampling rate for long exposure astrophotography - a much different topic than planetary. We can discuss that topic as well, and how to use the Nyquist sampling theorem in that context if you wish - but that is not something you'll need to get the best lunar shots.
  10. Yes, except people seem not to understand the Nyquist theorem. For example, the quoted reasoning is flawed on several levels. Why would we connect seeing FWHM with Nyquist sampling frequency by a factor of x2? What is the "analog signal", and why would we want to image at a resolution of 1/3 of that? Stars are never squares - that is a fallacy produced by nearest neighbor resampling when enlarging images. Pixels are point samples and not little squares ... Etc ...
  11. It will be just fine with a tracking motor. Star trackers have about the same mechanical precision as EQ3 class mounts - which means periodic error of about 30-40 arc seconds peak to peak. That is usable with a lens for up to a minute or two before trailing starts. The only reason people prefer star trackers is portability - otherwise the EQ3 is capable of the same, and even a bit more in terms of carrying capacity.
  12. How are your DIY skills? There is wifi functionality in all these mirrorless cameras, and I tested the Canon camera control app for Android - it lets you take images via wifi. Maybe it would not be too hard to do an Arduino project that connects to the snap port on the Star Adventurer and to the camera's wifi, and controls the shutter release?
  13. Microlenses are optical additions to pixels that increase their efficiency. A microlens is a piece of glass that acts almost like a regular lens and is built on top of the pixel photo site to increase the effective photon gathering surface. It appears that the shape and type of glass used with the Panasonic sensor in the ASI1600 creates interference-type reflections with other optical elements in the imaging train under certain conditions. Most of the time people get this with narrowband filters. It usually appears on very bright stars and looks something like this:

     I once saw it with my camera and an OIII filter - although much smaller in magnitude:

     Sometimes people have it, sometimes not. I think it depends on other reflective elements in the optical train like filters / reducers - whatever can create interference by reflecting light - and in particular the spacing of these elements. It is, for example, interesting that people who have it in narrowband images don't have it in luminance, although nothing else in the optical train has changed. I would think that if it were some sort of interference at a particular wavelength, it would show in luminance as well, since luminance contains all wavelengths - so it is more likely that it happens with particular filters and filter properties rather than in general.
  14. Well, I don't do much DSLR astro imaging, so it did not occur to me to check all those options. My idea is to use a computer to control the DSLR / mirrorless much like I do with regular astronomy cameras. What I do like about the M200 is the ability to do remote shooting via wifi (at least I think it can do that). I originally wanted to get one of these mirrorless cameras for EEVA style imaging on the AzGti mount, which has wifi as well. It would be rather nice to have an EEVA platform without cables - but it turns out that Canon largely disables advanced wifi features. For example, my Canon 750D can't do remote shooting via wifi - but I believe that to be a disabled rather than non-existent feature.
  15. I think that the mirrorless offerings look rather nice. Same sensors as in regular DSLR cameras - but no moving mirror / mechanics, so a lighter body and possibly longer battery life? The Canon EOS M3 seems like a rather nice camera. According to this: https://www.photonstophotos.net/Charts/Sensor_Characteristics.htm it has quite high QE at 59% and decent other characteristics. As far as I can see, it is no longer available new but can be purchased second hand for a very good price. Out of the current models, the M200 seems rather nice and affordable.
  16. I think we are all correct in what we are saying. I also have an example for all of this - a rather exaggerated example, but quite revealing. This was taken with optics that are not well corrected at F/2 (Samyang F/1.4 85mm lens) with small pixels. Left to right, we have the red, green and blue channels. It is obvious that green is actually rather fine and sharp - the sharpest and most sensitive of the three. Here is the same region, converted into luminance by the above formula and processed as luminance. I don't think that the sharpness in green helped much for overall sharpness. This is because of the way sensors work and because of how we process our data.
  17. Not sure which part you don't understand, so I'll do a quick intro on all three possibilities.

     1. Sampling rate

     Telescope + camera is a projection device: angle in the sky is mapped onto linear distance on the surface of the sensor. The scale factor related to this mapping can be expressed in arc seconds (a measure of angle - 1/60th of an arc minute, which is itself 1/60th of a degree) per pixel. It can be thought of as "zoom", although technically it is not zoom (zoom is a ratio of angles, and here we convert angles to linear distance):

     sampling_rate ["/px] = 206.265 * pixel_size [µm] / focal_length [mm]

     The real formula should include the tangent trig function, but for simplicity we use the small angle approximation tan(x) = x, which holds for small angles (like the mentioned arc seconds) - the above formula is good enough. It is useful both in planetary and regular imaging.

     2. F/ratio

     The F/ratio of a telescope, sometimes also called the "speed" of the telescope, is the ratio of focal length to aperture. It is useful in these calculations because the sampling rate depends on focal length while the actual detail resolved depends on aperture size. It turns out that a given pixel size has a fixed F/ratio for what is called critical sampling.

     3. Telescope resolution / critical sampling

     The maximum detail that can be resolved by a telescope depends on aperture size. In fact, there is a relationship that defines the maximum spatial frequency of the signal at the focal plane, given as 1 / (lambda * f_ratio) - where lambda is the wavelength of light and f_ratio is the F/ratio of the telescope. Since the Nyquist-Shannon sampling theorem says that we should sample a band limited signal at twice its highest frequency (regardless of what you might read elsewhere - this applies to the 2d case as well), we can derive the expression for the F/ratio needed for critical sampling. Critical sampling is the resolution at which we can capture all theoretically possible detail for a telescope of given aperture.

     If 1 / (lambda * f_ratio) is the highest frequency component, then the associated wavelength (period) is lambda * f_ratio. We need to sample twice per that wavelength (twice the highest frequency), so our pixel needs to be half of it:

     lambda * f_ratio = 2 * pixel_size
     f_ratio = 2 * pixel_size / lambda

     lambda is usually taken to be 500nm for broadband / RGB type of imaging, although you can put in the actual wavelength if you are using a narrowband filter like Ha to stabilize seeing (in that case lambda would be 656nm).

     f_ratio = 2 * 3.75µm / 0.5µm = 15

     So the F/ratio for critical sampling with 3.75µm pixels is F/15.
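     A minimal sketch of both formulas, just restating the arithmetic above (function names are mine):

```python
def sampling_rate(pixel_um: float, focal_mm: float) -> float:
    """Sampling rate in arc seconds per pixel (small-angle approximation)."""
    return 206.265 * pixel_um / focal_mm

def critical_f_ratio(pixel_um: float, wavelength_um: float = 0.5) -> float:
    """F/ratio that critically samples detail at the given wavelength."""
    return 2 * pixel_um / wavelength_um

print(critical_f_ratio(3.75))       # 15.0 -> F/15, matching the example above
print(sampling_rate(3.75, 1000.0))  # ~0.77 "/px at 1000 mm focal length
```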
  18. You are quite right - that is one of the advantages of the 294MM. In fact, I would advocate shooting at 2.31µm and then binning later in software, as that gives you more options - like x3 bin if 4.63µm pixels are too small, going for 6.93µm instead.
  19. When you take an X-ray of a patient, you judge whether there is a fracture by eye, I agree - but that does not mean that you have X-ray vision and can see that same thing without the X-ray machine. The camera is sensitive enough to show CA that can't be seen by the naked eye at the eyepiece. Whether we look at the image or use another metric, like the FWHM difference between channels, to determine chromatic blur / defocus is rather irrelevant to the level of that blur.
  20. I think we are talking about two different things here. One is deriving luminance from RGB when we want to make a color image appear grey scale - there, yes, the rough formula that you quoted is right. However, the parfocality of RGB filters, the color correction of a refractor telescope and their relation to luminance data from the telescope is another matter altogether. The above formula can't be applied to it.
  21. Composition of brightness / luminance. There is a discrepancy between how humans perceive the brightness of different wavelengths and how sensitive the sensor is. Human brightness perception is closely modeled by the Y component of the XYZ color space. The numbers that you have mentioned come from the inverse transform from linear sRGB data to the XYZ color space. It can be seen that R contributes with ~0.21, G with ~0.72 and B with ~0.07 - which is what you very roughly quoted.

     But this is for sRGB data and human brightness perception. If you look at the luminosity curve, you'll see that our perception of brightness is more than x100 higher in green around 550nm than in the blue / violet part of the spectrum at 400nm - or in other words, a 400nm source needs to have about x100 more energy than a 550nm one to be perceived as equally bright. Another way to put it: when we observe two equal energy sources, one at 550nm and another at 400nm, we will see the latter as being only 1% of the brightness of the former.

     Now let's look at the QE of a camera, for example the ASI1600. Here we see that the situation is quite different - at 400nm the sensor has about half the sensitivity it has at 550nm. The camera will see equal energy sources at 400nm and 550nm differ by about half (a bit more than that, as 400nm photons have higher energy, so there are a bit fewer of them in an equal energy source). This is the reason why a telescope that is completely color free visually can still suffer from CA photographically.
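     A one-line sketch of that Y computation on linear data, using the standard sRGB-to-XYZ coefficients mentioned above:

```python
import numpy as np

# Y (relative luminance) from linear sRGB; weights ~0.21 / 0.72 / 0.07.
SRGB_TO_Y = np.array([0.2126, 0.7152, 0.0722])

def luminance(rgb_linear: np.ndarray) -> np.ndarray:
    """rgb_linear: shape (..., 3), linear (not gamma-encoded) values."""
    return rgb_linear @ SRGB_TO_Y

print(luminance(np.array([0.5, 0.5, 0.5])))  # 0.5 - neutral grey keeps its level
```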
  22. It is very similar. Binning is one way of reducing size, with a very special property - it improves SNR by a known factor and does not introduce pixel to pixel correlation. Other types of size reduction also improve SNR - but by a smaller factor, not as predictably as binning, and they do things that introduce pixel correlation (more of a math thing that should not particularly concern you). For full effect, binning should be applied to linear data; see the sketch below.

     I just wanted to show that if you reduce the size (and lower the sampling rate), you'll get nicer looking stars. Something similar needs to be done with fast lenses - they are just not sharp enough to be used with smaller pixels - but if you resize the image down (which has the same effect as binning with regard to sampling / pixel size - you effectively increase pixel size by doing that), things look nicer.
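     A minimal sketch of 2x2 software binning on linear data - averaging each 2x2 block cuts random noise by a factor of 2, which is the predictable SNR gain mentioned above:

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Average each 2x2 block (trims odd edges)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

flat = np.random.normal(100.0, 10.0, (1024, 1024))  # synthetic flat frame + noise
print(flat.std(), bin2x2(flat).std())               # ~10 vs ~5
```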
  23. Out of interest, could you point to an image from a ~90mm F/4-F/5 scope that has nice round small stars to the edge of an APS-C sensor? It seems that many people are now interested in such setups, given that we have a decent selection of APS-C and FF sensors available and refractors are also affordable - but somehow I feel that it is a tall order to have a small frac (a lot of field curvature) well corrected over that sensor size. Then again, I don't have much experience on that topic - maybe it is indeed a very viable option?
  24. Well, we need to be careful with such statements. Yes, perhaps sharp stars in the corners of a full frame sensor - if that full frame sensor uses 9µm or larger pixels. Again - this is the corner of a full frame, but reduced x3 in size - I'd say acceptable stars?