Everything posted by vlaiv

  1. Calibration files are your friends. If a dust speck causes a circular shadow, what kind of shadow could a thin piece of hair or similar produce? Indeed - it will be like a very elongated circle - or rather a line with circular endings - pretty much like the feature you are asking about. It might be that it is due to that, or it might be a feature produced by bias signal - and removed with bias or darks. I have heard that some DSLR cameras have issues with AF pixels - they create a line. Stacking can make this line appear in different places and result in a thicker feature than a single row of pixels. Whatever the actual cause is - using calibration frames will rule out many possible sources of this and either fix it or narrow down the possible suspects.
  2. Yes, I believe so. I polar align using EQMod utility, which pretty much does the same as handset and a bit more - it turns reticle so that 0h (or even any other selected major marked clock - 3, 6, or 9) is at correct position for Polaris at that given time and long/lat coordinates. Only thing one needs to do is to make sure that 0h is at "noon" - or rather at top most point when starting procedure - this is because polar scope can be installed at any given orientation and 0h is not necessarily at "noon" when mount is at home/park position. This is fairly easy to do - one places Polaris at dead center and only using altitude adjustment then moves Polaris at the top - after that releasing clutches and moving RA axis so that 0h comes to Polaris. That way 0h is at "noon" if mount head was level of course.
  3. Looking forward to hearing about first hand experience on that.
  4. Can I make a few observations? You need to be more careful when doing measurements - what is it that you are measuring in the first place? For example - taking a single dark frame and measuring its standard deviation is rather meaningless. A single dark frame contains: 1. bias signal, 2. bias noise, 3. dark signal, 4. dark noise. In order to really measure dark noise, you need to remove the other three components. Using just ordinary numbers is fine, but you really want to convert your measurements into electrons - that way you'll get comparable results that can be tested against other things that you know. For example, you say that the sigma of a single 200s dark is ~15. That does not tell us much, right? We know that the read noise of that camera is around 1.7e or better (if you used unity gain or higher) and we also know that dark current at, let's say, -20C is ~0.0062e/px/s. In a 200s exposure one can expect 1.24e of dark current signal and consequently the square root of that as dark noise - ~1.1136e, which is interestingly enough less than the read noise of that camera. Neither of the two numbers is close to the 15 that you had. The simplest way to actually measure dark current noise is to set the camera at unity gain - so you don't have to convert between ADUs and electrons (or rather just divide your subs by 16 because of the 12-bit format), take two dark subs and subtract one from the other. Measure the standard deviation, divide by the square root of two (you stacked two subs when you subtracted them) and that will give you dark noise and read noise combined. Do the same for two bias subs to get read noise. Remove read noise from the above dark+read noise and you'll get pure dark current noise that you can use to estimate dark current if you want. That way, you'll get two numbers that you can compare to the published figures of 1.7e of read noise and 0.0062e/px/s for dark current. Just to show you the difference, here is what the standard deviation of a single dark sub for the ASI1600 gives and what you get as combined dark + read noise from two dark subs. First measurement is of a single 60s / gain 139 / offset 64 sub. StdDev is 2.275 and the mean value is ~62.8. The mean value is clearly offset from 0, so it's not pure noise - it contains some signal as well (bias + dark current). Second measurement is the difference between two subs. The mean is now ~0, which is what you would expect when there is no signal but pure noise. We now have to divide the stddev value by sqrt(2) to get the actual value, and it is: ~2.81 / sqrt(2) = ~1.987e. We see that bias and dark signal were not uniform and had some variance to them, as they increased the stddev from 1.987e to 2.275e - we need to be careful to measure the proper thing. Now, I don't have any bias subs on me, so I'm just going to assume 1.7e of read noise and try to calculate dark current from that and the 1.987e combined noise to see if we get results comparable to published ones. dark_noise = sqrt(1.987^2 - 1.7^2) = ~1.0287e. So we get that our dark current noise is 1.0287e and the associated dark current will be that number squared, so ~1.058169e. This is for a 60s sub at -20C; for a single second that is 1.058169 / 60 = ~0.01763615e/px/s. That is almost x3 higher than the published value, but I suspect the difference is due to me taking 1.7e as read noise, when in fact it is higher than that - around 1.8 or 1.9 for unity gain - usually ZWO published read noise values are rather optimistic.
If we repeat the calculation with 1.9e as read noise, we get 0.00563615e/px/s, which is much closer to the published 0.0062 and even a bit lower, so the real value of read noise is probably a bit less than 1.9e - but like I said - best to measure it rather than assume it from graphs. Graph/published values are just a good reference for comparison and should be taken with a grain of salt.
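Here is a minimal numpy/astropy sketch of that measurement, if it helps - the file names are just placeholders, and it assumes unity gain with 12-bit data stored in 16-bit FITS files (hence the divide by 16):

    import numpy as np
    from astropy.io import fits

    # placeholder file names - two dark subs and two bias subs at the same settings
    dark1 = fits.getdata("dark_1.fits").astype(np.float64) / 16.0  # 12-bit ADU -> e at unity gain
    dark2 = fits.getdata("dark_2.fits").astype(np.float64) / 16.0
    bias1 = fits.getdata("bias_1.fits").astype(np.float64) / 16.0
    bias2 = fits.getdata("bias_2.fits").astype(np.float64) / 16.0

    # subtracting two subs removes all signal (bias + dark) and leaves pure noise;
    # the difference of two equal-noise frames has sqrt(2) times the noise of one
    dark_plus_read = np.std(dark1 - dark2) / np.sqrt(2)   # dark noise + read noise
    read_noise = np.std(bias1 - bias2) / np.sqrt(2)       # read noise only

    # noises add in quadrature, so remove read noise to get pure dark current noise,
    # then square it to get the dark current signal for the exposure length
    dark_noise = np.sqrt(dark_plus_read**2 - read_noise**2)
    exposure = 60.0                                        # seconds, match your dark subs
    dark_current = dark_noise**2 / exposure                # e/px/s

    print(f"read noise: {read_noise:.3f} e")
    print(f"dark current: {dark_current:.5f} e/px/s")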
  5. I have to say that I like your processing the best. Do you use calibration frames? You should really calibrate your image if you have not done so (and it looks like you did not). Other than that - I do object to the shape of the stars in the corners and the overall appearance of the stars, and I don't think that the C6 is a very good scope for what you are trying to achieve. Have you considered using another, similar scope? I can list at least two different scopes that will be better yet remarkably similar to the C6. This one: https://www.firstlightoptics.com/ioptron-telescopes/ioptron-photron-6-ritchey-chretien-telescope.html or this one: https://www.meade.com/telescopes/acf-cassegrain/lx85-series-6-acftm-ota-only.html
  6. I did not know this, but I think it's my new favorite feature. You just fire up the mount, select either planet from the solar system, manually point the scope and select Point and track, right?
  7. That is a good question. I know that there would be no diffraction spikes, but I don't know what the total impact on the image would be. We can run tests to see. Let me do that and I'll discuss the results. So the Airy disk would actually be smaller - but the rings would be quite distorted and there would be a split of the rings into 4 zones. This is an image of the Airy disks without stretching - I'm now going to stretch the images to show the impact in faint areas: While the central disk seems to be narrower - I think that overall blur will increase and you'll decrease the resolution / resolving power of that telescope somewhat. Probably the best way to assess what is going to happen with resolution is to examine the MTF of both, so here is that: The MTF is rather strange looking and it depends on how the frequencies are "oriented" - in other words the image of a star is not round as we have seen but has some squareness to it - the diffraction rings are broken in certain places and it might visually look more like a little square than a circle under very high magnification with this mask, but examining the MTF reveals something interesting. The view will be dull and washed out, but you won't lose much of the high frequency detail - you'll still be able to split double stars - possibly the only problem will be with orienting the telescope - resolution will be better in certain orientations than in others - if you have two squares they can either be turned towards each other with their sides - then you can place them closer - or with their corners - then the corners will overlap if you put them that close. Again I think this sort of mask is worth a try since we can't really tell how strong these funny looking diffraction rings will be - from the first set of images, they don't really look that strong and might not detract too much from the view. From the MTF diagram, I can see that medium and lower frequencies will be attenuated quite a bit - which means very low contrast on lunar / planetary, but the higher frequencies behave the same (both graphs behave almost the same from 500 onwards - to the right - those are the highest frequencies of the image).
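For anyone wanting to reproduce the MTF comparison, here is the bare-bones numpy version of it - the "masked" aperture below is just a placeholder central obstruction, substitute whatever mask shape is actually being tested:

    import numpy as np

    N = 1024
    y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
    r = np.hypot(x, y)

    def mtf(aperture):
        psf = np.abs(np.fft.fft2(aperture))**2   # intensity PSF (star image)
        m = np.abs(np.fft.fft2(psf))             # optical transfer function magnitude
        return m / m[0, 0]                       # normalize so that DC = 1

    clear = (r < 100).astype(float)              # plain circular aperture
    masked = clear * (r > 35)                    # placeholder mask: central obstruction

    # first row is a horizontal frequency cut, lowest frequencies first
    print(mtf(clear)[0, :10])
    print(mtf(masked)[0, :10])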
  8. Just a couple of thoughts that you may find helpful: 1. M101 is a rather faint object and this is only ~5 hours of data. 2. Mak127 has a rather long focal length and paired with your camera - you are over sampling. This means that both dark noise (non cooled camera) and read noise will have more impact. It is good that you decided to go with longer exposures of 5 minutes, but do consider binning your data as a processing step. 3. Give Gimp 2.10 a try over Paint.net and others - it is very good software for image processing once stacking is done, and it's free. 4. Don't think in terms of focal length and field of view when thinking about high resolution imaging. Think in terms of what is realistically the best resolution you can achieve, and then what is the largest aperture with which you can achieve it, given the limitations of your camera and mount and all of that (do consider a different camera as well - sometimes investing in the camera makes the most difference) - and think of binning, even in software, as part of the resolution / sampling rate equation.
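On the binning point above - software binning after stacking is only a few lines, for example a plain 2x2 average in numpy (a minimal sketch):

    import numpy as np

    def bin2x2(img):
        h, w = img.shape
        h, w = h - h % 2, w - w % 2                      # crop to even dimensions
        return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    # binned = bin2x2(stacked_image)  # halves pixel count per axis, improves per-pixel SNR ~x2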
  9. Exactly. Both masks will have rounded edges which is important to spread the light and prevent spikes from forming, but this one that only covers the stalk blocks much less light.
  10. This is really not an easy thing to explain as it involves quite a lot of math and physics, but let me try the simplest possible explanation. It is about perpendicularity. When you have a straight line - at each single point light gets diffracted perpendicular to that line. A long straight line just means a lot of light gets diffracted in just one direction - perpendicular to that line. All of that light gets "summed" (because the telescope focuses light) so it all ends up in one straight line of light. This is what you see - your stalk produces a single line of diffraction. Curved shapes, and circles in particular, are interesting because at each point along the circle light is also diffracted, but the perpendicular direction at each point along the circle is at a slightly different angle - it is no longer all in the same direction. This means that when you sum the light it no longer creates a single line but is spread around in a "halo". If you think about it - the telescope aperture is an edge too and it also diffracts light - it spreads it in a "halo" and when that "halo" gets focused it produces the Airy pattern, because the aperture is circular and not straight. The secondary obstruction does the same: it also produces a halo and this halo gets focused and, when combined with the aperture edge, produces a slightly different Airy pattern - it moves a bit of light from the central part into the rings. We want to remove the diffraction of light from the stalk and spread it around, so we take a bunch of circles and cover the stalk with those. Now these little circles are doing the same as the central obstruction, and since the stalk is no longer in the light path - it's not doing its own diffracting of the light and the prominent spike is no longer there. That would be the explanation. Confirmation that it works can be done in two ways - one is mathematical analysis and the other is experiment - actually doing it on your scope and looking at a bright star. I can do the math stuff for you (don't worry, it won't be actual numbers - but rather images) but you'll have to do the hands on approach if you are interested to try it out. I'm now going to show you 4 different telescope apertures and the diffraction effects that they produce. This will be done via simulation (Fourier optics approach). First of course is a clear aperture: Left is the image of the aperture and right is the image of a star - the Airy pattern is observed of course and no diffraction spikes ... Here we see what a newtonian scope with central obstruction and 4 spider vanes looks like. The Airy disk is now a bit different as the light distribution is different and we now have 4 diffraction spikes. This is the configuration with a single stalk - there is only a horizontal line and the diffraction / spikes are actually smaller than in the 4 vane configuration because the length of edge that is doing the diffraction is shorter (single stalk vs 4 vanes of the same length). And here we see what happens when we add a mask of a bunch of small circles to cover that stalk. We again have no prominent spike (I really stretched that image to show what is there). We can sort of see that we get a bunch of small spikes in every direction and that is the point - there is no single spike but rather an "infinite" number of spikes - each one at a slightly different angle and all of them so faint that you won't see them. Just make sure that you do your circles as round as possible.
If you don't get them round enough, you'll get something like this: Instead of a nice faint halo (this is a very long integration with the camera on a bright star and it captured every little detail) you'll get distinct spikes. This happens if your circle is not perfectly round but rather consists of little line segments - each little line segment will make its own small spike. The above image is there because I used a PVC aperture mask and cut the hole in it with a saw, so it was not very round, and I did not bother too much with a file / sand paper to make it perfectly round. Just to be clear - the above image is not of this array-of-circles aperture mask, but of a regular aperture mask I made for my achromatic refractor; it just illustrates my point about circles needing to be round.
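If anyone wants to play with this themselves, the simulation is only a few lines of numpy - build an aperture map, take its Fourier transform, and the squared magnitude is the star image; stretch it hard to see the faint spikes (sizes below are arbitrary, just a sketch):

    import numpy as np

    N = 2048
    y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
    r = np.hypot(x, y)

    aperture = ((r < 200) & (r > 50)).astype(float)   # clear aperture + central obstruction
    aperture[(np.abs(x) < 4) & (y > 0)] = 0           # single straight stalk holding the secondary

    star = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2   # image of a star (PSF)
    stretched = np.log1p(star / star.max() * 1e6)              # hard stretch to show faint spikes

Replace the stalk line with a row of circular discs (or remove it) and the single prominent spike in the resulting star image disappears.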
  11. I have Mak102 and only thing that I can say so far is that I have not been really paying attention to this above what I usually do - take the scope and leave it for about 20 minutes to half an hour while I'm getting ready to observe or image. I could sort of extrapolate that and say that you'll be fine with 30 to 40 minutes of cool down time as a worst case scenario, but someone who actually has this scope needs to confirm that.
  12. What's the budget? £500? Very simple solution would be: https://www.firstlightoptics.com/sky-watcher-az-gti-wifi/sky-watcher-skymax-127-az-gti.html
  13. You can always try to do simple DIY option like barn door tracker or EQ platform. These won't be precise so you'll have to limit your exposure length to something short - like 5s or so and take plenty of them. Good thing is that ST102 can be used to do DSO photography. It won't be easy thing to do since it is fast refractor with rather poor focuser, but there are youtube videos on how to get the most out of the focuser (tighten it up and such). Here is example DSO image taken with ST102: Trick with removing chromatic aberration is to use Yellow #8 wratten filter and aperture mask. I think this was done with 66mm mask. Yellow filter obviously affects color balance but you can fix that in post processing. Just accept that you'll have to do wide field shots that won't be particularly sharp due to DIY tracking and you'll be ok. I think it is not bad way to get into AP - you'll learn the technique and processing which is just as important as good tracking and good scope.
  14. Just forget imaging planets with this scope. You can remove almost all chromatic aberration by using aperture mask. You can try this for visual - just observe with 2" opening on your scope cover removed and you'll see nice planets. However you'll be limited to less than x100 magnification because small aperture cannot capture details that large aperture can. This also means that any sensible image of planets will be very tiny. Planets are imaged by taking enormous number of very short exposures and for that you really need tracking mount. Don't bother with x4 barlow either - it will be overkill for that scope as well. You can however shoot nice images of Moon with that scope and I think that is what you should concentrate on until you get suitable scope. Here is a moon shot with ST102 scope and DSLR type camera: There is a bit of chromatic aberration visible but one can live with that - yellow fringing on the moon surface on the right and purple fringing on the outside. If you want to do anything serious with planets you'll want different scope - it needs to be tracked and it needs to be sharp - it does not need to be expensive. Look what Mak102 can do on moon and planets: Do click on the image to enlarge it - view it at 100% it is worth it. Small scope will still capture rather small planets like these: There is no way around that - if you want larger images - you'll need larger scope.
  15. Not to mention that ST102 is F/5 achromat that has enormous chromatic aberration - you'll shoot just a blurry mess of color rather than a planet.
  16. Unfortunately, this is a DIY thing as far as I know. Actually scratch that - according to this thread you can apparently order those online, like from here: https://www.fpi-protostar.com/crvmnts.htm If you want to DIY those, you'll need a thin piece of metal that you'll bend into shape (smoothly) - a 180 degree one works well. Things to remember when choosing a spider support: you need at least a 180 degree arc - less than that and you'll have only partial diffraction - only in certain directions and not evenly spread. The length of the support is related to the strength of diffraction - keep the length at a minimum. Make sure all "degrees" are covered equally - having for example 270 degrees, which will be 180 + 90 degrees - those extra 90 degrees will bias the diffraction to one side. There is a free piece of software that will calculate the resulting diffraction from the central obstruction (and support) - let me see if I can find that for you. Unfortunately I can't seem to find it, but @sharkmelley used it back in 2013 to explain some phenomena, so he could possibly provide a link to it? (I found a link to the software in the meantime): https://www.cloudynights.com/topic/547413-where-to-find-maskulator/ I just thought of a very quick way to remove the diffraction spike on the Heritage 130p - it is a very easy thing to do and in principle it is "external" to the scope - no modifications needed, but you will need to make an aperture mask. Take cardboard and make a mask that you will place over the stalk that holds the secondary mirror. Be sure that you have a nice clean cut. So it is just a series of circles (can be smaller or bigger) that you tape together to cover the stalk. Just be sure you secure them in place so they don't fall onto your primary mirror. Black cardboard works the best of course. This will remove the diffraction spike.
  17. I agree that color perception will be different under different circumstances and that is why we have standards - like sRGB. Images are supposed to be encoded in the sRGB standard when viewed on a computer monitor because that will render the color most closely resembling the object as if it were viewed under the conditions defined by the sRGB standard. Those parameters correspond to the viewing conditions most people use a computer monitor in: a dimly lit room with neutral background (white wall within a dimly lit room), etc ... This is irrespective of perception and it is related to the spectrum of recorded light. You are partially right that one needs to examine the spectral content of the light - but luckily our vision is based on a trichromatic system and we don't need a full spectrum in order to reproduce color. We just need to provide three values in a particular color space. For the CIE XYZ space, here are the matching functions: If you compare that to your camera response, you'll notice that these functions are different. Even if we take the CIE RGB space (not to be confused with the sRGB space): Still not the same. The first step would be to create a color transform matrix from camera space to some absolute color space like XYZ. After that we have standardized transforms to common color spaces like sRGB. Depends on what your image represents. Does it represent an image that an observer would see at the eyepiece of a telescope somewhere on the earth (thus looking thru earth's atmosphere) or does it represent the image that an observer would see when looking thru a telescope in orbit - or floating in space some distance away from Jupiter? Both cases can be represented in such a way that an "observer" sitting in their dimly lit living room and looking at a properly calibrated computer screen would see the same color as their respective counterpart. One just needs to apply the proper set of transforms for their camera and wanted outcome (in the case of a telescope on earth - simply camera -> XYZ -> sRGB linear -> sRGB gamma encoded, while in the case of the observer in outer space: camera -> remove atmosphere influence (blue scatter attenuation) -> XYZ -> sRGB linear -> sRGB gamma encoded). Most people do the following to their image: raw_camera_values -> null transform to sRGB -> color balance using some automatic algorithm. No proper encoding of gamma in sRGB, treating camera RGB values as if they were linear sRGB values, and doing automatic color balance based on an algorithm (usually grey world or similar, which just assumes that all RGB components should be equally represented in a regular image and that on average the world is "grey" - a very poor assumption when you have a subject like a planet that can have a distinct dominating color). Now we have a huge problem - the problem of what people are used to. If you do a yellowish Jupiter and explain to people that this is what Jupiter looks like thru the eyepiece - they will say that it "does not look good" because it is different from most Jupiter images that they've seen in their life - all being wrong because all of them are done with the same wrong processing.
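Here is a small Python sketch of that transform chain. Note that the camera->XYZ matrix below is a made-up placeholder - a real one has to be derived for the particular camera - while the XYZ->sRGB matrix and the gamma function are the standard sRGB (D65) ones:

    import numpy as np

    # camera -> XYZ matrix: placeholder values only, derive the real one for your camera
    CAM_TO_XYZ = np.array([[0.6, 0.3, 0.1],
                           [0.2, 0.7, 0.1],
                           [0.1, 0.1, 0.8]])

    # standard XYZ -> linear sRGB matrix (D65 white point)
    XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                            [-0.9689,  1.8758,  0.0415],
                            [ 0.0557, -0.2040,  1.0570]])

    def srgb_gamma(c):
        # sRGB transfer function (gamma encoding)
        c = np.clip(c, 0.0, 1.0)
        return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

    def camera_to_srgb(rgb):
        # rgb: array of shape (..., 3) with linear camera values scaled to 0..1
        xyz = rgb @ CAM_TO_XYZ.T
        linear = xyz @ XYZ_TO_SRGB.T
        return srgb_gamma(linear)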
  18. Never fully understood this stance. Color is what it is - take an object and image of that object and look at them and if color is the same - it is proper color rendition, if it is not the same - color is off, simple as that. Fact that we can't take Jupiter and place it next to our image of Jupiter does not change the fact that Jupiter has certain color and we should be able to capture it properly. Take two images of Jupiter and if you process color properly - they should look the same color wise - have same hue, same saturation, etc ... We don't even have issue of lighting here - planets are always illuminated with same light source and we know spectrum of that light source.
  19. Just realized I did not answer the question, but since I don't do double stars much, I can give what I think would be probably best beginner double star instrument taking all things into account (budget also). 6" F/8 newtonian with thin curved spider and 20% secondary obstruction would be my choice.
  20. Not all reflectors have diffraction spikes - even newtonians. Diffraction spikes are feature of straight spider vanes. Curved secondary support when done properly does not produce spikes and many people that don't like spikes for planetary or double star work modify their telescope with curved spider vanes.
  21. Ok, this is going to need a bit of explaining again, but this will be the easiest bit by far so bear with me. Here we are dealing with a "digital" phenomenon. Both the image and the display device are digital - the image being a collection of numbers or pixels, and the display device having a certain number of display dots or pixels. Both have a certain resolution - here the term resolution is used to denote the number of dots / pixel count or "megapixels". The easiest way to display an image on the screen is to assign each pixel of the image (a number value) to the corresponding pixel of the display device (a light intensity corresponding to that number value for each display dot). This is the so called 100% zoom level or 1:1. The first of these two gives a "scaling factor" (we will talk about that in more detail shortly) while the other just says the same thing with a ratio of two numbers - 1 divided by 1 gives 1, and that is a whole value or 100%. The problem with this approach is that not everyone has the same resolution display, nor do images all come in the same "standard" resolution. Images vary by size (pixel count) and so do display devices. We now have 4K computer screens while old mobile phones have resolutions like 800x600. That is quite a discrepancy between them. For that reason we scale the image - we don't change it - it remains the same, but we rather change the mapping between it and the display screen. Most notably there is the "fit to screen" scale mode. It will fit the whole image - however large - to the pixels that are available on the screen. Let's say you have a 1920 x 1080 computer monitor and you have a 5000x4000 image. The image will be displayed at ~27% of its original size in order to fit this screen. Or in other words it will be displayed at a 1080 : 4000 ratio (it will use the 1080 pixels available to the screen to display 4000 pixels of the image). This means that the image is scaled down for display - again the image is not changed - it is only displayed differently. Numbers in the PI window title like 2:1 or 1:5 show the current display zoom in the same way. The first one, 2:1, means that the image is zoomed in x2 since 2 screen pixels are used to display a single image pixel. 1:5 means that the image is displayed at 20% of its original size since one screen pixel is used to display 5 image pixels (it won't display 5 pixels at the same time - the software and computer choose which one of the 5 pixels will be shown, but to our eye it looks as it should). Now we understand display scaling. Two of these modes are very important - 1:1 and Fit to screen. Fit to screen shows the whole image at once regardless of whether the image is larger or smaller than the screen - it will zoom in/out just the right amount to be able to display all of it. It is very useful for viewing the image as a whole, to see the composition in the image and the relation of the target to the FOV and such. 1:1 is also very important as it shows the image at the "best level of detail" that the current display device can show. If you look at your screen with a single pixel turned on - you should be able to see it. Computer monitors are made so that you are seeing the finest detail in the image when sitting at a normal distance away from them. We should make our images the same - we should optimally sample the image, and when we display such an image in 1:1 mode - it should look good and sharp. You can always recognize if an image is over sampled when looking at it at 1:1 zoom level. Are the stars small and tight / pinpoint like, or are they "balls of light"? Now the final words related to this. Software does not see pixels as little squares or dots of light.
Software considers pixels to be values with coordinates. In some sense, software always sees the image at 1:1 or 100% scale. When you zoom in or out of an image when looking at it in PI - you are not changing the image itself. That will have no impact on how Starnet++ sees it, for example. If you rescale your image in software then you are actually changing it, and the rescaled image will appear differently to Starnet++. Rescaling changes the pixel count of the image, while zooming in and out does not do anything. Drizzle integration rescales your image - makes it have more pixels (x2 or x3 - depending on your settings) and this is why it looks larger - because it is larger; however the detail in the image does not change (it is supposed to change - that is why the algorithm has been developed in the first place - but even if it does change and resolution is restored - that happens under very specific circumstances - like the original image being under sampled and you dithered and all of that). It is the same as if you took your image and upscaled it by a factor of x2. Stars now have more pixels across and when we view that image at 1:1 or 100% (as software sees it) - stars look bigger than in the original image. Starnet++ has probably been trained on properly sampled images with tight stars and that is why it has problems when stars have many pixels across - it just can't tell a star from a nebula feature that has many pixels across (it expects stars to be just a few pixels across). Hope all of this makes sense and helps?
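As a small aside, the "fit to screen" factor from the example above is just the more constraining of the two axis ratios (a tiny sketch):

    def fit_to_screen_scale(img_w, img_h, screen_w, screen_h):
        # use the more constraining axis so the whole image fits on the screen
        return min(screen_w / img_w, screen_h / img_h)

    print(fit_to_screen_scale(5000, 4000, 1920, 1080))   # 0.27, i.e. ~27% or roughly 1:4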
  22. I just realized that there is probably a more down to earth way to explain things. For example, the first part relates to how different things that blur the image add up. Here is an intuitive way to understand it (and to try it out) - take any image and do a gaussian blur with sigma 1.4 for example. Do another round of gaussian blur with sigma 0.8. The resulting image will be the same as if you did just one round of gaussian blur with a sigma of not 2.2 (1.4 + 0.8) but rather ~1.6, which is sqrt(1.4^2 + 0.8^2). So blurs add in quadrature, and the three main blur types are seeing blur, guiding blur and aperture blur. In any case, it is a bit complicated stuff, so the main point is - you can't really use a lower sampling rate than about 1.7"/px when using 80mm of aperture, and in most cases you should go for 2"/px - that is if you don't want to over sample. I honestly don't know. I understand the drizzle algorithm. I have my doubts whether it works at all and how well it works in the general case - but that needs further investigation. I have an improvement on the original algorithm that should work better, if the original algorithm works in the first place (yes, I know, it's funny to improve on something that you don't think works in the first place). However, I don't understand how drizzle is implemented in pixinsight - so I can't comment on that one. I think that the DSS implementation is straightforward, but interestingly enough - the original algorithm calls for 2 parameters, not one, so I don't know how ticking x2 translates to two parameters. The original algorithm asks for the resulting sampling rate and a pixel reduction factor. x2 is directly related to the resulting sampling rate - it will enlarge the image x2, hence it increases the sampling rate by a factor of x2. However, you don't need to reduce the original pixels by a factor of x2 - you can reduce them more or less, as per the original algorithm, and I have no idea what selecting x2 (or x3 in DSS) does with this parameter.
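If you want to verify the "blurs add in quadrature" bit quickly, here is a small scipy check on a point source:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    img = np.zeros((101, 101))
    img[50, 50] = 1.0                                    # a single point source

    two_passes = gaussian_filter(gaussian_filter(img, 1.4), 0.8)
    one_pass = gaussian_filter(img, np.sqrt(1.4**2 + 0.8**2))   # sigma ~1.61, not 2.2

    print(np.max(np.abs(two_passes - one_pass)))         # tiny difference - same blur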
  23. They look the same - this means no improvement from drizzle, but they are not the same with respect to starnet++ Left image is zoomed in x2 while right one is zoomed in x4. As far as Starnet++ is concerned - stars are twice as large in left image than in right image.
  24. Ok, I'll be brief and to the point since this is rather technical, but here it goes: - we can approximate the resulting star FWHM from the aperture size and corresponding Airy disk, the seeing error and the guiding error. You need to convert everything to the "sigma" of the corresponding Gaussian approximation. Guiding error is already given as that. Seeing is converted by dividing by ~2.35482 (that is 2*sqrt(2*ln(2)) - the conversion factor between FWHM and sigma for a Gaussian) and the Airy disk is converted to sigma by a conversion factor of 0.42 / 2.44 - this is for the Airy disk diameter. You take these three and calculate the resulting sigma as the square root of the sum of squares. Multiply by 2.35482 to convert back to FWHM. - As for the sampling rate, it is about the Nyquist sampling theorem and the Gaussian approximation of the star PSF with a certain sigma/FWHM. We can get that with the following approximation: the Fourier transform of a Gaussian is a Gaussian. We take the frequency at which the Fourier transform falls below an arbitrary threshold - for example 10% - and see what sampling rate corresponds to that frequency (twice the max frequency). If we do the math, it turns out that FWHM/1.6 is a good approximation, as frequencies beyond that are attenuated more than 90% and hence can be neglected.
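Written out as a small Python sketch (550nm assumed for the Airy disk), this gives ~2.71" to ~4.4" FWHM for an 80mm scope with 0.5" RMS guiding in 2"-4" seeing - i.e. a sampling rate of roughly 1.7"/px to 2.75"/px:

    import numpy as np

    FWHM_TO_SIGMA = 2 * np.sqrt(2 * np.log(2))           # ~2.35482

    def expected_fwhm(aperture_mm, seeing_fwhm, guide_rms):
        # all angles in arcseconds, 550nm assumed for the Airy disk diameter
        airy_diameter = 2.44 * 550e-9 / (aperture_mm / 1000) * 206265
        airy_sigma = airy_diameter * 0.42 / 2.44
        seeing_sigma = seeing_fwhm / FWHM_TO_SIGMA
        total_sigma = np.sqrt(airy_sigma**2 + seeing_sigma**2 + guide_rms**2)
        return total_sigma * FWHM_TO_SIGMA

    for seeing in (2.0, 4.0):
        fwhm = expected_fwhm(80, seeing, 0.5)
        print(f'seeing {seeing}": FWHM ~{fwhm:.2f}", sample at ~{fwhm / 1.6:.2f}"/px')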
  25. Excellent! I also have an Heq5 that I tuned and belt modded myself and indeed it runs at about 0.5" RMS when the sky plays along (had it once at 0.38" RMS!!). Astro tools gives general advice and I don't agree with some of their calculations and recommendations. One of those being sampling rate. Here is an example: In no universe will any amateur with a scope less than 8" benefit from a sampling rate of less than 1"/px, but they say 0.67"/px is fine. For 2-4" FWHM seeing and an 80mm scope, even with excellent guiding of 0.5" RMS, the star FWHM that you can expect will be 2.71" to 4.4". With those star FWHM, the sampling rate should be (FWHM/1.6) in the range of 1.7"/px to 2.75"/px and not 0.67"/px-2"/px. You are at the lower bound of that, so there is a chance that you are slightly oversampled rather than undersampled. The general rule is that for an 80mm scope you want to be at around 2"/px. If you wish, we can go into a bit more detail on sampling resolution and FWHM and all of that so I can explain the reasoning behind what I've just written, but it is a bit technical.