Everything posted by vlaiv

  1. @Adreneline That idea of yours is actually a much better way to assess tracking / periodic error, as it gives better resolution for that sort of purpose. Say we have a 50 cm aluminium bar that is as wide as a regular Vixen dovetail (about 44 mm) and say 10 mm high or something like that - it should not flex much, if at all. At 50 cm we have x12 less resolution than at 6 meters, and from the calculation above 1 arc second is 0.03 mm at 6 meters, so it will be 0.03 / 12 = 0.0025 mm at 50 cm. That is about 1/4 of what the instrument can read - but say we take a reading every second, so we have movement of 15 arc seconds, which is 0.0025 * 15 = 0.0375 mm - and that should easily be visible on the gauge. Since we have 10 mm of total travel, that gives about 266 s of measurement - not bad at all. (A quick check of these numbers is sketched below.)
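A minimal numeric sanity check of the above (the 0.5 m arm, 15"/s rate and 10 mm travel are from the post; the rest is plain small-angle arithmetic):

```python
import math

ARCSEC = math.pi / (180 * 3600)    # one arc second in radians

arm_mm = 500                       # 50 cm bar acting as the lever arm
per_arcsec = arm_mm * ARCSEC       # gauge deflection per arc second
print(f"1 arcsec -> {per_arcsec:.4f} mm")           # ~0.0024 mm

sidereal = 15                      # arc seconds per second of tracking
per_second = per_arcsec * sidereal
print(f"1 s of tracking -> {per_second:.4f} mm")    # ~0.036 mm

travel_mm = 10                     # total gauge travel
# ~275 s here; the post's 266 s comes from rounding to 0.0025 mm/arcsec
print(f"total run: {travel_mm / per_second:.0f} s")
```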
  2. That is also a neat idea. How did you rig everything up? Did you use some sort of lever, and if so - how long was it, and how did you guard against flex?
  3. Just occurred to me that I don't need to use less magnification - I can just project at a closer distance.
  4. And of course - it works! Tested with a green laser pointer and a SkyWatcher 50x8 finder (or whatever its magnification is - x7, x8 or x9, or somewhere in between).
  5. I was not sure if I should post this in the mounts section or here. It has to do with assessing mount performance - how well it tracks - without wasting clear skies or always doubting whether seeing is responsible for measured mount roughness (although with multi-star guiding this is now much less of an issue). I personally came up with it out of a need to test how smoothly 3D printed reduction gears work combined with a stepper motor - to get an idea of positional accuracy, backlash and any sort of periodic or non-periodic error in tracking.

The initial idea was to simply strap a laser pointer on top of the axis and monitor what the laser dot, projected on a white surface far away from the motor, is doing (maybe use millimeter grid paper, or record with a camera and analyze the footage). However, as you will soon see, this does not really fit the "in house" criteria. One degree of arc is roughly 1 meter at 57.3 meters of distance. This further means that one minute of arc is 16.667 mm at that distance and, of course, one arc second is 1/60th of that - which is ~0.3 mm. Not really something you can easily measure - at least when a light dot projected on paper is in play. Let alone the fact that "in house" distances need to be at least x10 smaller, so everything is scaled down x10 and movement of one arc second would be 0.03 mm - that is 30 microns of movement at 6 meters.

I played with all sorts of ideas in my head for how we could amplify everything. Maybe use mirrors to create a larger distance by bouncing the light ray several times off the mirrors - but every imperfection in the mirror surface would be amplified as well, so we would need optical flats of high quality and a way to align them properly - too complex for a simple "in house" device. If only there was a way to amplify light angles easily... Well, that was the question I asked myself just before the light bulb moment (sometimes it's worth just asking the right kind of question). We have, and often use (at least when weather allows), devices that are great at amplifying light angles.

So here is what I came up with (still need to test it, but I think it will work as I expect). Shine a laser through the front objective of a telescope with an eyepiece at the other end in straight-through configuration (or even use a prism if we want a 90 degree bend for some reason). The telescope should do what a telescope does - amplify the angle of the incoming collimated light ray (it needs to be focused at infinity, but we can easily tweak the focus to get the smallest dot on the projection screen - no need to "prefocus" on stars or anything like that). Depending on the focal length of the telescope and the eyepiece used, we can get significant angle amplification. Most of the action happens near the optical axis, so we don't need wide angle eyepieces - in fact we want as low distortion as possible. By changing eyepieces we change the magnification of the effect, so we can measure different behavior: for backlash and positional accuracy we can use x200, for example, to get down to arc second resolution, but for tracking we need arc minute resolution, as sidereal rate is ~15 arc seconds per second - so we might want enough screen to capture a few minutes of tracking, which would amount to say 200 s x 15"/s = 3000 arc seconds - so we need less magnification for that.

In any case - if we have 6 meters of distance to the projection screen and use x200, the angle of the laser beam won't be 1 arc second but 200 arc seconds instead, so the deflection won't be 0.03 mm but x200 larger - 6 mm - and that is easily measurable with millimeter grid paper. However, at that magnification we would need 3000 x 6 mm = 18 meters of screen for a few minutes of tracking - clearly not a good idea, but we can drop the magnification for that purpose to say x20 or even less, depending on what we have at our disposal (maybe even use a finder with x7 magnification for this). We can even create a setup that amplifies just x2 - x3 by combining two eyepieces - one would have the "telescope" role and the other would be a regular eyepiece. A 32 mm plossl and a 12 mm plossl (or a 17 mm one - I'm just listing ones I have on me) could give interesting combinations. In fact, if I pair a 9-27 zoom with the 32 mm plossl, I can get a range of magnifications for this. Anyway, all that is left is to try it out (I might just do that now, as I have the laser and finder on the desk with me). What do you think about the concept? (A quick deflection calculation is sketched below.)
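To make the deflection arithmetic concrete, a minimal sketch (the distances and magnifications are the example values from the post; small-angle approximation throughout):

```python
import math

ARCSEC = math.pi / (180 * 3600)    # one arc second in radians

def deflection_mm(angle_arcsec, magnification, distance_m):
    # small-angle approximation: deflection = distance * (amplified angle)
    return distance_m * 1000 * angle_arcsec * magnification * ARCSEC

print(deflection_mm(1, 200, 6))     # ~5.8 mm   - 1" of mount motion at x200, screen at 6 m
print(deflection_mm(3000, 200, 6))  # ~17500 mm - the "18 meters of screen" problem
print(deflection_mm(3000, 7, 6))    # ~610 mm   - same tracking run through a x7 finder
```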
  6. Dark filters are useful for remote setups and setups used by different people with different requirements for light exposures. Instead of creating every dark imaginable, or limiting exposure lengths to the set of pre-generated dark exposures, one can use a dark filter in the filter wheel. That way anyone using the remote telescope can take darks matching their particular exposure (and other settings). However, in amateur conditions it is much more sensible to take darks with the camera off the scope - in a basement or other dark room while it is cloudy outside. That lets you take a large number of darks (to minimize noise impact).
  7. I was going to suggest alternate name for it : polyscopy, but then I realized it sounds way too much like a very nasty medical exam
  8. This might well be true. I've seen a significant drop in LP after, say, midnight or 1 am, when most human activity drops significantly. Less traffic, fewer lights from houses.
  9. https://www.firstlightoptics.com/adapters/astro-essentials-1-25-inch-t-mount-camera-nosepiece-adapter.html
  10. Yes, AutoStakkert!3 is the norm for stacking (there is even a version 4 coming out, but it is still in early beta, so for the time being stick with v3). Most people use either SharpCap or FireCapture for capturing. Use about 3-4 minute videos (use SER format). Capture raw / don't debayer at the moment of capture - the software will do that in a special way when stacking. Use about 5 ms exposures, even if the image seems too dim - again, after stacking you'll process the image and it will look nice. You need something like 30000-40000 frames, and then you can decide how much you want in the stack - usually around 5-10%, but that will depend on seeing conditions. Use Registax 6 or AstroSurface for wavelet processing (sharpening) of the image. (Rough frame-count arithmetic is sketched below.)
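As a rough check on those frame counts (assuming the camera runs close to the exposure-limited frame rate; real rates are usually somewhat lower, which is why the post quotes 30000-40000):

```python
exposure_ms = 5
fps = 1000 / exposure_ms                 # up to 200 fps at 5 ms exposures

for minutes in (3, 4):
    frames = fps * minutes * 60
    keep_lo, keep_hi = frames * 0.05, frames * 0.10
    print(f"{minutes} min video: ~{frames:.0f} frames, "
          f"stack best {keep_lo:.0f}-{keep_hi:.0f}")
```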
  11. Yes, it is possible. Do look up a few tutorials on YouTube on how to do it (lucky planetary imaging). Use ~F/10 (so a x2 barlow), and use ROI to achieve a good frame rate - you don't really need more than 640x480 for the planet, or possibly 800x600 for Jupiter with its moons in one shot.
  12. ADU just means pixel value in "analog / digital units" - a term used for measured pixel values that are not photon or electron counts, but some number value after gain has been applied (ISO setting) and after A/D conversion has been performed by the camera. In this context you should read it as "a value in the 0-65535 range" that we get when we examine raw image file pixels.
  13. It really depends on what data you are trying to calibrate. I'll assume the following:

- you have a DSLR
- your DSLR has automatic dark current removal

You can test this by taking two darks: one very short (say one second) and another rather long - say 30 seconds or so. Both images need to be true darks. Try to avoid any light leak or even IR leak (infrared can penetrate plastics). On DSLRs, be sure to use the viewfinder cover to block any light getting in that way. Best to take the subs in a very dark room without any light. Once you have your subs, open them in any software that loads raw files and gives you access to raw data, and simply measure their average ADU values. If both subs have the same average ADU value, you have automatic dark current removal (otherwise the average ADU value of the longer sub should be higher, as it has more dark current signal).

If the above is all true, then calibrate as follows:

- shoot lights
- shoot bias (which are darks at minimum exposure length)
- shoot flats
- match the ISO setting between all three; when shooting flats, avoid clipping - the histogram should show three nice looking peaks at the center or 2/3 to the right
- stack the bias into a master bias
- subtract the master bias from every flat and every light
- stack the flats (with bias removed) into a master flat
- divide each light (with bias removed) by the master flat

Ideally, the software you are using should do the above for you automatically if you provide it with said files. (A minimal sketch of the per-pixel math follows.)
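A minimal sketch of the per-pixel math described above, in Python/numpy (the array names are hypothetical; loading the raw files, e.g. with rawpy, is omitted):

```python
import numpy as np

def has_auto_dark_removal(short_dark, long_dark, tolerance=2.0):
    # same average ADU in a 1 s dark and a 30 s dark -> the camera
    # removes dark current internally (tolerance is an assumed threshold)
    return abs(np.mean(long_dark) - np.mean(short_dark)) < tolerance

def calibrate(lights, flats, biases):
    # lights / flats / biases: float arrays of shape (n_subs, height, width)
    master_bias = np.mean(biases, axis=0)
    # remove bias from the flats before stacking them into a master flat
    master_flat = np.mean(flats - master_bias, axis=0)
    master_flat /= np.mean(master_flat)          # normalize flat to ~1.0
    # subtract bias from each light, then divide by the master flat
    return (lights - master_bias) / master_flat
```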
  14. The thing with flats is that they work properly only when applied to the light signal alone. You can't have other signals present in the image or in the master flat for it to work. Dust and vignetting reduce the amount of light by some percentage, and in order to get the right amount of light back, you need to divide by that same percentage (the first time it multiplies, then you divide when calibrating, and the two cancel out). But this whole thing works only when there is no other signal present - otherwise it won't correct completely, and will either over- or under-correct.

For example, say you have 600 units of light and a dust shadow that only passes 75%, so you end up with only 600 * 0.75 = 450 units of light hitting your sensor. You want to correct that, and your master flat records 0.75 (this is for the purpose of demonstration - it records other values, but this is how it essentially works). Now you divide your image by the flat and get 450 / 0.75 = 600. All is good, right?

But what happens if you have some dark or bias signal that you have not removed from your image? This means that instead of 450 you actually recorded 470 - 450 is light signal and 20 is some other signal, be it dark signal or bias signal, it does not matter. Now when you try to correct with the flat, you get 470 / 0.75 = 626.67. We have a brighter image than we should - this is over-correction by flats.

There is another case that can happen - maybe you forgot to remove the bias signal from your master flat. In that case you won't have 0.75 as your master flat but something like 0.77 - 0.75 being the light part and 0.02 being the bias part. Now if we try to correct we have 450 / 0.77 = 584.42. This value is smaller than 600 - we have under-correction.

We can even have a mix of the two if you don't remove residual signals from both lights and flats. Just using flats will still correct things - but it won't correct fully, and how much of an issue this residual signal causes depends on how big it is compared to the light signal and flat signal. If you are using a DSLR, you can just use bias, as there is dark compensation happening in the camera (most modern sensors do this). (The numbers above are reproduced in the sketch below.)
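The three cases above, as a few lines of Python (same numbers as in the post):

```python
light = 600.0                 # true light signal
flat = 0.75                   # dust shadow passes 75% of the light
recorded = light * flat       # 450 units actually hit the sensor

print(recorded / flat)            # 600.0  - clean data: perfect correction
print((recorded + 20) / flat)     # 626.67 - residual 20 units left in the light: over-correction
print(recorded / (flat + 0.02))   # 584.42 - residual bias left in the flat: under-correction
```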
  15. No, they don't eliminate the noise - they eliminate dark current signal. Noise remains in the image. They are reusable only if you have set-point cooling and can reproduce the temperature. If you use them, bias files are not necessary, as the bias signal is contained in the dark subs.

That is pretty much correct - include flat darks for best results (darks that match the flat exposure and which you subtract from the flats). Flats are reusable if you have a permanent setup, or have an electronic filter wheel with good repeatability (or an OSC sensor) and you don't dismantle your optical train. If you, for example, pack up after each session but leave the scope and camera attached as a single unit, you can reuse flats.

Calibration files don't remove noise - they remove signal. Bias files remove bias signal, and they can be used if:

1. You have a modern DSLR that has dark subtraction built in. This removes dark current without the bias, so you have to remove the bias manually afterwards.
2. You plan on using dark scaling / dark optimization when calibrating subs.
3. You use very short flat exposures - then you can use bias files instead of flat darks.

I advocate using a larger number of calibration subs - as many as you can shoot without too much inconvenience. Calibration subs don't remove noise - but they do introduce new noise into the image. The more calibration subs you have, the less new noise you'll introduce into the final image. (A small simulation of this is sketched below.)
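A small simulation of that last point (a sketch; the 100 ADU dark level and 5 ADU noise per sub are illustrative values, not measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
dark_level, noise = 100.0, 5.0           # illustrative signal and noise per sub

for n in (1, 4, 16, 64):
    darks = rng.normal(dark_level, noise, size=(n, 100_000))
    master = darks.mean(axis=0)          # master dark stacked from n subs
    # this is the noise the master injects into every calibrated light;
    # it falls roughly as noise / sqrt(n)
    print(f"{n:3d} subs -> master noise {master.std():.2f}")
```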
  16. Depends on several factors. First is the aperture size of both telescopes - is it the same, or does one scope have a larger aperture than the other, and if so, which one? Second is the optical quality of both scopes - is the refractor an achromat or an apochromat? How fast is the maksutov / what is the size of its central obstruction? All of those will contribute to the differences / similarities in the double star image between the two scopes. If we take the academic case of perfect telescopes with the same aperture, then the maksutov will have slightly brighter diffraction rings and a slightly less pronounced central Airy disk. It will be a very small difference visually. The ability to split stars will of course depend on observing conditions / seeing, the difference in magnitude between the two stars, and their separation. In theory, in some edge cases the ideal refractor will have a slight edge over the maksutov - if there is a significant difference between the intensities of the double star components and the stars are separated so that the fainter star lands exactly on the first diffraction ring of the brighter star. In all other cases you should be equally able to split / not split the pair with the above two scopes (optically ideal, same aperture size, same viewing conditions). (The standard diffraction-limit formula is sketched below.)
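For reference, the standard Rayleigh criterion gives the theoretical resolving power for the equal-aperture case discussed above (a sketch; the 102 mm aperture is just an example value):

```python
import math

def rayleigh_limit_arcsec(aperture_mm, wavelength_nm=550):
    # Rayleigh criterion: theta = 1.22 * lambda / D
    theta_rad = 1.22 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)
    return math.degrees(theta_rad) * 3600

print(f"{rayleigh_limit_arcsec(102):.2f} arcsec")   # ~1.36" for a 102 mm aperture
```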
  17. Use an L bracket to mount the scope to the mount? I sometimes use that with my Mak102.
  18. Some sort of internal reflection due to the quantum nature of light. It can be, for example, from the camera's front window. Although it is AR coated (anti-reflection), that only reduces and does not completely eliminate reflections. Regular glass will reflect around 4% of light, and AR coating brings that down to less than 1% - usually around 0.1% (again, if I'm not mistaken). A very bright star will have many magnitudes higher brightness than, say, the surrounding nebulosity - it can be even 10 mags of difference, which is x10000 brighter. 0.1% of x10000 is still x10 brighter than the nebulosity and must show in the image. You can verify that it is a reflection by the way this bright circle behaves - it will be offset from the center: the further the star is from the center, the more offset (away from center) it will be. This halo is actually an image of that star, slightly defocused, as the light that reflected a couple of times traveled a larger distance and is no longer in focus. (The arithmetic is sketched below.)
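The magnitude arithmetic, spelled out (the 10 mags of difference and the 4% / 0.1% reflectivities are the figures from the post):

```python
def mag_to_flux_ratio(delta_mag):
    # every 5 magnitudes is exactly a factor of 100 in brightness
    return 10 ** (delta_mag / 2.5)

star_vs_nebula = mag_to_flux_ratio(10)    # ~10000x brighter than the nebulosity
for reflectivity in (0.04, 0.001):        # plain glass vs AR-coated surface
    print(f"halo is ~{star_vs_nebula * reflectivity:.0f}x the nebulosity")
```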
  19. This is quite cool. With a bit more code it can be turned into a "trajectory" on an "all sky map" sort of thing (just a projection of the measurement points onto a circle that represents a fisheye lens view of the sky).
  20. This will be slightly off topic, but for anyone interested - Sky Quality Camera appears to be commercial software made by Euromix d.o.o. in Slovenia, but I'm unable to find any official way of obtaining it. I just found a bunch of mentions in academic literature / papers written on the topic of LP, but can't find anything else on the software.
  21. There is a tool called Sky Quality Camera that can measure LP levels across the whole sky. All you need is the particular software, a DSLR and a fisheye lens. I have no idea how to get it, or whether it is available to the general public. You can read an article about it here: https://www.boisestate.edu/physics-cidsrsn/2022/06/27/sky-quality-camera-a-tool-to-track-and-analyze-light-pollution-in-the-cidsr/ There are also some measurements on the light pollution info website taken with this method (filter for SQC). Here is a measurement from 2019 made just a couple of km away from me:
  22. Here is an idea for how to reduce the amount of data. Not sure how long your exposures are, but if you simply calibrate your subs from each evening, split them into groups of, say, 4 or 5 consecutive subs, and simply add those subs - it will be as if you took longer exposures. If you image with 2 minute subs, it would be like having 10 minute subs (shooting for longer is equivalent to mathematical addition - except for read noise, but you should already be exposing long enough to swamp read noise anyway). The alternative is to simply do longer subs without the math - integrate with an analog device instead of digitally. (A grouping sketch follows.)
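A minimal sketch of that grouping in Python/numpy (assumes the subs are already calibrated and aligned, which the scheme requires):

```python
import numpy as np

def sum_groups(subs, group_size=5):
    """Add consecutive calibrated subs so each sum behaves like one longer
    exposure (apart from the read noise already baked into every sub)."""
    subs = np.asarray(subs, dtype=np.float64)     # shape (n, height, width)
    n = subs.shape[0] // group_size * group_size  # drop the incomplete tail group
    return subs[:n].reshape(-1, group_size, *subs.shape[1:]).sum(axis=1)
```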
  23. If you want to exploit "advanced" features of stacking algorithms, you should really keep all the files. Conditions change even during the course of a single evening, and will certainly be different on different nights. Simple / naive stacking can be done by creating sum stacks and noting the number of subs in each stack; the total stack is then created by summing the sub stacks and dividing by the total number of subs (regular average). This approach does not let you discard subs based on statistics (you can, for example, discard a bad sub on a single evening - but what happens if that sub is better than several subs from some other evening? Having all the data lets you set the rejection threshold more carefully), nor does it let you weight subs by their quality (there is no way to assign a global quality on a single evening until all subs from future evenings have also been recorded and examined). You also can't do sigma rejection "en masse" - only on a particular evening. Sometimes a satellite trail is so faint that you can't form reliable statistics to reject it with only the subs from a single evening - but the total number of subs can help with that. (A per-pixel sigma rejection sketch follows.)
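A per-pixel sigma rejection sketch, to show why it wants the full set of subs at once (a single-pass kappa-sigma clip in Python/numpy - simpler than the iterative versions stacking software actually runs):

```python
import numpy as np

def sigma_clip_mean(subs, kappa=3.0):
    data = np.asarray(subs, dtype=np.float64)    # shape (n, height, width)
    mean = data.mean(axis=0)
    std = data.std(axis=0)
    # reject per-pixel outliers (satellite trails, cosmic rays) against
    # statistics built from ALL subs, not just one evening's worth
    clipped = np.where(np.abs(data - mean) <= kappa * std, data, np.nan)
    return np.nanmean(clipped, axis=0)
```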
  24. I'm not sure you are reading the chart correctly? Not only is it visible - it shows some detail. In fact, there is a detailed description of how it "feels" to be under a certain Bortle sky: https://en.wikipedia.org/wiki/Bortle_scale
  25. I think it is not a universal thing. There is a multitude of factors that determine the final "time to SNR". If you want to work it out for yourself, it's best to take one of the SNR calculators (where you input things like target brightness, sky brightness, QE of the camera, losses in the telescope, aperture, focal length, pixel size, etc.), which will give you the SNR after a certain imaging period - or the required imaging time for a target SNR - and then compare results as you vary sky brightness. I once did that and found that moving from SQM 18.5 to SQM 20.5 yields a time reduction of about x6.25 (which corresponds to the above table as the difference between SQM 20 and SQM 22 - so maybe it is a universal thing after all?). (The scaling is sketched below.)
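That x6.25 figure falls straight out of the magnitude scale, assuming sky-noise-limited imaging (a sketch, not the full calculator):

```python
def time_ratio(sqm_dark, sqm_bright):
    # sky flux scales as 10**(-0.4 * SQM); when sky noise dominates,
    # the required integration time scales with the sky flux
    return 10 ** (0.4 * (sqm_dark - sqm_bright))

print(f"{time_ratio(20.5, 18.5):.2f}")   # ~6.31, close to the x6.25 above
```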