Everything posted by vlaiv

  1. Yes, it is possible. Do look up a few tutorials on YouTube on how to do it (lucky planetary imaging). Use ~F/10 (so a x2 barlow), and use ROI to achieve a good frame rate - you don't really need more than 640x480 for the planet, or possibly 800x600 for Jupiter with its moons in one shot.
  2. ADU just means pixel value in "analog / digital units" - the term used for measured pixel values that are not a photon or electron count, but the number you get after gain has been applied (ISO setting) and A/D conversion has been performed by the camera. In this context you should read it as "value in the 0-65535 range" that we get when we examine raw image file pixels.
  3. It really depends on what data you are trying to calibrate. I'll assume the following:
     - you have a DSLR
     - your DSLR has automatic dark current removal.
     You can test this by taking two darks: one very short (say one second) and another rather long - say 30 seconds or so. Both images need to be true darks. Try to avoid any light leak or even IR leak (infrared can penetrate plastics). On DSLRs, be sure to use the viewfinder cover to block any light getting in that way. It is best to take the subs in a very dark room without any light. Once you have your subs - open them in any software that loads raw files and gives you access to the raw data, and simply measure their average ADU value. If both subs have the same average ADU value - you have automatic dark current removal (otherwise the average ADU value of the longer sub would be higher, as it contains more dark current signal). If the above is all true - then calibrate as follows:
     - shoot lights
     - shoot bias (which are darks at minimum exposure length)
     - shoot flats
     - match the ISO setting between all three. When shooting flats - avoid clipping. The histogram should show three nice looking peaks at the center or 2/3 to the right.
     Stack the bias frames into a master bias. Subtract the master bias from every flat and every light. Stack the flats (with bias removed) into a master flat. Divide each light (with bias removed) by the master flat. Ideally, the software you are using should do the above for you automatically if you provide it with said files - the sketch below shows the bare arithmetic.
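     A minimal sketch of that calibration arithmetic, assuming the subs have already been loaded as float arrays (numpy assumed, RAW file loading omitted; function and variable names are just for illustration):

         import numpy as np

         def calibrate(lights, flats, biases):
             # each input: float array of shape (n_frames, height, width)
             master_bias = np.mean(biases, axis=0)        # stack bias frames into a master bias
             flats_cal = flats - master_bias              # remove bias from every flat
             master_flat = np.mean(flats_cal, axis=0)
             master_flat /= np.mean(master_flat)          # normalize so division keeps the ADU scale
             return (lights - master_bias) / master_flat  # remove bias from lights, divide by master flat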
  4. The thing with flats is that they work properly only when applied to the light signal alone. You can't have other signals present in the image or in the master flat if it is to work. Dust and vignetting reduce the amount of light by some percentage, and in order to get the right amount of light back - you divide by that percentage (the first time it multiplies, the second time you divide when calibrating, and the two cancel out). But this whole thing works only when there is no other signal present; otherwise it won't correct completely - it will either over or under correct.
     For example - say that you have 600 units of light and a dust shadow only passes 75%, so you end up with only 600 * 0.75 = 450 units of light hitting your sensor. You want to correct that, and your master flat records 0.75 (this is for the purpose of demonstration - it records other values, but this is how it essentially works). Now you divide your image by the flat and get 450 / 0.75 = 600. All is good, right?
     But what happens if you have some dark or bias signal that you have not removed from your image? This means that instead of 450 you actually recorded 470 - 450 is light signal and 20 is some other signal, be that dark signal or bias signal, it does not matter. Now when you try to correct with the flat you get 470 / 0.75 = 626.66... We have a brighter image than we should have - this is over correction by flats.
     There is another case that can happen - maybe you forgot to remove the bias signal from your master flat. In that case you won't have 0.75 as your master flat but something like 0.77 - 0.75 being the light part and 0.02 being the bias part. Now if we try to correct we have 450 / 0.77 = 584.41... This value is smaller than 600 - we have under correction.
     We can even have a mix of the two if you don't remove residual signals from both lights and flats. Just using flats will still correct things - but it won't correct fully, and how much of an issue this residual signal causes depends on how big it is compared to the light signal and the flat signal. If you are using a DSLR - you can just use bias, as there is a dark compensation thing happening in the camera (most modern sensors do this). The numbers above are worked through in the small sketch below.
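     The same arithmetic, as a tiny sketch (values are the illustrative ones from the example, not measurements):

         light_signal = 600.0      # true light signal
         flat_value = 0.75         # dust shadow passes 75%

         recorded = light_signal * flat_value         # 450 units actually hit the sensor

         print(recorded / flat_value)                 # 600.0  - correct flat calibration
         print((recorded + 20) / flat_value)          # ~626.7 - bias left in the light: over correction
         print(recorded / (flat_value + 0.02))        # ~584.4 - bias left in the flat: under correction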
  5. No, they don't eliminate the noise - they eliminate dark current signal. Noise remains in the image. They are reusable only if you have set point cooling and can reproduce the temperature. If you use them, bias files are not necessary, as the bias signal is contained in the dark subs.
     That is pretty much correct - include flat darks for best results (darks that match the flat exposure and which you subtract from the flats). Flats are reusable if you have a permanent setup, or have an electronic filter wheel with good repeatability or an OSC sensor, and you don't dismantle your optical train. If you for example pack up after each session but leave the scope and camera attached as a single unit - you can reuse flats.
     Calibration files don't remove noise - they remove signal. Bias files remove the bias signal and they can be used if:
     1. You have a modern DSLR with dark subtraction built in. This removes dark current without the bias, so you have to remove the bias manually afterwards
     2. You plan on using dark scaling / dark optimization when calibrating subs
     3. You use very short flat exposures - then you can use bias files instead of flat darks
     I advocate the use of a larger number of calibration subs - as many as you can shoot without too much inconvenience. Calibration subs don't remove noise - but they do introduce new noise into the image. The more calibration subs you have, the less new noise you'll introduce into the final image (see the sketch below).
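     A small sketch of that last point, assuming pure read noise in the bias subs (numbers and names are just illustrative) - the noise a master calibration frame injects drops roughly as 1/sqrt(number of subs):

         import numpy as np

         rng = np.random.default_rng(0)
         read_noise = 5.0                       # electrons RMS, illustrative value

         for n_subs in (1, 4, 16, 64):
             # simulate a master bias averaged from n_subs frames of pure read noise
             master = rng.normal(0.0, read_noise, size=(n_subs, 100_000)).mean(axis=0)
             print(n_subs, round(master.std(), 2))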
  6. Depends on several factors. The first will be the aperture size of both telescopes. Is it the same, or does one scope have a larger aperture than the other, and if so which one? Second - the optical quality of both scopes. Is the refractor an achromat or an apochromat? How fast is the maksutov / what is the size of its central obstruction? All of those will contribute to differences / similarities in the double star image between the two scopes. If we take the academic case of perfect telescopes with the same aperture, then the maksutov will have slightly brighter diffraction rings and a slightly less pronounced central Airy disk. It will be a very small difference visually. The ability to split stars will of course depend on observing conditions / seeing, the difference in magnitude between the two stars and their separation. In theory, in some edge cases - an ideal refractor will have a slight edge over the maksutov: if there is a significant difference between the intensities of the double star components and the stars are separated so that the fainter star lands exactly on the first diffraction ring of the brighter star. In all other cases - you should be equally able to split / not split the pair with the above two scopes (optically ideal, same aperture size, same viewing conditions).
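     Not from the post, but a quick way to see the scale of the Airy pattern for a given aperture - this computes the Rayleigh criterion, the angular radius of the first dark ring (first minimum), a common yardstick for splitting doubles:

         import math

         def first_minimum_arcsec(aperture_mm, wavelength_nm=550):
             theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
             return theta_rad * 206265          # radians to arcseconds

         print(round(first_minimum_arcsec(102), 2))   # ~1.36" for a 102 mm aperture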
  7. Use an L bracket to mount the scope to the mount? I sometimes use that with my Mak102.
  8. Some sort of internal reflection, down to the quantum nature of light. It can be, for example, from the camera front window. Although it is AR coated (anti-reflection) - that only reduces and does not completely eliminate reflections. Regular glass reflects around 4% of the light and an AR coating brings that down to less than 1% - usually around 0.1% (again, if I'm not mistaken). A very bright star will be many magnitudes brighter than, say, the surrounding nebulosity - it can be even 10 mags of difference - which is x10000 brighter. 0.1% of x10000 is still x10 brighter than the nebulosity and must show in the image. You can verify that it is a reflection by the way this bright circle behaves - it will be offset from the center: the further the star is from the center, the more offset (away from the center) it will be. This halo is actually an image of that star, slightly defocused, as light that reflected a couple of times traveled a larger distance and is no longer in focus.
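     The magnitude arithmetic above, spelled out (illustrative values only):

         magnitude_difference = 10
         flux_ratio = 100 ** (magnitude_difference / 5)   # 5 magnitudes = x100, so 10 mags = x10000

         reflection = 0.001                               # ~0.1% residual reflection off an AR coated surface
         print(flux_ratio)                                # 10000.0
         print(flux_ratio * reflection)                   # 10.0 - reflected star light is still x10 the nebulosity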
  9. This is quite cool. With a bit more code it can be turned into a "trajectory" on an "all sky map" sort of thing (just a projection of the measurement points onto a circle that represents a fish eye lens view of the sky).
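     For example, a minimal sketch of that projection, assuming the measurements come with altitude / azimuth (an equidistant fisheye projection with the zenith at the center and the horizon on the circle; names are just for illustration):

         import math

         def altaz_to_xy(alt_deg, az_deg, radius=1.0):
             # zenith at (0, 0), horizon on the circle, azimuth measured clockwise from north (up)
             r = radius * (90.0 - alt_deg) / 90.0
             x = r * math.sin(math.radians(az_deg))
             y = r * math.cos(math.radians(az_deg))
             return x, y

         print(altaz_to_xy(45.0, 180.0))   # halfway to the horizon, due south -> (~0.0, -0.5)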
  10. This will be slightly off topic, but for anyone interested - Sky Quality Camera appears to be commercial software made by Euromix d.o.o. in Slovenia - but I'm unable to find any official way of obtaining the software. I just found a bunch of mentions in academic literature / papers written on the topic of LP, but can't find anything else on the software.
  11. There is a tool called Sky Quality Camera that can measure the whole sky for LP levels. All you need is the particular software, a DSLR and a fisheye lens. I have no idea how to get it, or whether it is available to the general public. You can read an article about it here: https://www.boisestate.edu/physics-cidsrsn/2022/06/27/sky-quality-camera-a-tool-to-track-and-analyze-light-pollution-in-the-cidsr/ There are also some measurements on the light pollution info website taken with this method (filter for SQC). Here is a measurement from 2019 made just a couple of km away from me:
  12. Here is an idea for how to reduce the amount of data. I'm not sure how long your exposures are, but if you simply calibrate your subs from each evening, split them into groups of, say, 4 or 5 consecutive subs and simply add those subs - it will be as if you took longer exposures. If you image for say 2 minutes - it would be like having 10 minute subs (shooting for longer is equivalent to mathematical addition - except for read noise, but you should already be exposing for long enough to swamp the read noise anyway). See the sketch below. The alternative is to simply take longer subs without the math - integrate with an analog device instead of digitally.
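     A minimal sketch of that grouping, assuming the calibrated subs are already loaded as a numpy array (the helper name is hypothetical):

         import numpy as np

         def group_and_add(subs, group_size=5):
             # subs: calibrated exposures, float array of shape (n_subs, height, width).
             # Adding consecutive subs behaves like taking longer exposures
             # (apart from the extra read noise, which should already be swamped).
             n_groups = len(subs) // group_size
             trimmed = subs[:n_groups * group_size]
             return trimmed.reshape(n_groups, group_size, *subs.shape[1:]).sum(axis=1)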
  13. If you want to exploit the "advanced" features of stacking algorithms - you should really keep all the files. Conditions change even during the course of a single evening and will certainly be different on different nights. Simple / naive stacking can be done by creating sum stacks and noting the number of subs in each stack. The total stack is then created by summing the sub stacks and dividing by the total number of subs (a regular average) - see the sketch below. This approach does not let you discard subs based on statistics (you can, for example, discard a bad sub on a single evening - but what happens if that sub is better than several subs from some other evening? Having all the data lets you set the rejection threshold more carefully), nor does it let you use per-sub weights depending on quality (there is no way to assign a global quality on a single evening until all subs from future evenings have also been recorded and examined). You also can't do sigma rejection "en masse" - only on a particular evening. Sometimes a satellite trail is so faint that you can't form reliable statistics to reject it with only the subs from a single evening - but the total number of subs can help with that.
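     The naive approach, as a small sketch (hypothetical helper, numpy assumed) - note that per-sub rejection and weighting are no longer possible at this point:

         import numpy as np

         def combine_evenings(evening_stacks):
             # evening_stacks: list of (sum_stack, number_of_subs) pairs, one per evening
             total = sum(stack for stack, _ in evening_stacks)
             count = sum(n for _, n in evening_stacks)
             return total / count               # regular average over all subs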
  14. I'm not sure you are reading the chart correctly? Not only is it visible - it shows some detail. In fact, there is a detailed description of how it "feels" to be under a certain Bortle sky: https://en.wikipedia.org/wiki/Bortle_scale
  15. I don't think it is a universal thing. There is a multitude of factors that determine the final "time to SNR". If you want to work it out for yourself, it's best to take one of the SNR calculators (where you input things like target brightness, sky brightness, QE of the camera, losses in the telescope, aperture, focal length, pixel size, etc.) which will give you the SNR after a certain imaging period - or the required imaging time for a target SNR - and then compare results as you vary the sky brightness. I once did that and found that moving from SQM 18.5 to SQM 20.5 yields a time reduction of about x6.25 (which corresponds to the above table as the difference between SQM 20 and SQM 22 - so maybe it is a universal thing after all?). The arithmetic behind that factor is sketched below.
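     Where that factor roughly comes from, assuming the sky-limited regime (exposure time for a fixed SNR scales roughly with the background flux) - not from the post, just a back-of-the-envelope check:

         sky_brighter = 18.5     # SQM, mag per square arcsecond
         sky_darker = 20.5

         background_ratio = 100 ** ((sky_darker - sky_brighter) / 5)
         print(round(background_ratio, 2))   # ~6.31 - close to the x6.25 time reduction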
  16. I'd say that is correct sampling. It is very hard to capture all the detail at the short end of the scale because of the atmosphere - it bends short wavelengths the most and seeing is worst at 400nm (the blue / violet side of the spectrum). For that reason, I advocate aiming for x4 the pixel size - in your case that would be F/11.6 - so you are right about there. I would not worry too much about being "slightly undersampled" (a sketch of where that x4 figure comes from is below). Gamma at capture and stacking time should be kept at 1.0 - or neutral. Only as a final adjustment step, after you do sharpening and all - should you take it to 2.2. Data should be kept linear for most of the processing workflow (which means gamma 1.0) - especially during capture (so that YouTube tutorial is wrong in using 0.5 for capture).
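     One way to arrive at that x4 rule of thumb (my derivation, not spelled out in the post): the diffraction cutoff frequency at the focal plane is 1/(wavelength * F), so Nyquist sampling needs F >= 2 * pixel size / wavelength, which at ~500 nm works out to about 4x the pixel size in microns:

         pixel_size_um = 2.9      # assumed pixel size that gives F/11.6
         wavelength_um = 0.5      # ~500 nm, green light

         critical_f_ratio = 2 * pixel_size_um / wavelength_um
         print(critical_f_ratio)  # 11.6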
  17. No, it was a genuine question. As far as I know it is a surname, and I had no idea how it is pronounced (I've never heard anyone pronounce it). Anyway, I've found it online (with audio) and it is pronounced the way I imagined.
  18. Most people don't understand the color processing part of the workflow. One of the important steps is to properly encode color information for display in the sRGB color space. The camera produces linear data, while the sRGB color space is not linear - it uses a gamma of 2.2, and hence a gamma of 2.2 needs to be applied to the data during processing. This turns murky colors into nice looking ones (see the sketch below). On wavelets - I use linear / Gaussian, and the actual slider positions will depend on sampling rate and noise levels, so you need to play around with those. If you've oversampled considerably - try increasing the initial layer to 2.
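     For the gamma step, a minimal sketch using a plain power law as an approximation of the sRGB curve (numpy assumed, data scaled to 0..1 and still linear):

         import numpy as np

         def encode_gamma(linear, gamma=2.2):
             # apply 1/gamma to linear data to encode it for display
             return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)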
  19. The park position should be the position where the scope is "parked" between sessions. It is mostly important for two reasons:
     1. Having a dome or otherwise enclosed observatory where the telescope can't be left in its normal home position (for newtonian telescopes it is better to leave the scope with the mirror on its side so that dust does not settle on the mirror).
     2. When using periodic error correction without encoders. Here the software that controls the telescope assumes that the telescope is at its park position when powered on - meaning the stepper motors are oriented a certain way and haven't been moved. If the steppers have been moved without the software knowing about it (not to be confused with manually moving the scope when the clutches are undone) - then the synchronization between software and hardware is lost and periodic error correction will be wrong.
     The home position is just the "start" position for goto moves. It is usually defined as the scope pointing to Polaris with counterweights down (that is RA 6h and DEC 90 degrees). Most people don't bother and do not use two different positions - they use the same position for both purposes - and it does not matter if it is the usual home or parked on its side. In any case - home should be thought of as the scope orientation between moves / gotos / tracking, and park should be thought of as the permanent home between power cycles, more related to internal gearing and motor positions.
  20. If you want to photograph the Moon and you have a DSLR - it is fairly easy. Start with single shots of the moon. Set your exposure somewhere in the range of 1/500 to 1/200. If the atmosphere is stable - you can go even longer, to say 1/100. The image will probably be too dark - but you can fix that in post processing by adjusting the brightness. The image will look noisy. Here you start encountering the noise issue and SNR, and the next logical step is to:
     1. Watch a YouTube video that explains how to stack multiple exposures in software like RegiStax or AutoStakkert!3
     2. Take multiple exposures (say a dozen or so) of the moon and apply the above technique
     3. Process the resulting image to the same brightness and you will notice it has less noise - or better SNR
     The next step would be to get a dedicated astronomy camera that can capture hundreds of images per second (without compression or distortion) and a laptop with an SSD drive, and apply the above technique to that data. Also watch YouTube videos that explain how to do this with movie recordings from planetary cameras (we call those movies since they are just frames bunched up into a single file like a movie). Learn about pixel scale, optimum sampling rate, wavelet sharpening and other sharpening techniques for processing planetary images.
  21. Yes, a reflecting telescope works the same as a regular camera lens or a telescope with a front lens. The details are a bit different - it bends light by reflecting it off a curved surface rather than refracting it as it passes through a piece of glass with curved surfaces - but you guessed it - curved surfaces are the key.
     When you attach the camera - there is no magnification. There is "projection". Magnification is the term we use to denote angular magnification - a linear increase in the angles of incoming rays (which makes the object appear larger to us). With a camera sensor - we have projection. We take incoming light rays and project them onto a flat plane. We can talk about "scale" (which can be understood as magnification in some sense) - which determines the ratio between angle and distance at the focal plane (on the sensor surface). To see the difference between scale and magnification - imagine you have an image printed on a piece of paper, and you look at that image from one meter away and from 10 meters away. The object in the image will appear bigger (magnified) when the image is placed 1 meter away compared to 10 meters away - although both images have the same scale. Another example would be to print a copy scaled down 50% and hold both pieces of paper at the same distance - here we will have the impression that one image is twice as large as the other, although they are at the same distance - this is what scale does. The effect is almost the same - but the causes are different.
     In any case - scale in photography is determined by two factors - the size of the sensor (or sensor pixels, depending on how you want to look at it) and the focal length of the lens element (it does not matter if it is a mirrored system or a glass lens system or even a combined system like with catadioptric telescopes / camera lenses). A sketch of the usual scale formula follows below.
     To understand where magnification comes from - you need to study ray diagrams like this one: This is an example of magnification by a telescope (and eyepiece) - we have incoming light rays at some angle theta zero on the left, and we have a bigger angle theta e on the output (or exit). Similarly, instead of using an eyepiece - we can just look at the arrow labeled focal point and its distance from the dotted line (the optical axis or center of the sensor) - it is directly related to how big theta zero is: the larger the incoming angle, the further away this arrow (or point on the sensor) is - this is the projection part and it depends on F0, the focal length of the objective lens.
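     The standard image scale formula, as a small sketch (example numbers are mine, not from the post):

         def image_scale_arcsec_per_pixel(pixel_size_um, focal_length_mm):
             # 206265 arcseconds per radian; the factor 206.265 absorbs the micron-to-millimeter conversion
             return 206.265 * pixel_size_um / focal_length_mm

         print(round(image_scale_arcsec_per_pixel(4.54, 1000), 2))   # ~0.94"/px for 4.54 um pixels at 1000 mm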
  22. For astrophotography, you have to "unlearn" all you know from regular photography. Common knowledge in regular photography is full of "shortcuts" and "implied knowledge" - and, while the same principles govern both types of photography, they operate in completely different regimes, and hence you can't use those shortcuts and that implied knowledge. You must have complete understanding - or learn another set of shortcuts / implied knowledge - which is totally different.
     In astrophotography we work in a light starved regime / photon counting. In regular photography - most of the time we have plenty of light, so we don't have to worry about the discrete nature of light and signal to noise ratio. Signal to noise ratio is the key in astrophotography and you should think in terms of SNR rather than in terms of exposure. Total exposure in AP is usually measured in hours rather than seconds. We achieve this by stacking single exposures. This is for DSO / long exposure AP.
     For planetary, we employ yet another technique / another aspect - called lucky planetary imaging. Here it is also about SNR - but we approach it from a different angle. We have the atmosphere to contend with and we use extremely short exposures, governed by how long we can expose before the atmosphere ruins the show (think of motion blur in sports in daytime photography) - and we take tens of thousands of such exposures and stack them to get good SNR (in planetary astrophotography normal exposures are on the order of 5 ms, and even less if the target is bright enough - like the moon or white light solar).
     Another aspect of how long an exposure should be is the precision of tracking. In any case, what do you want to image and what equipment do you want to use? If we know that, we can start discussing how to determine a good exposure value (is it minutes or milliseconds).
  23. Interestingly enough, the ASI533 is remarkably close to the 460ex as far as tech specifications go. It has a smaller pixel size (but more pixels). It has roughly the same sensor size: ~11.25mm x ~11.25mm vs ~12.5mm x ~10mm. Peak QE is very similar at 76% vs ~80%. The only problem is the pixel size mismatch. Similarly close to the 460ex is the ASI183 if one uses super pixel mode. Since it has a 2.4um pixel size, twice that is 4.8um, which is close to the 4.54um of the Atik 460. Peak QE is 84% - so that is also very good. People don't like this camera because of amp glow - but it is small and easily calibrated out. I would not mind using it and would probably pick it before the ASI533.