Everything posted by vlaiv

  1. That is rather strange. Background values are a tiny percentage of peak intensity, and once removed they should not impact the color of the star much. I guess it depends on how the background removal is performed and at which stage. When the data is linear, removal should not produce such an artifact.
     Let's take a properly exposed 12-bit camera set to unity gain. This will make most bright stars in the image clip, as the max signal is only 4096e. If the sub is properly exposed, then the background signal is such that LP noise swamps read noise by a factor of x5, right (most even use a factor of x3)? Let's further say the camera has 2e of read noise. This means that LP noise is 10e and the LP signal is around 100e. Let's further suppose the crazy notion that all the LP is in red, to create maximum color imbalance in the image. Green and blue pixels in saturated star cores will then have 4096, but red pixels will have 3996 after we remove 100e of LP signal. Scaled to the 0-256 range, that is 256 for green and blue, and 249.75, or 250 rounded, for red. Only 6 values out of 256, and this is while the data is still linear and unstretched (stretch compresses high pixel values, so the difference becomes even smaller).
     Here is the above image analyzed in Gimp: that is almost 85 out of 256 difference for green - not 6 out of 256 as in the crazy case where all the LP is in one channel and we are using only 11 bits of dynamic range so many stars clip in the image.
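     As a rough illustration, here is the same arithmetic as a small Python sketch (it only re-uses the made-up numbers from this post - 12-bit clip at 4096e, 2e read noise, LP noise swamping read noise x5, all LP in red):

        read_noise = 2.0                    # e
        lp_noise = 5 * read_noise           # LP noise swamps read noise by x5 -> 10e
        lp_signal = lp_noise ** 2           # Poisson noise = sqrt(signal), so LP signal ~ 100e

        clip = 4096.0                       # 12-bit saturation at unity gain
        green_blue = clip                   # saturated star core, LP-free channels
        red = clip - lp_signal              # saturated star core after removing red LP

        scale = 256.0 / clip                # scale to the 0-256 range
        print(green_blue * scale, red * scale)   # 256.0 vs 249.75 -> about 6 levels out of 256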
  2. There are only a few things that define a good planetary camera:
     - high QE
     - low read noise
     - the ability to do fast frame rates
     Other things are not really that important. The only thing that concerns me with the ASI462 is its lower QE. It is designed to have high QE around 820nm, and that means less QE in the visible range (green is around 85% of the max QE, and max absolute QE is estimated to be around 80%, so absolute QE in green is around 80% * 0.85 = 68% - not very high). I'd recommend the ASI462 (or the QHY or some other version) to people interested in IR planetary (or IR guiding), but there are probably higher-QE cameras than that one in the visible part of the spectrum. Maybe the ASI662, for example (peak QE in green around 91%)? Or if blue QE is important, then maybe the ASI678 (peak QE in blue around 83%)?
  3. The best recommendation I can give is that you don't get one. Look instead for a fast lens that is around 100mm. Lenses, in general, are not diffraction limited - especially fast lenses that you'll use at F/2 - F/2.8. Most lenses you are not going to use at F/1.4 - F/2 even though they are capable of it, because they have really bad aberrations for astrophotography (softness in daytime photography = horrible stars in astrophotography). What does this have to do with getting 100mm vs 50mm? Well, it turns out that the level of aberration of a lens is tied to linear units - all lenses define their sharpness with MTF diagrams that are in lines per mm, not arc seconds on the sky. Furthermore, the conversion between arc seconds and linear units depends solely on the focal length of the system (we often call that "zoom" or "magnification"). Aberrations that are not due to diffraction of light but rather to the construction of the lens will therefore be smaller in arc seconds on a longer focal length lens. Since you need a 50mm FOV, here is what I'm actually proposing: get a 100mm lens, shoot a 2x2 mosaic and bin your data x2. The result will be the same as using a 50mm lens, both in FOV and in speed, but the sharpness of the image will be greater with the 100mm lens.
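     A small sketch of why the same linear blur hurts a shorter lens more on the sky (the 20 micron blur is purely illustrative, not a measured value):

        def microns_to_arcsec(size_um: float, focal_length_mm: float) -> float:
            """Convert a linear size at the focal plane to an angle on the sky."""
            return 206.265 * size_um / focal_length_mm

        blur_um = 20.0                              # hypothetical aberration blur of the lens
        print(microns_to_arcsec(blur_um, 50.0))     # ~82.5" on a 50mm lens
        print(microns_to_arcsec(blur_um, 100.0))    # ~41.3" on a 100mm lens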
  4. I like this rendition the best, but it's not without its flaws. For some reason, people using PI seem to have strangely clipped stars in their images. I think it is a feature of some workflow everyone seems to be using. Above is an example. Then there is that "plastic wrapper background" effect, which stems from overuse of denoising: if you see your faintest stars starting to "polymerize" and form chains like melted plastic - you've overdone it.
  5. Currently it is certainly viable to do what Olly proposed. Make a few stacks - one with the best subs, one with a medium number of subs and one with all subs. Then combine them either using layers or with pixel math. You can use either an SNR map or pixel intensity to drive the combination. The SNR map is simply the ratio signal image / noise image, where the signal image is obviously the stack and the noise image is the same stack but using standard deviation as the stacking method rather than average. These two can be used to do the combination at the linear stage - for example: if intensity > some value, use the good image, else use the full stack. Similarly you can use the SNR map in the same way. The alternative, of course, is to use layers in the standard way.
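     A minimal sketch of that combination at the linear stage, assuming best_stack and full_stack are already aligned linear stacks loaded as numpy arrays and the threshold is something you pick by inspecting the data:

        import numpy as np

        def combine_by_intensity(best_stack, full_stack, threshold):
            """Use the 'best subs' stack in bright regions, the full stack elsewhere."""
            return np.where(full_stack > threshold, best_stack, full_stack)

        def snr_map(mean_stack, std_stack):
            """Signal image / noise image (average stack vs. the same stack made with std-dev)."""
            return mean_stack / np.maximum(std_stack, 1e-12)   # guard against division by zero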
  6. Oh, yes! I'm actually struggling to keep the flood of ideas at bay, keeping a sort of backlog of things I need to print
  7. Not sure if there is any difference. You can either take a cylinder and a helix sweep and subtract the two, or you can subtract the helix and then add a helix sweep - the difference is just the profile that is used.
  8. It is about how you stack that data. You really want two things. First is an advanced algorithm that will stack based on actual SNR, not "estimated weights". Second is an algorithm that will equalize FWHM across subs with the use of deconvolution. The latter will take subs with high FWHM (poor seeing) and improve sharpness at the expense of SNR; the former algorithm will then deal with that SNR deterioration.
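     As a rough illustration of SNR-driven weighting (this is only a sketch of the idea, not the specific algorithm meant above; subs is assumed to be a list of aligned, calibrated frames as numpy arrays):

        import numpy as np

        def noise_weighted_stack(subs):
            """Average subs with inverse-variance weights estimated from the background."""
            weights = []
            for sub in subs:
                sigma = np.std(sub[sub < np.median(sub) + 3 * np.std(sub)])  # crude background noise
                weights.append(1.0 / sigma ** 2)
            weights = np.asarray(weights) / np.sum(weights)
            return np.tensordot(weights, np.stack(subs), axes=1)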
  9. I think several things:
     - In order to utilize all the data, you need to work with appropriate algorithms. Just throwing the data all together is not the proper way to go about it.
     - Having two different images to compare can't exclude the different processing parameters used. The top image is clearly pushed much harder and the color management is over the top (saturated beyond repair).
     The only things that should be compared between the two stacks are:
     - resulting FWHM
     - any gradients that can't be removed (nonlinear gradients at the linear stage)
     - achieved SNR
     There is even a tradeoff between the first and last points here. If you have good enough SNR, you can afford to sharpen the data more and can end up with a sharper image than if you only keep "sharp" subs but don't achieve good enough SNR to be able to sharpen the image.
  10. DSO / long exposure imaging is different. There are techniques that allow you to utilize all the data gathered, even if that data suffers from more LP, worse SNR and worse FWHM
  11. You might try to debayer your data to get an even better result. In any case, lucky imaging is a trade-off - you want to use only the best subs, when the seeing is effectively frozen and distortion is minimal, but you also want to stack as many subs as possible to get good SNR that will help with sharpening.
  12. That would be interesting to see. I haven't written any macros in FreeCAD yet, but I think it can't be that hard if it uses Python or similar.
  13. It does not need to be a zero magnitude star. You can take any reference star nearby (so that extinction and LP levels are similar) whose magnitude you know. General rule: where magnitudes subtract, fluxes divide - so a ratio of fluxes corresponds to a difference of magnitudes.
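      A small sketch of differential photometry against such a reference star (flux_target and flux_ref are assumed to be background-subtracted fluxes measured with the same instrument, mag_ref the catalogue magnitude of the reference):

        import math

        def target_magnitude(flux_target, flux_ref, mag_ref):
            """m_target - m_ref = -2.5 * log10(F_target / F_ref)"""
            return mag_ref - 2.5 * math.log10(flux_target / flux_ref)

        print(target_magnitude(25.0, 100.0, 10.0))   # a target 4x fainter than a mag 10 star -> ~11.5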
  14. I've already settled on the type of reduction for this star tracker. It will be either a split ring planetary gear type or a cycloidal drive. These can both be made very compact. The worm gear above is meant only for fine/precise alt-azimuth adjustment of the polar wedge for the star tracker. The actual drives that will move the tracker will be one of these types:
      https://www.youtube.com/watch?v=aTiBB2n-pNQ
      or
      https://www.youtube.com/watch?v=-VtbSvVxaFA
      The reduction needed is on the order of 200:1. This can be easily calculated from the needed arc seconds per micro step precision. I went with something like ~1"/micro step for the star tracker (the AZGti has 0.6"/micro step, for example).
      360 x 60 x 60 = 1,296,000 arc seconds in a single revolution.
      200 steps per revolution for a 1.8 degree stepper, and if we use, say, 32x micro stepping, we get 6400 micro steps per revolution.
      1,296,000 / 6400 = 202.5
      So we need exactly 202.5:1 if we want to get 1"/micro step. 200:1 is easily achieved in a two-stage reduction - 1:10 x 1:20 for example, or even in a single stage for the split ring compound planetary gear thingy. For a full tracker (AZGti type device) I'd go with a cycloidal drive + 1:7 belt as the final stage to offset the motor shaft from the output shaft, so the cycloidal drive would need to be something like 1:30, which is easily achieved.
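      The same reduction calculation as a small sketch (the numbers are the ones from this post):

        ARCSEC_PER_REV = 360 * 60 * 60      # 1,296,000 arc seconds in one revolution

        def required_reduction(full_steps, microstepping, arcsec_per_microstep):
            """Gear reduction needed for a target tracking resolution."""
            microsteps_per_motor_rev = full_steps * microstepping   # e.g. 200 * 32 = 6400
            return ARCSEC_PER_REV / (microsteps_per_motor_rev * arcsec_per_microstep)

        print(required_reduction(200, 32, 1.0))   # 202.5:1 for 1"/micro step
        print(required_reduction(200, 32, 0.6))   # 337.5:1 for 0.6"/micro step (AZGti-like)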
  15. Assuming that you have removed the background and you have the pure flux of the target star, the magnitude of that star is equal to -2.5 * log(F/F0), where log is the logarithm base 10 and F0 is the flux of a zero magnitude star measured with the same instrument.
      In aperture photometry there are 3 rings that are used to determine the needed quantities. The first / innermost ring is the zone where the star light is concentrated. This is how flux is measured - it is simply the sum of the values of all pixels inside that inner ring. Then there are the second and third rings. All pixels that fall between these two rings are considered background pixels, and the average value is calculated for those pixels. This is the background value.
      When you want to calculate the pure flux of the star, you do the following: you sum all pixel values in the inner ring and then subtract the mean background value (calculated from the pixel values between the other two rings) times the number of pixels inside the inner ring. This gives you the "pure" flux value.
      I don't really know what columns astrometry.net returns, but you can do aperture photometry with AstroImageJ as well and it will give you values like the ones I described. Different columns represent different values (the column source - sky, for example, is source brightness minus sky brightness, N_Src_pixel is the number of pixels used for the source, N_Sky_pixels is the number of sky pixels, etc ...)
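      A rough sketch of that three-ring measurement, assuming image is a 2D numpy array, (x0, y0) is the star centre and the ring radii are illustrative rather than tuned values:

        import numpy as np

        def aperture_flux(image, x0, y0, r_star=5.0, r_in=8.0, r_out=12.0):
            """Sum the inner ring, subtract mean background taken from the outer annulus."""
            yy, xx = np.indices(image.shape)
            r = np.hypot(xx - x0, yy - y0)
            star_mask = r <= r_star                     # innermost ring: star light
            sky_mask = (r > r_in) & (r <= r_out)        # between 2nd and 3rd rings: background
            sky_mean = image[sky_mask].mean()           # background value per pixel
            return image[star_mask].sum() - sky_mean * star_mask.sum()   # "pure" flux

        def magnitude(flux, flux_zero):
            """m = -2.5 * log10(F / F0), F0 being the flux of a zero magnitude star."""
            return -2.5 * np.log10(flux / flux_zero)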
  16. Hi Louise
      I sorted moisture with one of those Ikea Samla cases with silica beads inside and a little hygrometer to tell me the relative humidity (it sits at 10%, which I believe is the minimum for the device). That is where I store filament when not in use. I also used a food dehydrator initially, without any modifications, for a few hours and it really helped to get things started in the right direction. In fact that set me off in a whole new direction with my 3d printing / modding.
      I'm in the middle of an Ikea Lack enclosure build. I added a Banana Pi for running the Klipper firmware (I made the mistake of going with the Banana Pi M4, which is not well supported as far as OSes go, so I ended up using Ubuntu 16.04 and upgrading it to 20.04 just to be able to install everything needed for Klipper + fluidd). Right now I'm finishing the electronics enclosure for the Banana Pi + step down voltage converter, and will move on to print an electronics enclosure for the small MeanWell PSU that will power the BPI, the air filter fan and an LED strip for lighting. Above is the Banana Pi enclosure with its mounting system, ready to install on the outside of the enclosure.
      For that reason the star tracker project is a bit on hold, but it will continue soon. I'm not sure I want to use any sort of metal gears if they are not very cheap and easily accessible everywhere. The aim of the project is to be able to print the star tracker almost anywhere in the world (no need to import things - only purchase locally in a hardware store).
      As far as FreeCAD goes - I find it very useful, or rather, I learned to use it properly after all this time and now I don't even think about it. Sure, there are always some new things to be learned, but it is quite a powerful tool. Just a few days ago I learned how easy it is to create technical drawings from 3d designed objects: a piece of cake, really.
      As far as threads go - I do my own threads and it is easy in FreeCAD, but maybe a bit involved as you need to draw your own profile. I simply draw the profile of a single tooth - for either nut or bolt - and then do a helix revolution of it: additive for a bolt, subtractive for a nut. Metric threads are really easy to make - a 60° triangle is the base. I use this page whenever in doubt:
      https://en.wikipedia.org/wiki/ISO_metric_screw_thread
      I don't like the tool that is offered for threads in FreeCAD, as it uses some sort of Bezier curves and the threads end up not looking good.
  17. It certainly is an improvement. The TIFF is also a processed version. Maybe it would be better to include linear stacked data rather than processed.
  18. Bias goes instead of darks - the rest is the same, so yes, take flats.
      Proper calibration requires darks to be taken. Do this every time when working with dedicated astronomy cameras, but DSLRs are a bit different. They have "automatic dark removal" (sort of). There are a dozen or so pixels in each row that are covered and don't get exposed with the rest of the pixels - but they do participate in the exposure and accumulate thermal signal. These are then used to calculate the dark current that is removed from the other pixels that were actually exposed to light.
      This process removes dark current - but what can't be removed like that is the bias signal (the signal from reading out pixels and the analog / digital conversion units). You would normally remove that in regular calibration, as darks consist of both the read-out signal and the dark current signal, but here, since the dark signal is automatically subtracted, we are only left with the bias signal to be removed - that is why we use bias instead of darks with modern DSLRs.
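      A minimal sketch of this bias-instead-of-darks calibration, assuming light, master_bias and master_flat are matching-size numpy arrays (master frames already averaged from their subs):

        import numpy as np

        def calibrate_dslr_sub(light, master_bias, master_flat):
            """Remove bias (dark current is already handled in-camera), then flat-field."""
            flat = master_flat - master_bias      # flats need their bias removed as well
            flat = flat / flat.mean()             # normalize the flat to unity
            return (light - master_bias) / flat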
  19. You don't need a CLS filter then. A CLS filter is a bit aggressive. It works better for emission type nebulae (though not as well as a UHC) - but it hurts galaxies and other broadband targets (and reflection nebulae). In Bortle 4 you probably don't need filtering at all.
  20. That is one way of looking at it. Let's see if there is something we can do with things as they are.
      What sort of LPS filter did you use, and what is your LP like (ideally approximate SQM, or at least Bortle)? Not all LP filters are suitable for every target, and LP filters don't always help. If you by any chance used a UHC filter on a broadband target like M31, then the first order of business is to ditch the LP filter and shoot the target again. If you shoot from Bortle 5 or less, you probably don't need a filter if it is a CLS type. If you are in Bortle 4 or less, you probably don't need a special filter like the Hutech IDAS P2.
      You can shoot bias now. Bias doesn't depend on temperature (or at least it should not), nor on focus position, nor on light entering the telescope, so you can shoot it any time. It contains the read signature of the camera that needs to be removed (the bias signal). You take it by selecting the same ISO you used when taking lights, but instead of the lens, put the plastic cover on. Remember to cover the finder window with that rubber cap and set the exposure to the minimum supported. Record about 30-40 such subs.
      Stack your data again in Siril. That might make it a bit better than it is now. The rest is down to the filter used and your LP levels, so that can be tried on your next night out.
  21. You probably don't need darks at all for a DSLR, so use bias only, and shoot a bit more than just 5 of them. This camera has very low red response (hence the very green / teal appearance without proper color calibration), and when the color is managed, a lot of noise in red pops up. Try stacking with Siril, ditch the darks and use bias instead (but more than 5 - like 30-40 of them).
  22. North of Lake Tanganyika in Burundi, there is a little village called Ninga. There they teach the special math needed to determine the back focus distance for flatteners. The people of TS went there to study, but I was not fortunate enough to get the chance to spend some time in the village of Ninga and learn this sacred math of backfocus calculations