Everything posted by vlaiv

  1. I'd say that is correct sampling. It is very hard to capture all the detail at the short end of the spectrum because of the atmosphere - it bends short wavelengths the most, and seeing is at its worst at 400nm (the blue / violet side of the spectrum). For that reason, I advocate aiming for x4 pixel size - in your case that would be F/11.6 - so you are right around there. I would not worry too much about being "slightly undersampled". Gamma at capture and stacking time should be kept at 1.0 - or neutral. Only as a final step of adjustment, after you do sharpening and all - should you bring it to 2.2. Data should be kept linear for most of the processing workflow (which means gamma 1.0) - especially during capture (so that Youtube tutorial is wrong in using 0.5 for capture).
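The "x4 pixel size" rule of thumb above can be sketched in a couple of lines - the target focal ratio is roughly four times the pixel size in micrometres (the F/11.6 in the post is consistent with ~2.9um pixels, which is an assumption on my part):

```python
def target_focal_ratio(pixel_size_um: float, factor: float = 4.0) -> float:
    """Rule-of-thumb focal ratio for planetary sampling:
    roughly `factor` times the pixel size in micrometres."""
    return factor * pixel_size_um

# Assumed example: a 2.9 um pixel camera -> F/11.6, matching the post
print(target_focal_ratio(2.9))  # 11.6
```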
  2. No, it was a genuine question. As far as I know - it is a surname, and I had no idea how it is pronounced (never heard anyone pronounce it). Anyway, I've found it online (with audio) and it is pronounced the way I imagined
  3. Most people don't understand the color processing part of the workflow. One of the important steps is to properly encode color information for display in the sRGB color space. The camera produces linear data, while the sRGB color space is not linear - it uses a gamma of 2.2, and hence a gamma of 2.2 needs to be applied to the data during processing. This turns murky colors into nice looking ones. On wavelets - I use linear / Gaussian, and the actual slider positions will depend on sampling rate and noise levels, so you need to play around with those. If you've oversampled considerably - try increasing the initial layer to 2
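The gamma-encoding step described above can be sketched as a one-liner - a minimal version that treats sRGB as a plain 2.2 power law (the full sRGB standard adds a small linear toe segment, which I'm ignoring here):

```python
def encode_gamma(value: float, gamma: float = 2.2) -> float:
    """Encode a linear intensity in [0, 1] for display: out = in ** (1/gamma).
    Linear data looks "murky" on a display until this is applied."""
    clipped = max(0.0, min(1.0, value))
    return clipped ** (1.0 / gamma)

# A dim linear value of ~0.218 lands near middle grey after 2.2 gamma
print(round(encode_gamma(0.218), 2))  # 0.5
```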
  4. Park position should be the position where the scope is "parked" between sessions. It is mostly important for two reasons: 1. having a dome or otherwise closed observatory where the telescope can't be left in its normal home position (for newtonian telescopes it is better to leave the scope with the mirror on its side so that dust does not settle on the mirror). 2. when using periodic error correction without encoders. Here - the software that controls the telescope assumes that the telescope is at its park position when powered on - meaning the stepper motors are oriented a certain way and haven't been moved. If the steppers have been moved without the software knowing about it (not to be confused with manually moving the scope when the clutches are undone) - then synchronization between software and hardware is lost and periodic error correction will be wrong. Home position is just the "start" position for goto moves. It is usually defined as the scope pointing to Polaris with counterweights down (that is RA 6h and DEC 90 degrees). Most people don't bother with two different positions - they use the same position for both of these uses - it does not matter if it is the usual home or a sideways park. In any case - home should be thought of as the scope's orientation between moves / gotos / tracking, and park should be thought of as the permanent home between power cycles, more related to internal gearing and motor positions.
  5. If you want to photograph the Moon, and you have a DSLR - it is fairly easy - start with single shots of the moon. Set your exposure to somewhere in the range of 1/500 to 1/200. If the atmosphere is stable - you can go even longer, say 1/100. The image will probably be too dark - but you can fix that in post processing by adjusting brightness. The image will look noisy. Here you start encountering the noise issue and SNR, and the next logical step is to: 1. watch a youtube video that explains how to stack multiple exposures in software like RegiStax or AutoStakkert!3 2. take multiple exposures (say a dozen or so) of the moon and apply the above technique 3. process the resulting image to the same brightness and you will notice it has less noise - or better SNR. The next step would be to get a dedicated astronomy camera that can capture hundreds of images per second (without compression or distortion) and a laptop with an SSD drive, and apply the above technique to that data. Also watch youtube videos that explain how to do this with movie recordings from planetary cameras (we call those movies since they are just frames bunched up into a single file, like a movie). Learn about pixel scale, optimum sampling rate, wavelet sharpening and other sharpening techniques for processing of planetary images.
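The noise benefit of stacking can be demonstrated with a small simulation - a toy sketch (no alignment, simulated Gaussian noise, made-up signal/noise numbers) showing that averaging N frames cuts noise by roughly sqrt(N):

```python
import random

random.seed(0)

def noisy_frame(size=1000, signal=100.0, sigma=10.0):
    """Simulated frame: a constant signal plus Gaussian noise per pixel."""
    return [signal + random.gauss(0.0, sigma) for _ in range(size)]

def stack_frames(frames):
    """Average frames pixel-by-pixel - the core idea behind stacking
    (real software also aligns and grades frames first)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def stdev(values):
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

single = noisy_frame()
stacked = stack_frames([noisy_frame() for _ in range(16)])

# Noise drops roughly as sqrt(N): 16 frames -> about 4x less noise
print(round(stdev(single), 1), round(stdev(stacked), 1))
```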
  6. Yes, a reflecting telescope works the same as a regular camera lens or a telescope with a front lens. Details are a bit different - it bends light by reflecting it off a curved surface rather than refracting it when passing through a piece of glass with curved surfaces - but you guessed it - curved surfaces are the key. When you attach the camera - there is no magnification. There is "projection". Magnification is the term we use to denote angular magnification - a linear increase in the angles of incoming rays (which makes the object appear larger to us). With a camera sensor - we have projection. We take incoming light rays and project them onto a flat plane. We can talk about "scale" (which can be understood as magnification in some sense) - which determines the ratio between angle and distance at the focal plane (on the sensor surface). To see the difference between scale and magnification - imagine you have an image printed on a piece of paper - and you look at that image from one meter away and from 10 meters away. The object in the image will appear bigger (magnified) when the image is placed 1 meter away in comparison to 10 meters away - although both images have the same scale. Another example would be to print a copy scaled down 50% and hold both pieces of paper at the same distance - here we will have the impression that one image is twice as large as the other - although they are at the same distance - this is what scale does. The effect is almost the same - but the causes are different. In any case - scale in photography is determined by two factors - the size of the sensor (or sensor pixels, depending how you want to look at it) and the focal length of the lens element (it does not matter if it is a mirrored system, a glass lens system or even a combined system like with catadioptric telescopes / camera lenses).
To understand where magnification comes from - you need to study ray diagrams like this one: this is an example of magnification by a telescope (and eyepiece) - we have incoming light rays at some angle theta zero on the left, and we have a bigger angle theta e at the output (or exit). Similarly, instead of using an eyepiece - we can just look at the arrow labeled focal point and its distance from the dotted line (optical axis or center of the sensor) - it is directly related to how big theta zero is - the larger the incoming angle, the further away this arrow (or point on the sensor) is - this is the projection part, and it depends on F0, the focal length of the objective lens.
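The "scale" of the projection described above has a standard small-angle formula: arcseconds of sky per pixel = 206.265 × pixel size (um) / focal length (mm). A minimal sketch (the 4.54um / 1000mm numbers are just an illustration, not from the post):

```python
def pixel_scale_arcsec(pixel_size_um: float, focal_length_mm: float) -> float:
    """Arcseconds of sky projected onto one pixel.
    206.265 converts the um/mm ratio (radians) to arcseconds."""
    return 206.265 * pixel_size_um / focal_length_mm

# Illustrative numbers: 4.54 um pixels behind a 1000 mm focal length objective
print(round(pixel_scale_arcsec(4.54, 1000.0), 2))  # 0.94 "/px
```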
  7. For astrophotography, you have to "unlearn" all you know from regular photography. Common knowledge in regular photography is full of "shortcuts" and "implied knowledge" - and, while the same principles govern both types of photography - they operate in completely different regimes, and hence you can't use those shortcuts and implied knowledge - you must have complete understanding - or learn another set of shortcuts / implied knowledge - which is totally different. In astrophotography - we work in a light starved regime / photon counting. In regular photography - most of the time we have plenty of light, so we don't have to worry about the discrete nature of light and signal to noise ratio. Signal to noise ratio is the key in astrophotography, and you should think in terms of SNR rather than in terms of exposure. Total exposure in AP is usually measured in hours rather than seconds. We achieve this by stacking single exposures. This is for DSO / long exposure AP. For planetary, we employ yet another technique / another aspect - called lucky planetary imaging. Here it is also about SNR - but we approach it from a different angle. We have the atmosphere to contend with, and we use extremely short exposures governed by how long we can expose before the atmosphere ruins the show (think of motion blur in sports in daytime photography) - and we take tens of thousands of such exposures and stack them to get good SNR (in planetary astrophotography normal exposures are on the order of 5ms, and even less if the target is bright enough - like the moon or white light solar). Another aspect of how long the exposure should be is precision of tracking. In any case, what do you want to image and what equipment do you want to use? If we know that, we can start discussing how to determine a good exposure value (is it minutes or milliseconds).
  8. Interestingly enough, the ASI533 is remarkably close to the 460ex as far as tech specifications go. It has smaller pixels (but more of them). It has roughly the same sensor size: ~11.25mm x ~11.25mm vs ~12.5mm x ~10mm. Peak QE is very similar at 76% vs ~80%. The only problem is the pixel size mismatch. Similarly close to the 460ex is the ASI183 if one uses super pixel mode. Since it has 2.4um pixels, twice that is 4.8um, which is close to the 4.54um of the Atik 460. Peak QE is 84% - so that is also very good. People don't like this camera because of amp glow - but it is small and easily calibrated out. I would not mind using it and would probably pick it before the ASI533
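Super pixel mode, mentioned above, collapses each 2x2 Bayer cell into one RGB pixel, which is why the effective pixel pitch doubles (2.4um -> 4.8um). A toy sketch assuming an RGGB mosaic stored as a flat row-major list (real debayering code works on 2D sensor data):

```python
def superpixel(bayer, width, height):
    """Collapse an RGGB Bayer mosaic (flat list, row-major) into one
    (R, G, B) tuple per 2x2 cell - "super pixel" debayering.
    Halves resolution in each axis and doubles the pixel pitch."""
    out = []
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            r = bayer[y * width + x]
            g1 = bayer[y * width + x + 1]        # green, same row as red
            g2 = bayer[(y + 1) * width + x]      # green, row below
            b = bayer[(y + 1) * width + x + 1]
            out.append((r, (g1 + g2) / 2.0, b))  # average the two greens
    return out

# 4x2 mosaic -> two RGB super pixels
mosaic = [10, 20, 11, 21,
          30, 40, 31, 41]
print(superpixel(mosaic, 4, 2))  # [(10, 25.0, 40), (11, 26.0, 41)]
```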
  9. I also like the looks of that scope. I haven't used or seen one in person, but I think it is a good scope. Sure, at F/6, being an ED doublet - it will have some chromatic aberration, but as far as I can tell - it will be less than, say, a 4" F/10 achromat. That second scope I have used - in fact I own one - and I can say it is a lovely instrument, and the amount of CA is far less objectionable than I thought it would be. Since I owned an F/5 4" achromat at one point - I do have an idea of how bad CA can actually be - and the F/10 version really has little. Any ED scope with less CA than that is a very capable instrument.
  10. I think we will need more information on what you consider light weight. If I'm considering a light weight setup - I'm with Olly - I'd get something in the 4" class. Maybe something shorter - like this scope: https://www.firstlightoptics.com/stellamira-telescopes/stellamira-110mm-ed-f6-refractor-telescope.html It will give you some residual chromatic aberration - but it will be a good all around scope - showing you low and high magnification views well. Then, I would suggest mounting it on a good mount with slow motion controls - my choice would be this: https://www.firstlightoptics.com/alt-azimuth-astronomy-mounts/skywatcher-skytee-2-alt-azimuth-mount.html Together with a tripod, this combination weighs about 16Kg in total if I'm not mistaken. You also need about 1.2m of length to store it in your car. A 6" Dob telescope weighs about the same at 16-17Kg and requires about the same length of space, being 1200mm of focal length (so the tube is approximately that long as well); the only difference is that you need some more space for the base / rocker box, since it is more cumbersome than a folded tripod. I've transported my 8" F/6 dob in different size cars without problem, so dob telescopes fit in cars. If you need something lighter than the above - then you are looking at perhaps 80mm of aperture, but again, you won't save much on weight as mounts tend to be heavy for stability. An 80mm F/7 ED doublet weighs only about 3Kg, and a mount that can handle it - say this one: https://www.firstlightoptics.com/alt-azimuth-astronomy-mounts/sightron-japan-alt-azimuth-mount.html - weighs only about 1.5Kg - so that is 4.5Kg total. Now, depending on the tripod you decide to use - you can either end up with a 5Kg steel tripod - which will bring the total weight to ~10Kg, or you can perhaps get a very good / stable carbon fiber tripod. That option can also save storage / transport room, as such tripods tend to collapse to a smaller size.
For example - this is only 2.5Kg: https://www.firstlightoptics.com/tripods/stellalyra-carbon-fibre-tripod-with-38-thread.html That brings total to 7Kg - ok, now that is really light weight setup.
  11. Depends on the mount and the way you guide. The amount of "preload" or imbalance will depend on how aggressive your guiding is in terms of guide rate. In any case - it should be slight - just enough for gravity to keep the gears engaged when changing guide direction. It is not needed on mounts with minimal backlash, a pure belt reduction system, friction drive or direct drive. All of those like balance as close to perfect as possible.
  12. No problem whatsoever. I did not really test the clamp, but I did find a slight issue with it - which might or might not be an issue in real use. It actually has two "issues" - neither of them is a real issue until it proves to be one in use. First is clearance on the clamping side. As designed, it sits flush with the rest of the clamp, and I'm afraid that if I bolt the thing down to a mount - it won't move freely to perform its clamping action. There should really be at least some designed-in clearance for it to move freely. The second issue is the fact that I used springs around the securing nuts instead of around the linear rods that act as guides. Both springs and nuts are right handed, and when this is the case - they can sometimes "mesh" (the spring gets caught on the thread if the thread is coarse enough and the spring is fine enough). Ideally, you want different handedness of bolt and spring to avoid this, or just put the spring around a smooth part and not a threaded one. Other than that, as was pointed out in this thread - this is best printed out of some other material - namely ABS or ASA. I've since started working with other materials and I'll probably reprint this in ASA at some point. The only drawback that I've found so far with this PLA is creep. Other than that, it is very mechanically sound and does not deteriorate. There is a small amount of creep and I had to re-tighten the bolts that hold the thing together after a few months - nothing major, maybe a quarter of a turn - but it shows that creep is there. I would not leave the scope sitting on the plate for a prolonged time - but during a single session - I think it will be just fine. Just take the scope off the mount after the session to avoid prolonged loads on such a piece. ASA / ABS don't have this issue (nor PC for that matter, but it is much harder to print - maybe only Prusa PC blend could be printed on my machine at the moment). As far as the mount goes - I managed to acquire all the parts necessary.
Bearings, aluminum tubing, large PTFE washers that will be used for friction adjustment, some springs and so on, and I've started design work. Here is where I am at the moment: the horizontal arm of the T-mount: the left side is where the counterweight rod will screw in (I already designed an M6 blind rivet nut place to accept an M6 threaded rod) and the right side is where the clamp will be bolted down. All the bearings are in the assembly there and I'm just missing the friction clutch part of the mechanism and the attachment to the vertical piece. Similarly, I did the vertical column of the T-mount: it again has a 3/8 UNC (for photo tripod) blind rivet nut on the bottom side and all the bearings. I just need the top piece that will hold the horizontal arm (T joint) and the friction clamp. Not sure when I'll find time to finish the design and start building, as I've started some work on the printer itself. I did a Y axis linear rails mod and now want to fit cable drag chains for cable management, and I'm waiting on some parts from AliExpress to do a Core XZ conversion on my Ender 3 (the fun part is that I'm designing and printing all the bits myself :D).
  13. https://github.com/indigo-astronomy/indigo/tree/master/indigo_drivers/agent_alpaca
  14. INDIGO has an alpaca agent - which is very handy. You select which device in your setup you want to expose over ASCOM / Alpaca and it exposes it. No need for a special driver - the indi driver (or rather the indigo one) does the job. Alpaca functionality is somewhat limited because of a slight mismatch between the ASCOM and INDI architectures - but overall it works.
  15. Under linux, there is already a free solution available - called USB over IP. I think that there is still no reliable Windows implementation for the other side, but one can use hacks like running linux + VirtualBox with windows in a virtual machine with usb pass thru and so on. There is this repo as well: https://github.com/cezanne/usbip-win Btw, ASCOM Alpaca deals with this nicely without the need for a USB connection - but it suffers from the same issues as USBIP would. One really needs a fast connection for seamless work. I did some tests yesterday and got very poor throughput on powerline adapters. Although they are advertised as 500Mbps, they actually only have 100Mbps ports (not gigabit), so the RPI works only in 100Mbps mode. In "lab" conditions, I'm able to get 94Mbps with iperf, but as soon as I include outdoor conditions and 30 meters of extension line and attempt a connection over that - it drops to ~35Mbps. That will surely be better and more reliable than a Wifi connection, but it is slow. That is about 4 megabytes per second of transfer speed - and I'm using a camera that is 3000x2000 at 16bit - a single sub has almost 12MB of data - which means a sub download of 3 seconds at best. It was actually more like 5-6 seconds between exposures. Rather slow for someone used to USB3.0 speeds (4.8Gbps - around x5 that of gigabit ethernet). For now, I'll just explore options to see what software works best, and then I'll probably: - switch to Cat6 cable and a direct ethernet connection rather than using powerline adapters - maybe even consider a more powerful machine and use the network for a Remote Desktop connection - explore further INDIGO agents and the associated ecosystem of apps. It turns out that agents move some of the processing from client back to server (like exposure control, guiding, plate solving and such) and the network is used just for control of the agents - so basically a UI that connects to agents.
I'm not overly confident that it will work without issues, but am willing to give it a go.
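The download-time arithmetic above (12MB sub over a ~35Mbps link -> ~3s best case) can be checked with a small helper - a sketch that ignores protocol overhead, which is why real-world times came out longer:

```python
def sub_download_seconds(width: int, height: int,
                         bits_per_px: int, link_mbps: float) -> float:
    """Best-case time to move one uncompressed sub over a network link.
    Ignores protocol overhead, so real transfers will be slower."""
    bits = width * height * bits_per_px
    return bits / (link_mbps * 1_000_000)

# 3000x2000 16-bit sub (~12 MB) over the measured ~35 Mbps powerline link
print(round(sub_download_seconds(3000, 2000, 16, 35), 2))  # 2.74
```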
  16. I'm trying to avoid having heavy software on the RPI. It will just serve as an "interface" between equipment and network, and all the apps will run on my desktop computer. If KStars can run on windows (and it looks like it can) - then it would be an alternative. I have the RPI running INDI effectively (it is actually INDIGO - a different implementation of the same specification with some changes) - so connecting to it with software run on Windows is what I'm after. As far as I can tell - I can use any software I'm used to on windows if I use ascom alpaca, which is a "middle man" between ascom and indi and allows indi devices to be presented to windows software as ascom devices.
  17. It's just an implementation of an INDI server. Can Nina natively connect to an INDI server?
  18. I'll briefly describe my wide field setup and how I envisioned it working. I have an AZGti mount converted to EQ mode. An ASI178mcc sits on top of it with a Samyang 85mm F/1.4 lens (actually the T1.5 version) and a 30mm guide scope with an ASI185, in a side by side arrangement. All of that is connected to an RPI4 running an INDIGO server. It's connected via a power over ethernet adapter to my home wired network, and I can access all of that from my work computer. Everything is a wired connection for stability (USB/serial adapter for the AZGti and powerline ethernet adapter for the network). I'm now wondering what to use for the imaging stack: - imaging application - guiding application - planetarium / scope pointing app Or rather - I'm wondering if I should go for a native INDI connection or Ascom alpaca. I'm inclined to go with Nina, Phd2 and Stellarium for the above, but I'm not sure which is the best way to connect the software to the indigo server. I've tried an ASCOM alpaca connection from SharpCap and it works. I'm getting exposures and can control the camera sufficiently. I can also control all the gear via the indigo web control panel if need be. Any suggestions on what I should try first?
  19. It would be nice if someone came along who has actually used that scope for imaging. I've read reports that it is excellent visually, but faster ED doublets don't control color as well when imaging. For example, I was surprised to see the level of residual chromatic aberration of a 4" F/7 ED doublet with FPL-53 glass. I was expecting something like that from a cheaper FPL-51 ED doublet, but not from FPL-53. Now, granted, this scope is slower at F/7.8 - but it also has a larger aperture (significantly so). For comparison - the Skywatcher ED100 that is often used for imaging is F/9 and is almost color free. The ED80 is not quite there, although it is 80mm and F/7.5. And this is 125mm F/7.8. The closest thing to it is the SkyWatcher 120mm F/7.5, and I know that some people use that for imaging as well - but I have no idea how well it performs with respect to chromatic aberration. There is a cure for that - in the form of the Astronomik L3 luminance filter, but I'd rather use a sharp triplet than resort to such hacks on a doublet (although I would not mind using a doublet + filter if a similar triplet was unavailable for some reason or too expensive for me).
  20. Well, if you want to get a different FOV - just crop the 294 instead of spending money on a 462 only to have to sell it later
  21. How will that help with the "more reach" requested (i.e. longer FL and more aperture)?
  22. Here are my thoughts on this - mind you, I don't have first hand experience with said mount or scopes of that class on it. You want a triplet rather than a doublet for serious imaging. You want as much aperture as you can mount. This really limits you to below 130mm refractors, as scopes in that class quickly approach over 10Kg in weight. Here are two contenders that I managed to quickly track down: https://www.teleskop-express.de/shop/product_info.php/info/p10181_Explore-Scientific-ED-Apo-127-mm---FCD-100--carbon-tube--Hexafoc.html - a 127mm triplet at ~7Kg https://www.teleskop-express.de/shop/product_info.php/info/p3041_TS-Optics-PHOTOLINE-115-mm-f-7-Triplet-Apo---2-5--RAP-focuser.html - a 115mm triplet at ~6.5Kg. This second scope comes under many labels, and I'm sure there is an AltairAstro version as well that could be much easier to source locally.
  23. With a guide scope, I'm guessing it is probably less important than it used to be, but with an OAG - it might be a different story. I don't know how many guide stars one can pick up with an OAG, as the FOV is much smaller, and I'm also not sure how much seeing varies across such a small FOV. From planetary imaging and adaptive optics systems - we know that the isoplanatic angle is not that large - seeing disturbance seems to be the same over, say, 20-ish arc seconds, if I remember correctly. That is still much smaller than the FOV of an OAG, even at very long focal length with a small sensor (say you have an ASI120, which is 1280 x 1024, and you use 2 meters of focal length so you end up at 0.4"/px - you still have around 500 arc seconds across the sensor). However, I have no idea how much the tilt component of the wavefront error alone changes with angular distance. I'm inclined to think that a selection of stars in a small FOV such as one gets with an OAG won't have enough diversity in star position to average them out, but I could be wrong. In any case - the longer the exposure, the closer the averaged star position approaches the true star position. There is also an "SNR approach" to guiding. I once read, in some document, a very sensible argument (and I tend to agree with it) that we could view guiding in a similar way to imaging - by examining signal to noise ratio. In this instance the signal is the actual mount error and its respective correction, and the noise is, well, noise - inaccuracy both in determining star position and in the issued correction, as no mount is perfect and will not respond accurately to a guide pulse - there will always be some backlash, some inertia to overcome, some oscillations due to the weight on the mount being moved, and so on ... The conclusion of the paper is that we increase SNR in part by reducing the number of corrections issued, and that corresponds to a long guide cycle.
As long as the mount does not accumulate significant error in that time (a smooth mount) - a long guide cycle is better than a short one because of this, even if we don't chase the seeing.
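The OAG FOV estimate above can be reproduced from the small-angle formula - a sketch assuming the ASI120's 3.75um pixels (the post only quotes the resulting ~0.4"/px scale):

```python
def pixel_scale_arcsec(pixel_size_um: float, focal_length_mm: float) -> float:
    """Arcseconds of sky per pixel: 206.265 * pixel_size / focal_length."""
    return 206.265 * pixel_size_um / focal_length_mm

# Assumed ASI120-class sensor: 1280 px of 3.75 um pixels at 2000 mm
scale = pixel_scale_arcsec(3.75, 2000.0)  # ~0.39 "/px, i.e. roughly 0.4 "/px
fov_width = 1280 * scale                  # ~495", close to the ~500" quoted
print(round(scale, 2), round(fov_width))
```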