I made an acquisition and processing tutorial a while back (3 years ago? Yikes!) and it is fairly dated in terms of what I'm doing these days. I've been asked for a long time to make a new one showing my current workflow: specifically, how I process a single-shot image for both the surface and the prominences, and how to combine them to show prominences and the surface at once. I've abandoned split images and composites and work strictly from one image using layers. Acquisition does not use gamma at all anymore. Nothing terribly fancy, but it's not exactly intuitive, so hopefully this new video will illustrate most of the fundamentals to get you started.

Instead of an hour, this time it's only 18 minutes, real time from start to finish. I'm sorry for the long "waiting periods" where I'm just waiting for the software to finish its routine; the first one typically lasts a minute and a half at most. The first 4 minutes is literally just stacking & alignment in AS!3. I typically go faster than this, but I wanted to slow down enough to talk through what I'm doing as I do it. Hopefully you can see each action on the screen. I may have made a few mistakes or used a few incorrect terms; forgive me for that, this is not my day job. I really hope it helps folks get more into processing, as it's not difficult or intimidating once you see a simple process that uses only a few tools. The key is good data to begin with and a good exposure value.

Today's data came from a 100mm F10 achromatic refractor and an ASI290MM camera with an Ha filter. I used FireCapture to acquire the data with a defocused flat frame. No gamma is used. I target anywhere from 65% to 72% histogram fill. That's it! The processing is fast and simple. I have a few presets that I use, but they are all defaults in Photoshop. A lot of the numbers I use for parameters are based on image scale, so keep that in mind and experiment with your own values. The only preset I use that is not a default is my coloring scheme: I color with Levels in Photoshop, and my values are Red 1.6, Green 0.8, Blue 0.2 (these are mid-point values).
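If you're curious what those mid-point values actually do under the hood: the Levels mid-tone slider is a per-channel gamma adjustment (output = input^(1/mid-point)), which is why 1.6/0.8/0.2 turns a mono image orange. A minimal sketch of that math (purely an illustration, not Photoshop code):

```kotlin
import kotlin.math.pow

// Not Photoshop code, just the underlying math: the Levels mid-point slider
// applies a per-channel gamma curve, output = input^(1 / midpoint).
fun levelsMidpoint(value: Double, midpoint: Double): Double =
    value.coerceIn(0.0, 1.0).pow(1.0 / midpoint)

// Colorize one normalized mono pixel (0.0..1.0) with the values above.
fun colorize(mono: Double): Triple<Double, Double, Double> = Triple(
    levelsMidpoint(mono, 1.6),  // red lifted
    levelsMidpoint(mono, 0.8),  // green slightly suppressed
    levelsMidpoint(mono, 0.2)   // blue strongly suppressed -> orange result
)
```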
Processing Tutorial Video (18 minutes):
RAW (.TIF) files available here to practice on (the same images you will see below as RAW TIFs):
Video for Acquisition, Focus, Flat Calibration and Exposure (20 minutes):
(Please let me know if any links do not work)
Results from today using this workflow.
SSM data (sampled during 1.5~2 arc-second seeing conditions):
Equipment for today:
100mm F10 Frac (Omni XLT 120mm F8.3 masked to 4")
Baader Red CCD-IR Block Filter (ERF)
PST etalon + BF10mm
SSM (for fun, no automation)
This is my first post at SGL, and it will be quite long. I am not a native English speaker, so please excuse any mistakes.
I have quite a few plans for my telescope mount and its goto control, and I am looking for feedback and comments. If somebody else has done a similar project, please let me know. And please feel free, and encouraged, to make suggestions, share ideas, offer criticism, etc.
The story in a few buzzwords: Raspberry Pi Zero → direct control of TMC2209 stepper drivers via the Pi's UART serial interface to drive my telescope mount. I am writing software (possibly open source?) to control the mount. The language will NOT be C, as is typical for microcontrollers (I know of OneStep, for instance).
I am using Kotlin, a modern JVM language.
I think this should be enough information to filter for readers who are interested in the rest of my post.
Now the long and detailed story:
My professional background: I am a physicist and did a PhD in EE (power electronics). Later, I became a software engineer. Besides being fascinated by astronomy, I am a tinkerer (RepRap 3D printer, electronics, ...). I ground my first mirror (a 6'' Schiefspiegler) when I was 15 years old, and I built the Cookbook CCD cameras in the '90s.
After many years without a telescope (studies, relationship, ...), I settled down with my family and started to get back into astronomy.
Recently, I bought quite a massive second-hand mount: the "Vixen New Atlux", from another stargazer in Switzerland. In my opinion, the New Atlux's mechanical design is superb. It has (had...) internal wiring, the counterweight bar can be hidden in the mount for transport, good polar alignment screws, and an excellent polar finder with a dimmable LED.
But on the other hand, the electronics: two weak servo motors in combination with the incredible Starbook 5... Sigh... the Starbook...(!) Well, the mount is just superb, and no more comments about the Starbook game boy, which shall rest in peace at the garbage dump.
I removed the servos and all the electronics, and I put two stepper motors into the mount, coupled to the gears with toothed belts. My original plan was to put an Arduino into the mount to control the steppers. I have an old goto Celestron CG-5 with StarSense, and it would have been quite easy to mimic the servos of the old CG-5 with the Arduino as an interface and translate the StarSense control signals to my New Atlux. I can write C, and there is even an open source project called OneStep, which uses a microcontroller in a similar way.
But I don't like writing C code anymore. In the 3D printer community, people need real-time electronics to control the printer steppers. Due to that real-time requirement, C on a real-time microcontroller (Arduino & similar) is the only option for 3D printers.
Do we need real time for our telescopes? No. We don't need to control lots of motor accelerations or do high-speed control. For a telescope, we need to set the motor speed precisely, and we need to drive to any position in an accurate, controllable, and slow way.
Then there are the new stepper motor drivers available with as many as 256 microsteps. The TMC2209 stepper driver, which is very well known in the 3D printer community, does not vibrate at all. It runs smoothly even at very low speeds. I drive my motor at 0.25 rpm (sidereal speed). For a slew, I can accelerate to 1500x sidereal speed, which would also easily allow me to track the ISS. Wonderful.
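To make the speed handling concrete, here is a minimal sketch of the arithmetic. The TMC2209's internal step generator runs at VACTUAL * fCLK / 2^24 microsteps per second; the motor constants and the ~12 MHz clock below are assumptions, and the actual UART register write is omitted:

```kotlin
// A sketch of the speed arithmetic, assuming a typical 200-step motor,
// 256 microsteps, 0.25 rpm at sidereal rate (as above), and the TMC2209's
// default ~12 MHz internal clock.

const val FULL_STEPS_PER_REV = 200          // assumption: 1.8 degree stepper
const val MICROSTEPS = 256                  // TMC2209 maximum
const val SIDEREAL_RPM = 0.25               // motor speed at 1x tracking
const val F_CLK_HZ = 12_000_000.0           // TMC2209 internal clock (approx.)

// Microsteps per second for a given multiple of sidereal speed.
fun microstepsPerSecond(siderealMultiple: Double): Double =
    SIDEREAL_RPM * siderealMultiple * FULL_STEPS_PER_REV * MICROSTEPS / 60.0

// TMC2209 VACTUAL register value: v[microsteps/s] = VACTUAL * fCLK / 2^24.
fun vactualFor(siderealMultiple: Double): Int =
    (microstepsPerSecond(siderealMultiple) * (1 shl 24) / F_CLK_HZ).toInt()

fun main() {
    println(vactualFor(1.0))      // ~298     -> sidereal tracking
    println(vactualFor(1500.0))   // ~447,000 -> full-speed slew
}
```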
The current status of my project is:
- The mount is equipped with the two new motors.
- The TMC2209 drivers are connected to the Raspberry Pi's GPIO interface, and I can control them via software. Theoretically, I could attach up to 4 motors to a single UART interface (one-wire protocol). For instance, a focuser or a filter wheel could be attached.
- I selected Kotlin as the language. Java would also have been possible, but I think for a new project, Kotlin leads to much more readable code.
- The TMC drivers can be driven by a chip-internal clock signal. Unlike what the 3D printer community does (they use the step/dir pins and generate every single microstep with the microcontroller), I can send a "speed" command from the Raspi via UART to the 2209 chip, and it will execute that speed for me without any further action.
- The only time-critical issue is that I need to precisely count the steps the 2209 stepper drivers execute. This is done via a GPIO pin receiving the driver's index signal (a pulse for every 2209 full step). Here comes the pain with a non-real-time Linux on the Pi: for user programs, it is impossible to guarantee that every pulse from the stepper drivers will be registered, and I cannot afford a step count that drifts over time. The solution was to write a Linux kernel module in C. Yes, I wrote above that I don't want to write any C code; well, a few lines for the kernel module were indeed necessary, and I can live with that, knowing that the rest will be written in Kotlin. The only task of the kernel module is to count every registered step at the Pi's GPIO input pins. Its output is then mapped to a character device file in /dev/ for every stepper, since in kernel space it is possible to register and count interrupts without missing a single one. (See the Kotlin sketch after this list for how the userspace side can read it.)
- From a hardware point of view, this is indeed everything I need.
- The project cost so far: 2x10€ for the stepper drivers, 2x10€ for the motors, 2x20€ for the toothed belts and pulleys, 10€ for the Pi Zero plus some peripheral expenses (micro SD card, USB charger), and 1200€ for the used Vixen New Atlux mount. And a lot of time.
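As promised above, here is a minimal sketch of the userspace side reading the kernel module's counter. The device name and the ASCII text format are placeholder assumptions for illustration, not my module's actual interface:

```kotlin
import java.io.File

// Userspace side of the kernel-module step counter. Assumes the module
// exposes one character device per axis that returns the cumulative
// full-step count as plain text (a hypothetical interface).
class StepCounter(private val device: File) {
    fun read(): Long = device.readText().trim().toLong()
}

fun main() {
    val ra = StepCounter(File("/dev/stepcount0"))  // hypothetical device node
    val before = ra.read()
    Thread.sleep(1000)
    println("RA axis moved ${ra.read() - before} full steps in one second")
}
```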
I have so many ideas on how to extend the ecosystem of my software, but these are for the longer term (maybe years from now):
- Multi-star alignment. The alignment should be continuously updatable during an observation night. With a set of stars, it should be possible to calculate the quality of the alignment points and, e.g., drop the erroneous ones.
- PEC correction (should be easy on the Pi).
- End-stop support.
- The polar alignment routines of today's goto scopes are quite good, but what I would like to have is audio feedback when I move the alignment screws in the right direction.
- The possibility to pre-plan an observation night (e.g. the mount could tell you that a Jupiter moon's shadow will be on Jupiter in a few minutes).
- Recording the telescope movements during the night in order to be able to tag any picture.
- The TMC drivers have much more capability than I am currently using. For instance, the stepper current could be regulated during slews to exactly the value needed without stalling. This saves a lot of energy.
- The TMC drivers have a feature called "StallGuard". This could be used instead of end-stop switches (this is frequently done on 3D printers).
- Advanced tracking options: sidereal, solar, and lunar speed, or even ISS speed (see the sketch after this list). Tracking in both axes (e.g. to compensate for polar misalignment or atmospheric refraction) or just in right ascension. Commercial mounts do not allow much customization here.
- With slow slew speeds, 5V input via a USB-C cable is sufficient for the Pi + motors. USB-C and newer USB battery packs can output higher voltages, and with a "USB trigger" the input voltage can be selected to my needs. A higher voltage allows higher slew speeds but consumes more power.
- Autoguider support, or even better: simply connect a webcam to the Pi's USB connector and do the guiding on the Pi.
- The Raspberry Pi touch screen could be used for telescope control.
- Advanced German mount limits and meridian flip control (e.g. a warning about a necessary flip when driving to a specific goto target).
- An Android app, connected via WiFi to the Pi, could be used as a display alternative.
- Voice control (have a look at Mycroft, an open-source voice assistant): "Hey mount, please slew to the Whirlpool Galaxy!"
- Controlling the mount via SkySafari and Stellarium.
- The Pi has a built-in camera interface. How about an open-source auto-align? The Pi could look at the stars to align itself, which makes a lot of sense. I have already ordered a long focal length lens and a monochrome camera from Arducam to do some experiments (the standard Pi camera has a 3.5 mm focal length and is not really usable, although star imaging is possible). My first observation site is my balcony, and there the real StarSense does not work at all: it always spin-loops on two alignment positions where the sky is covered by the roof. How silly is that? This can be done better. Furthermore, StarSense only does an initial alignment; it should update its position and accuracy over time! I think I could do this better.
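For the tracking-rate idea, a tiny sketch of how the rates could be expressed. The constants are the ones commonly quoted (e.g. by the ASCOM DriveRates standard); a real implementation might compute the lunar rate from an ephemeris, since the Moon's motion varies:

```kotlin
// Tracking rates in arcseconds per second of time (commonly quoted values).
enum class TrackingRate(val arcsecPerSec: Double) {
    SIDEREAL(15.041),
    SOLAR(15.0),
    LUNAR(14.685);   // mean value; varies along the Moon's orbit

    // Multiple of sidereal speed, e.g. to feed into the motor speed setter.
    fun siderealMultiple(): Double = arcsecPerSec / SIDEREAL.arcsecPerSec
}
```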
Besides all my ideas, the first and most important focus of the software will be:
Readability (hence my choice of Kotlin), extensibility, and open source. I would like to have the maths of the internal mount model clearly visible and understandable in the software. The calculations done inside all our goto mounts are no rocket science. I admit, I am the nerd who wants to go the hard way and implement this from scratch.
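To give a flavor of that maths, here is a minimal sketch of the core goto conversion for an equatorial mount. The gear ratio and step constants are made-up placeholders, not my mount's actual values:

```kotlin
// Core idea of an equatorial goto: hour angle = local sidereal time - RA,
// then scale through the gear train to a motor step position.

const val GEAR_RATIO = 240.0                    // hypothetical worm/belt ratio
const val MICROSTEPS_PER_MOTOR_REV = 200 * 256  // full steps x microsteps

// Hour angle in degrees, normalized to [-180, 180).
fun hourAngleDeg(lstHours: Double, raHours: Double): Double {
    var ha = (lstHours - raHours) * 15.0        // 1 hour = 15 degrees
    while (ha < -180.0) ha += 360.0
    while (ha >= 180.0) ha -= 360.0
    return ha
}

// Microstep position of the RA axis for a given hour angle.
fun raAxisMicrosteps(haDeg: Double): Long =
    (haDeg / 360.0 * GEAR_RATIO * MICROSTEPS_PER_MOTOR_REV).toLong()
```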
I am looking for a good project name. Do you have any suggestions? How about QuickStep? Or is that too close to OneStep and would it offend the creators of OneStep?
Does anyone have an interest in joining my plan? Doing such a project in a small group would be more encouraging than just doing it for myself. And of course, later on, I would appreciate it if other stargazers would upgrade their old mounts with my software.
Any comments on my project plan are welcome!
QHY5-II (mono) - as new, with original box and accessories.
Great guiding camera!
Check some photos at Astrobin or Stargazers Lounge under "paulobao".
I just quit the hobby and sold all my gear (FS102, G11, QSI532, etc...).
Price 70 GBP shipping included.
I have left astronomy and sold my G11, FS102, QSI532...
From this site https://www.widescreen-centre.co.uk/starlight-instruments-feathertouch-focusers.html in the UK, all of this would cost 1147 GBP!
I am selling it for...basically free.
Everything in "as new" condition (actually one digital motor was never used). Very precise...MADE in the USA.
You can see my work on Astrobin or Stargazers Lounge as "paulobao".
Thanks for looking.
I can send you more pics, references, etc.
Price: 250 GBP shipping included.
Hi, a very newbie question here, but for the life of me I cannot figure it out.
I have an EQM35 goto mount linked through a Sky-Watcher WiFi dongle to the SynScan Pro app on my Android Galaxy S9 phone.
I can use the app, but a couple of things are either greyed out or don't respond when pressed:
1. Point and track
2. Tracking doesn't seem to work (tried the Moon with lunar tracking, but the mount doesn't move).
Thanks in advance for your help.