Everything posted by wimvb

  1. Yes, I saw that a few days ago in my newsfeed. The cross was discovered back in 2021, but has now been confirmed. Unfortunately it’s much too small to capture with an amateur telescope. The cross is 5” wide at most, which is only about 5 pixels at 1”/pixel.
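     As a back-of-the-envelope check (my own illustration, not from the article), the pixel span is just the angular size divided by the image scale:

        # angular size of the object divided by the image scale gives its size in pixels
        cross_width_arcsec = 5.0   # quoted maximum width of the cross
        image_scale = 1.0          # arcsec per pixel, the scale quoted above
        pixels_across = cross_width_arcsec / image_scale   # = 5 pixels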
  2. Still no astro darkness for another two to three weeks. So, apart from tinkering in my observatory and things to do around the house, I'm reprocessing old data with new tools in the PI toolkit. This is data that I collected back in 2021: Melotte 15.
     The Xterminator suite of processes allowed me to create a version that is closer to what I had intended when I captured the data: BlurXterminator for sharpening, StarXterminator so that stars and nebula could receive their own processing, and NoiseXterminator to clean up. I even tried a GHS stretch, but found a much simpler solution to stretch the nebula. After star removal, I simply used Histogram Transformation ("levels" in PS) to bring in the white point and black point for each channel separately (avoiding clipping). After that, some local contrast enhancement and a colour saturation boost. Then I brought the stars back in (stretched with a masked stretch).
     I goofed around with the data some more and found a simple way to fake a Hubble SHO palette. If you have a standard HaRGB image, you simply invert it, apply SCNR green, and invert it back. Voila! The result was a little too "mustard" for my taste, so I blended it 50-50 with a version in which I used PixelMath to mix the channels of the original image:
     Red: $T[0]
     Green: 0.8*$T[0] + 0.1*$T[1] + 0.1*$T[2]
     Blue: (2*$T[2] + $T[1])/3
     Personally, I like the natural look of the HaRGB image better. Btw, that blue patch in the middle of the image is not an artefact. It shows in Aladin's SDSS plates, but I couldn't find a reference or ID for it.
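     For anyone who wants to try the channel mix outside PixInsight, here is a minimal numpy sketch of what those PixelMath expressions do, assuming a float RGB array in the 0..1 range. It is my own illustration, not the PixInsight implementation:

        import numpy as np

        def fake_sho_mix(img: np.ndarray) -> np.ndarray:
            """Mix the channels of an HaRGB image as in the PixelMath above (img is H x W x 3, RGB)."""
            r, g, b = img[..., 0], img[..., 1], img[..., 2]
            out = np.empty_like(img)
            out[..., 0] = r                              # Red:   $T[0]
            out[..., 1] = 0.8 * r + 0.1 * g + 0.1 * b    # Green: 0.8*$T[0] + 0.1*$T[1] + 0.1*$T[2]
            out[..., 2] = (2 * b + g) / 3                # Blue:  (2*$T[2] + $T[1])/3
            return np.clip(out, 0.0, 1.0)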
  3. This interesting story showed up in my news feed this morning: https://petapixel.com/2023/08/01/canons-new-camera-can-see-subjects-from-miles-away-even-at-night/
     The camera uses a so-called SPAD (Single Photon Avalanche Diode) sensor. Avalanche photodiodes (APDs) are nothing new; they are the most sensitive photodiodes out there. They work on the principle that a photon "creates" a free electron, which is then accelerated by means of a high voltage. The electron gains speed and knocks other electrons from their positions in the sensor material (silicon). These electrons are also accelerated, etc. In effect this causes an avalanche of electrons. APDs are mainly used to detect digital signals in optical communication systems. This is the first time I've seen their use in "high resolution" (3.2 Mpixel) consumer cameras. Btw, of those 3.2 Mpixels, only 2.07 Mp are effectively usable on the 13.2 × 9.9 mm sensor.
     The main problem with this new gadget is its price tag. According to the article, you'd have to put $25k on the counter if you want one. That's more than many of us have invested in our entire AP gear.
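     A toy model of the multiplication principle described above (purely illustrative; the gain per step and the number of steps are made-up numbers, not Canon's sensor parameters):

        # each acceleration step lets one carrier knock loose roughly k more carriers,
        # so the charge from a single photon grows geometrically with the number of steps
        def avalanche_gain(k: float, steps: int) -> float:
            return k ** steps

        print(avalanche_gain(2.0, 20))   # ~1e6 electrons from one photon in this toy model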
  4. I’m not an ASA owner, but one thing I thought of when reading your post is this: your mount must have some firmware that any driver and the ASA software must communicate with. As such, there must be a protocol for that communication. Either this protocol is still used in the new mounts, or ASA may be willing to share it with their user base. Without such a protocol, it will be almost impossible to write software for the mount. You could contact ASA to find out if they are willing to share information on their obsolete mounts. As an alternative, you could try to get in contact with the developer of the ASCOM or INDI driver. If that is not ASA, they may be able to help you. If INDI is an option, post your request on the Indilib forum.
  5. Having used the EQ3 Pro with a 5 kg 150PDS and DSLR, I can only confirm what others have said. I find that the small mount (with aluminium tripod!) and a large Newtonian is a very odd combination to offer as a beginner's astrophotography setup. But the EQ3 Pro combined with a short focal length refractor can make a very handsome travel kit. The EQ3 only has 4 bearings, and those hold the worm in place. The EQ3 relies on its teflon shims in combination with "Chinese black goo", aka high viscosity grease, for smooth performance of RA and DEC. Some time ago I tried to install needle bearings on the RA and DEC shafts, but they added too much height. I followed this video. A belt mod would be interesting, and could improve guiding. From my experience with the belt driven AZ-EQ6, high frequency oscillations are mainly due to improper belt tension. You'd have to figure out the reduction of the current gears and source appropriate timing pulleys, but then it could work with its hand controller. If you can't find the exact hardware, you'll have to make your own motor drivers.
  6. You can do mosaics without a computer, but it requires some planning. First off, align your camera such that the sensor's edge is parallel to RA. Centre on a star and take a 30 s exposure (no guiding). 5 seconds in, slew in RA at 1 x sidereal with the left or right button on your hand controller. This will create star trails. After the exposure, rotate the camera, and repeat until the star trail is parallel to either the long or short edge of your sensor.
     Next, use a planetarium program to plan your mosaic. You can use the RA/DEC grid as a guide. When it comes to executing the plan, use stars in your field of view to align. Most astrophotography software will automate this process (except for camera rotation); a rough sketch of the panel layout step is below. I use Ekos/Kstars on Linux, but I believe that NINA on MS Windows can do it too. So can Sequence Generator Pro (SGP), but that is paid-for software. Ekos and NINA are free. Good luck.
     Btw, getting a camera with a larger sensor probably won't work. The clear viewing circle of most scopes won't cover a full frame sensor, so you will get lots of vignetting instead. A reducer may have the same effect.
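     If you do want to script the planning step, here is a rough sketch of how panel centres could be laid out on an RA/DEC grid with some overlap. The function name, parameters and the simple cos(Dec) correction are my own assumptions, not taken from any particular planetarium program:

        import math

        def mosaic_centres(ra0, dec0, fov_ra, fov_dec, cols, rows, overlap=0.2):
            """Return a list of (ra, dec) panel centres in degrees for a cols x rows mosaic."""
            step_ra = fov_ra * (1 - overlap)
            step_dec = fov_dec * (1 - overlap)
            centres = []
            for j in range(rows):
                for i in range(cols):
                    dec = dec0 + (j - (rows - 1) / 2) * step_dec
                    # RA steps grow with 1/cos(dec) because RA circles shrink towards the poles
                    d_ra = (i - (cols - 1) / 2) * step_ra / math.cos(math.radians(dec))
                    centres.append((ra0 + d_ra, dec))
            return centres

        # example: 2 x 2 mosaic around RA 84, Dec 0 with a 1.0 x 0.7 degree field of view
        print(mosaic_centres(84.0, 0.0, 1.0, 0.7, cols=2, rows=2))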
  7. Yep. And the posted photos of the hand controllers show different versions. The original post had 2 capacitors near the connector, with one blown. The newer version has no capacitors in that area. In case someone stumbles upon this old thread in the future ...
  8. That used to be how I would tackle high dynamic range too. But recently I got good results without masks, using the newest kid on the block, GHS in PixInsight. For this type of target, it's like curves stretch on steroids.
  9. The same is common for galaxies, where the ID is a combination of a prefix and the RA and DEC in hrs:mins:secs +/- degs:mins:secs, e.g. J22293917-1945018.
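     As an illustration (my own, and the exact digit layout varies between catalogues), such a J-identifier just packs the sexagesimal coordinates into one string:

        def j_name(ra_h, ra_m, ra_s, dec_d, dec_m, dec_s):
            """Pack RA (h, m, s) and signed Dec (d, m, s) into a J-style identifier."""
            sign = "-" if dec_d < 0 else "+"
            name = "J{:02d}{:02d}{:05.2f}{}{:02d}{:02d}{:04.1f}".format(
                ra_h, ra_m, ra_s, sign, abs(dec_d), dec_m, dec_s)
            return name.replace(".", "")   # catalogues usually drop the decimal points

        print(j_name(22, 29, 39.17, -19, 45, 1.8))   # -> J22293917-1945018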
  10. Congratulations to all the winners. Those are great entries.
  11. That knowledge is called experience, and there's only one way to get it. I don't know Siril, but many a masterpiece in AP has been produced using PS, so the software is capable, and there are tutorials available. I believe that Adam Block still has some on his web site, probably behind a paywall. Early on in my AP adventure, I chose to buy PixInsight, and to this day it's the only software I use.
     Any software assumes some basic workflow and is based on some general principles. Some of these you already know from your other photography work: highlights, mid-range, shadows. In AP terms: high brightness (high signal to noise ratio), mid-range, and background (low signal to noise ratio). Then there is luminosity (luminance) and colour (chrominance), familiar if you do LRGB imaging. In PI, you also have "wavelets", which are levels of detail (similar to harmonics in sound, or Fourier series in signal processing). If you keep such principles in mind while you process images, the individual steps make more sense.
  12. When we learn new things, we start by copying and repeating what others have done before us. This is the same for a child learning to write, to do maths or to master a musical instrument as it is for us learning astrophotography and image processing. Over time we start to grasp the basics and we figure out the principles behind the techniques. That's when we become creative. After enough practice, we get fluent, and the basics become second nature.
     One problem with the self study of AP is that there are so many different software packages, and so many different teachers (youtube tutorials mainly) available. So where to start? Which teacher is best? Another problem is that when we start in AP, we usually start with poor data and no knowledge, while tutorials usually assume good quality data and basic knowledge. As a beginner, you're fighting an uphill battle. So, to learn AP processing, try to find good quality data to practise on. One repository is in this forum. Some time ago, FLO released high quality data from their IKI observatory. This is excellent material to learn image processing with. What's more, people have published their results using various software packages on this forum, so you can see what to expect. https://stargazerslounge.com/forum/294-iki-observatory/
     A third problem is that many beginners are too impatient. Rather than learning what the individual steps in a workflow, or the individual processes in a software package, actually do, and how to use those that are available, they jump from one software to another trying to find quick fixes. Hence the popularity of ever new software (EZ suite in PI, Topaz sharpening, GraXpert). Choose one software, and learn that before you mix other software into your workflow.
     While you learn the art of image processing (despite being highly technical in nature, it is an art), you also need to learn the other half of AP: data acquisition. For this, have a look at this resource. It is dated, but the principles still hold. https://www.firstlightoptics.com/books/making-every-photon-count-steve-richards.html
     The number one problem with beginners' data, in my experience, is the lack of it. Beginners generally underestimate the amount of time it takes to gather good quality data. When I started in AP (and others have confirmed it was the same for them), I started out as a space tourist, shooting as many different targets as possible on a single night. (Like tourists visiting as many places as possible during their short vacation.) Now, I try to capture at least 10-15 hours of data per object, sometimes spending several weeks collecting that data.
     People say that PixInsight has a steep learning curve. Anything has a learning curve. How steep you find it depends on how fast you are going, or want to go. Rather than racing for the peak, enjoy the climb.
  13. The green line is just the number of subs (x-axis) up to that fwhm (y-axis). There were a few subs with very low fwhm. The first increase (first half of the diagram) is actually the subs that you collected during the second night. The right half of the green curve represents the subs that you collected during the first night, where fwhm was generally higher. The graph shows that about 20 subs had a fwhm larger than 5 pixels. These are the peaks in the purple line.
     When you stack, you can use a weighting factor that takes fwhm into account. Subs with a low fwhm will weigh in more than subs with a large fwhm. In PixInsight I sometimes use the formula weight = (max - fwhm) / (max - min), where max is the highest fwhm and min is the lowest fwhm. When fwhm = min, that sub's weight = 1, and when fwhm = max, that sub's weight = 0 (see the sketch below).
     As @ONIKKINEN wrote, the first night was bad. This can have several causes, of which you can control a few. What you can control is focus and, to a point, guiding. Either use a focus motor for autofocusing, or a Bahtinov mask. Focus can shift during the night (either due to temperature drift or the focuser slipping). If you look at the background median (there should be a similar diagram in Siril), this can tell you if there were high clouds or low clouds passing by, or if the target was lower in the sky (more sky glow). Plotting the altitude that your scope was pointing at can also give an indication. The capture software that I use (Kstars/Ekos) plots the telescope's altitude as well as the image's median value during an imaging session. As the target's altitude decreases after the meridian, the background median value starts to increase.
     What you can't control (but can monitor) is atmospheric conditions. If seeing is bad, the guide curve will be noisier and the guiding error (rms) will be larger. This will give a higher fwhm value. On the other hand, high thin clouds (bad transparency) can give excellent guiding with low rms, but fat stars, because their light is scattered in the atmosphere. Passing clouds raise the median background value and give fewer stars to guide on. The guide star signal will also be lower, so PHD will report a lower SNR.
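     For reference, the weighting formula above as a small numpy sketch (my own illustration; PixInsight's weighting options are more elaborate):

        import numpy as np

        def fwhm_weights(fwhm: np.ndarray) -> np.ndarray:
            """weight = (max - fwhm) / (max - min): best-seeing sub -> 1, worst -> 0."""
            lo, hi = fwhm.min(), fwhm.max()
            return (hi - fwhm) / (hi - lo)

        print(fwhm_weights(np.array([2.8, 3.1, 5.2, 4.0])))   # [1.0, 0.875, 0.0, 0.5]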
  14. I think that's the main cause for your results. Unmodded cameras don't pick up much Ha, and the Elephant's trunk is mostly an emission nebula. I tried pushing the data harder, but ended up with that mottled background. I think you may get marginally better results with aggressive dithering and more data, but in all honesty if you want to capture emission nebulae, you need a camera that is more sensitive to Ha. Larger nebulae are most suited for your telescope's focal length of 440 mm (according to the fits header of your image).
  15. Yes, but how many pixels did you dither? I believe that your image scale is 1.8 ”/pixel. A 12 pixel dither would be about 22 arc seconds (1.8 x 12). I don't know how many pixels that is on your guide camera (22 / guide camera pixel scale).
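     The same arithmetic as a tiny sketch; the guide camera scale below is just a placeholder assumption, not the actual value:

        main_scale = 1.8                          # arcsec per pixel on the imaging camera
        dither_px = 12                            # dither size in imaging camera pixels
        dither_arcsec = main_scale * dither_px    # ~21.6 arcsec
        guide_scale = 3.5                         # arcsec per pixel on the guide camera (placeholder)
        dither_guide_px = dither_arcsec / guide_scale   # ~6 guide camera pixels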
  16. I played around in PI with your image, and came up with this. Your image has a common DSLR signature: colour mottle. Tony Hallas did a presentation on DSLR astrophotography almost ten years ago, where he discussed this. His remedy: dither aggressively, about 12 pixels at least. Here's the link to his presentation: https://www.youtube.com/watch?v=PZoCJBLAYEs&t=3s
  17. @symmetal I got the reference for the dwarfs from the article I linked to. They’re in one of the figures.
  18. Do you want ”rotor trails” instead of satellite trails? 😉
  19. Thank you, Alan. Much appreciated. I have a habit of researching my images on Aladin/Simbad either before I capture the data, or during processing. That's where I found the reference.
  20. Thank you, Sunshine. It's even more special when those galaxies form an interesting composition, as here.
  21. As the title says, this is a rework of an old image. I collected the data for this one back in the spring of 2020, so it must be one of the first from my observatory. New tools and a little more experience allowed me to pull out more detail and better colours. Dwarf galaxies that are forming in the tidal tail of NGC 4216 are just visible, but to show the tidal tail itself, I would need more data.
     Technical details: SW MN190 on SW AZ-EQ6 with ZWO ASI174MM-Cool camera and ZWO LRGB filters. Total integration time: 13 hours. Processed in PixInsight with BXT and NXT. I used GHS stretch just to test it on galaxy images, but imo, it doesn't make much of a difference compared to histogram transformation and curves transformation.
     In the inverted and super stretched luminance master, several more interesting features are visible. I've annotated the image with reference to this article: https://iopscience.iop.org/article/10.1088/0004-637X/767/2/133 A, B, C, and D are dwarf galaxies. The galaxies are the darker areas just above the letters, except for galaxy D, which is just below the D. The darker structure F4 is a tidal stream extending from galaxy VCC 165.
  22. I haven't imaged the Orion nebula that often, for various reasons. For me it's a bit low in the sky and obstructed by a few trees near my observatory. This means that I would have to toss a substantial number of subs from a sequence. Another reason is that this target is so common that it doesn't offer anything new. But in 2021 I pointed my scope at this nebula for about half an hour and collected 72 x 30 s subs in R, G, and B. This is what I managed then.
     Processing this target is a challenge, and I've now processed the data a number of times as new tools have become available. The newest kid on the block, for me, is the GHS transformation in PixInsight. I followed Adam Block's YouTube tutorial and, in combination with Russell Croman's Blur- and NoiseXTerminator, this is the result. The objective was to keep detail and colour right into the core, while showing as much as possible of the fainter regions. After several passes of GHS, I almost got the nebula where I wanted it, but I found the core a bit flat. There's a lot going on near the Trapezium that I wanted to show. So, with a range mask exposing just the very core, I used MMT to add a little more local contrast in that area.
     As I wrote before, the data consists of 72 subs of 30 s, with camera gain at its lowest, in order to keep the highest possible dynamic range. Technical details: SkyWatcher MN190 with ASI294MM camera at 0 gain and -10 C temperature, Optolong RGB filters. Exposure time: 30 s. Integration time: 36 minutes.
     More subs might have given me more signal in the weakest areas, but I started to see horizontal banding in the image. Probably the exposure time is a bit too short (my normal exposure time at this gain is 300 s), and the read pattern starts to show. So I'm not sure if more subs or a higher gain would have made much of a difference. Using the high conversion gain of the camera would have reduced read noise, but it also would have reduced the full well depth.
  23. As @Clarkey wrote, it's not so much the weight, it's the size. A larger telescope will catch more wind, which will affect a mount's tracking. Unless you can have the telescope in a dome, you're probably better off with the 200PDS. With cmos cameras that have "small" pixels, you'll be hard pushed to see the difference in image quality between a 200PDS and a large telescope with a 400 mm mirror, unless you have that telescope on a dry and cold mountain top. But if you want a manageable scope with a long focal length, you should probably check out RC scopes.
  24. According to a recent article, Betelgeuse is expected to go supernova within decades. We might experience it during our lifetime. See Dr Becky's latest video for details.
  25. Very nice. Is your camera astro modded (IR cut filter removed)? It seems to pick up Ha quite nicely. When you calibrate the images during stacking, try without darks, but activate "cosmetic correction" in DSS. Darks may only introduce more noise without correcting anything. With an uncooled camera, you should always make sure they don't do more harm than good. In post processing, avoid making the background very dark, i.e. avoid clipping pixels (difficult to see on my mobile device). The Milky Way is quite bright compared to the rest of the sky.