Everything posted by wimvb

  1. V3 for me. In V1 the blue stars were a bit too obvious, while in V2 the arms of the galaxy seemed pushed a bit too hard, as if a mask had been used to enhance them.
  2. Me? None yet. But if the anemometer works out well, I'll probably get something similar: https://www.amazon.co.uk/Direction-Sensor-Garden-Signal-Aluminum/dp/B07VQ7XTQH/ref=mp_s_a_1_11?keywords=Wind+direction+sensor&qid=1578772188&sr=8-11 This is the anemometer I ordered: https://www.amazon.co.uk/Bicaquu-Signal-Aluminum-Alloyed-Anemometer/dp/B07WD8LZD6/ref=mp_s_a_1_1?keywords=Bicaquu+wind+sensor&qid=1578772157&sr=8-1 I haven't found much information on these Chinese devices, so it's a bit of a gamble. The main advantage of I2C/1-Wire/RS485 devices is ease of connection and availability of drivers, although, as I said, the RS485 one is still a bit of a gamble.
  3. I'm building a weather station based on an ESP32 devkit board with built-in WiFi. The sensors at the moment are: a BME280 pressure/temperature/humidity sensor (I2C), an MLX90614 IR cloud sensor (I2C), a rain detector (on/off + analogue), and an anemometer (RS485, ordered but not yet received); see the MicroPython sketch after this list. I can recommend the ESP32: it's cheap, small, and powerful, and there are plenty of tutorials online. You can use it either with the Arduino IDE or with MicroPython. Just one warning: the cheapest boards may not be as stable as the "official" ones (Espressif, Adafruit).
  4. Gain affects the dynamic range, i.e. the span between the read noise floor and the brightest values that still show variation. Low gain = high dynamic range. I don't know how that affects total integration time. Normally, number of subs × exposure time = constant (to reach a certain SNR). But at the same time you can recover dynamic range by taking more exposures. What I don't know is whether these are two ways of saying the same thing. If they aren't, one could argue that the subs that are "used" to recover dynamic range can't also be used to decrease noise (otherwise there would be a free lunch). That would imply that if you increase the gain and also reduce the exposure time, you would need a longer total integration time than if you take longer subs at lower gain. But the benefit of higher dynamic range is why I normally use low gain, even though my camera has high read noise at low gain. All this is very murky to me, to be honest. (The sketch after this list puts some illustrative numbers on the gain/dynamic-range trade-off.)
  5. In scientific articles it is common to show any structure that is very close to the background in an inverted b/w image. My guess is that this is done either because we see faint variations better that way, or because the printing process simply isn't accurate enough to show small differences in dark tones. I sometimes invert a copy of my luminance master to see what is hidden in the background. I stretch this copy by moving the black point and white point until I get good contrast in the areas of interest; the sketch after this list shows the idea in a few lines of numpy.
  6. PNG + screenshot were fine, no need for XISF. It seems to me that you can reduce the exposure time, or the gain, if you don't want to fill your hard drive/SD card. Unlike CCD, with CMOS there's one more parameter to optimise: gain. And that parameter affects others, unfortunately. You could look around on Astrobin at what others have done with similar equipment.
  7. I agree that 100+ images will fill any hard drive in no time. Do you save locally on the Pi while imaging? Can you post a single sub with only an STF stretch applied?
  8. Adam Block has a tutorial on LRGB combination, which is not restricted to PixInsight, even though he uses it to make his point. It shows his workflow to preserve colour: https://adamblockstudios.com/articles/Fundamentals_LRGB
     Yes, stretching will reduce colour. If you stretch hard enough, eventually any colour will turn into white, because the dominant channel will max out while the other channels can still be stretched further (illustrated in a sketch after this list). In the process of producing pleasing images, we apply methods to balance colours, such as background neutralisation and (photometric) colour calibration, but then we use tools such as SCNR or Hasta La Vista Green, as well as selective colour saturation tools. The image will not be scientifically correct anymore after post processing. And the processing is not limited to colour saturation; it also includes contrast. These two aspects work together to form the final image.
     What many of us do, rendering a natural scene to create an aesthetically pleasing image that will move the viewer, is often closer to art than it is to science. For me, I admire the physics that creates the subject, and the mathematics in image processing appeals to me. That's why I like PixInsight. But the final image is no more a true rendition of reality than an impressionist's painting of a pond and trees. And while I think it is important to understand how colours are affected during the various stages of processing, I do not let that limit me on the way to a final image.
  9. Arcsinh stretch preserves colour, but I'm not sure it preserves colour ratios. (The sketch after this list shows a ratio-preserving arcsinh-style stretch next to an ordinary per-channel stretch.)
  10. Mark Shelley also developed arcsinh stretch for PS, afaik. Unless you want to complete your journey to the darkest side.
  11. For ordinary RGB imaging, where you stay safely above the read noise floor, I'd say no problem. But NB imaging is a bit trickier, since the signal levels are much lower. With an f/2.5 or f/3 lens, I would definitely try it. The theory is the same for CMOS as for CCD: you need to stay above the read noise floor, i.e. your exposures need to be dark-noise or light-pollution-noise limited (a rule-of-thumb check is sketched after this list). CMOS sensors (except the very newest generation) generally have higher dark current (noise) than CCD. This, combined with the lower read noise, is what allows the shorter exposures. But you still need the total integration time. "There still ain't no free lunch."
  12. Yes, a step-down ring for filters will do the job. You could 3D print a ring, but if its inner edge isn't smooth it will cause its own diffraction.
  13. That was just before he sold the mount. It also had some stiffness/smooth-running issues, which we resolved. Furthermore, he used an SBIG autoguider, which doesn't produce statistics. He could only see guiding issues in the star profiles of his subs.
  14. The central obstruction causes diffraction, and so may any mirror clips/focus tube intrusion. Colour is stronger in this halo because the brightness is less; at high brightness, colours are less saturated. I understand your argument. Maybe it's the blocking action of the filter that separates colours better, giving a "cleaner" image. If there is a gap between the passbands of colour filters, as there is between the green and red Astrodon filters, then obviously those colours that are blocked by both will never be registered; basically you exclude that colour from the final image. And if a blue filter extends further towards the ultraviolet than another, it will register more signal and show more "blue", because during processing the near-UV values are mapped to blue. This doesn't take into account the camera's spectral sensitivity, of course. That's my take on it, but I may be wrong. I just noticed it on images (galaxies mainly) on Astrobin, hence the:
  15. That's the challenge with CMOS (or in general, really). Faint signal seems hidden in the noise, but with many exposures it's revealed again. Jason Guenzel on Astrobin once wrote that he never shoots fewer than 80 subs per filter, and would rather have 100 or more. Because the dynamic range of CMOS cameras is low at high gain, you need many subs to recover it. 135/2.5 = 54 mm aperture; 200/4 = 50 mm aperture. But the 200 mm lens has a different pixel scale. Light gathering power on a pixel is proportional to (rD)^2, where r is pixel scale and D is aperture, or equivalently (p/F)^2, where p is pixel size and F is the F-number (since r = p/f, rD = pD/f = p/F). The faster lens is better as far as light gathering power is concerned; see the worked comparison after this list. But if the longer lens is of better quality, you will still get a better image out of it. You could use a stop-down ring on the 135 mm lens to see if that cleans up any diffraction. If the ring takes 4 mm off the aperture, you still have an f/2.7 lens, and you could go down to a 45 mm aperture and still have an f/3 lens. That is, if the diffraction around Alnitak is caused by something in or on the lens.
  16. Astro-Baby's guide is the best, imo. Start with the easiest part: adjusting the worm gear. If that helps, you're ok. While you adjust the worm gear, constantly wiggle the dec axis to find the point where it moves freely enough but doesn't have backlash. If adjusting the worm gear isn't enough, you may need this: http://www.astro-baby.com/heq5-rebuild/heq5-d1.htm I've done a strip-down and a Rowan modification on an HEQ5 for a friend, and taking this mount apart is straightforward with the help of the guide. But there's no need to strip down more than is absolutely necessary, especially if you're not familiar with this kind of work.
  17. You were faster on the keyboard. An image where a reflector was used may show more/stronger colour variation in the stars than an image taken with a good refractor, because reflectors push more light into the star halo than a well colour-corrected refractor. And for some reason it seems that images taken with Astrodon RGB filters often have more vibrant colours than images taken with other filters. Astrodon filters have a much wider rejected wavelength region between green and red than other brands, and they create much deeper reds. It's either a characteristic of the filters or of their proud owners. Or maybe it's just me.
  18. The sensor has its highest dynamic range at low gain (0 - 20); that's what I normally use with my ASI174MM-Cool, but that's for RGB imaging. Most people use a high gain (250) for NB imaging, probably to keep the exposure time short. Personally, I'd start with an exposure time that leaves Alnitak somewhat overexposed (but not too much) and the Flame/Horsehead very weak in a single sub. Cooled CMOS cameras do better with many short exposures than with fewer long ones.
  19. Nice to see you in this section again, Gina. And also nice to see different clouds than the terrestrial kind. I think that the diffraction may very well be caused by minor flaws/intrusions near the edge of the lens. How old is the lens, and especially, how clean is it on the inside? Alnitak is so bright compared to the rest in this image, that it doesn't take much to cause diffraction.
  20. I would use SCNR green (Hasta La Vista Green in PS speak) to lift the colours. Very nice set of images, btw.
  21. As @vlaiv wrote, camera sensitivity determines a basic balance in the raw data, as do sky conditions. But all of this can be, and is, adjusted during post processing. To get to image 2, I would use an LRGB workflow, even if the data is from an OSC camera. In my experience you have to push the colour quite hard to get that level of saturation.
  22. Neither was I, until I checked it. I was aware of the 0.07"/2 mm version, which I thought was odd, since 0.1" is a standard. Then I found some with 0.1"/2.54 mm pitch online, but these are interlaced at 1.27 mm. You could try to bend the outer pins a little, or use the copper side of a vero board and surface-solder them.
  23. I believe its purpose was to balance the rocket. 😉
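A minimal MicroPython sketch for the I2C side of the weather station in post 3. The pin choice (SDA on GPIO21, SCL on GPIO22) and the sensor addresses (0x76 for the BME280, 0x5A for the MLX90614) are common defaults but assumptions here; this only checks that the sensors answer and reads the MLX90614 sky temperature, it is not the full station.

    # MicroPython on an ESP32 devkit; pins and addresses are assumptions
    from machine import I2C, Pin
    import struct

    i2c = I2C(0, scl=Pin(22), sda=Pin(21), freq=100000)
    print("I2C devices found:", [hex(a) for a in i2c.scan()])

    # BME280: register 0xD0 holds the chip ID, 0x60 for a real BME280
    chip_id = i2c.readfrom_mem(0x76, 0xD0, 1)[0]
    print("BME280 chip id:", hex(chip_id))

    # MLX90614: RAM address 0x07 holds the object (sky) temperature
    # as a little-endian word in units of 0.02 K
    raw = struct.unpack("<H", i2c.readfrom_mem(0x5A, 0x07, 2))[0]
    print("Sky temperature: %.1f C" % (raw * 0.02 - 273.15))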
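To put rough numbers on the gain/dynamic-range trade-off from post 4: dynamic range in stops is log2(full well / read noise), and stacking N subs lowers the effective noise floor by sqrt(N), which adds 0.5*log2(N) stops back. The sensor numbers below are made up for illustration, not taken from any datasheet.

    import math

    full_well = 32000           # e-, assumed full well at low gain
    rn_low, rn_high = 3.5, 1.5  # e-, assumed read noise at low/high gain
    gain_step = 8               # high gain divides effective full well by this

    dr_low = math.log2(full_well / rn_low)
    dr_high = math.log2(full_well / gain_step / rn_high)
    print("low gain:  %.1f stops" % dr_low)   # ~13.2
    print("high gain: %.1f stops" % dr_high)  # ~11.4

    # stacking n subs adds 0.5*log2(n) stops of dynamic range back
    n = 16
    print("high gain + %d subs: %.1f stops" % (n, dr_high + 0.5 * math.log2(n)))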
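The background-inspection trick from post 5 in a few lines of numpy. The black and white points are whatever gives good contrast in the inverted copy; the numbers here are just a toy example.

    import numpy as np

    def inspect_background(lum, black, white):
        """Invert a [0,1] luminance image, then stretch between the chosen
        black and white points to boost contrast in the faint structure."""
        inv = 1.0 - lum
        return np.clip((inv - black) / (white - black), 0.0, 1.0)

    # toy data: a weak gradient sitting just above a 0.05 background
    lum = 0.05 + 0.01 * np.linspace(0.0, 1.0, 5)
    print(inspect_background(lum, black=0.94, white=0.96))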
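On posts 8 and 9: a tiny numpy demonstration of why a hard per-channel stretch bleaches colour while an arcsinh-style stretch does not. The per-channel stretch pushes the channel ratios towards 1 (white); the arcsinh-style stretch multiplies all channels by one luminance-based factor, so the ratios survive exactly. The beta value and the mean-as-luminance choice are assumptions, not Mark Shelley's exact formulation.

    import numpy as np

    rgb = np.array([0.04, 0.02, 0.01])   # a reddish star, linear data, R:G:B = 4:2:1

    # ordinary per-channel stretch: channel ratios drift towards 1 (white)
    per_channel = rgb ** 0.3
    print(per_channel / per_channel[0])  # ~[1.0, 0.81, 0.66]

    # arcsinh-style stretch: one factor from luminance, applied to all channels
    beta = 100.0
    lum = rgb.mean()
    factor = np.arcsinh(beta * lum) / (lum * np.arcsinh(beta))
    stretched = rgb * factor
    print(stretched / stretched[0])      # still [1.0, 0.5, 0.25]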
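A rule-of-thumb check for the "stay above the read noise floor" condition in post 11: a sub counts as background-noise limited when the accumulated sky + dark signal swamps the squared read noise. The factor of 10 is a commonly quoted margin, but it is an assumption; people use anything from roughly 3 to 10.

    def noise_limited(sky_e_per_s, dark_e_per_s, exposure_s, read_noise_e, factor=10):
        """True if shot noise from sky + dark current swamps the read noise."""
        background_e = (sky_e_per_s + dark_e_per_s) * exposure_s
        return background_e >= factor * read_noise_e ** 2

    # narrowband: very low sky signal, so short subs fail the test
    print(noise_limited(0.05, 0.01, 60, 3.5))    # False: 3.6 e- vs 122.5 e-
    print(noise_limited(0.05, 0.01, 3000, 3.5))  # True: 180 e- vs 122.5 e-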
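And the worked comparison behind post 15, using the (p/F)^2 rule. The 5.86 µm pixel size is an assumption (only the ratio between the two lenses matters).

    # light per pixel ~ (p/F)^2; p = pixel size, F = F-number
    p = 5.86  # um, assumed pixel size

    for name, focal, fnum in [("135 mm f/2.5", 135.0, 2.5), ("200 mm f/4", 200.0, 4.0)]:
        aperture = focal / fnum
        light = (p / fnum) ** 2  # proportional to photons per pixel per second
        print("%s: %.0f mm aperture, relative light per pixel %.1f" % (name, aperture, light))

    # the faster lens wins by (4 / 2.5)^2 = 2.56x per pixel,
    # even though its aperture (54 mm vs 50 mm) is only slightly larger
    print("ratio: %.2fx" % ((4.0 / 2.5) ** 2))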