


wimvb

Advanced Members
  • Content count

    2,734
  • Joined

  • Last visited

Community Reputation

1,594 Excellent

4 Followers

About wimvb

  • Rank
    Brown Dwarf
  • Birthday 29/07/63

Contact Methods

  • Website URL
    http://wimvberlo.blogspot.se/

Profile Information

  • Gender
    Male
  • Location
    Sweden (59.47° North)

Recent Profile Visitors

2,173 profile views
  1. It seems like the Trapezium is coming along nicely.
  2. Indeed: walking noise. Careful calibration can reduce it, but the best remedy is dithering. As for these images, optimise your dark frames and use cosmetic correction (available in both DSS and PI) to remove outlier pixels. Then use aggressive pixel rejection during stacking.
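To make the pixel-rejection advice above concrete, here is a minimal numpy sketch of sigma-clipped stacking. It is an illustration of the general technique, not the exact algorithm DSS or PI uses; the function name and the kappa value are my own choices.

```python
import numpy as np

def sigma_clip_stack(subs, kappa=2.5):
    """Mean-stack a list of subs, rejecting any pixel more than
    kappa standard deviations away from the per-pixel median.
    (Illustrative only; DSS/PI use more refined rejection schemes.)"""
    cube = np.stack(subs).astype(float)        # shape: (n_subs, H, W)
    med = np.median(cube, axis=0)
    std = np.std(cube, axis=0)
    keep = np.abs(cube - med) <= kappa * std   # True = pixel survives
    # Average only the surviving pixels at each position
    return np.sum(cube * keep, axis=0) / np.maximum(keep.sum(axis=0), 1)

# A hot pixel appearing in one (undithered) sub gets rejected:
subs = [np.ones((4, 4)) for _ in range(8)]
subs[3][2, 2] = 100.0                          # outlier in a single sub
stacked = sigma_clip_stack(subs)
```

With dithering, such outliers land on different sky positions in each sub, so rejection removes them cleanly instead of leaving a walking-noise trail.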
  3. You could try to get the same type of focuser you already have (2nd hand from someone who has upgraded from a SW focuser?), and use the drawtube of that for your ZWO, and the shortened tube for your dslr.
  4. NGC 4051 is a spiral Seyfert galaxy in the constellation Ursa Major, about 48 Mly from Earth. It covers about 5 x 4 arcminutes of the night sky. The core of this galaxy contains a supermassive black hole. Data from the Liverpool Telescope, La Palma (2 m aperture and 0.28 "/pixel resolution).
  5. In narrowband imaging, you can't maintain colour balance. Colour balance means that you define a neutral background (no particular colour, or the same amount of red, green and blue) and a white point (ideally a star that is exactly white, with red = green = blue = 1). But in narrowband, you don't have red, green and blue. Instead you only have the colour(s) from the narrowband filter(s). For Ha this is deep red, so the 'natural' colour of an Ha image would be deep red. On the other hand, hydrogen (the 'H' in 'Ha') also emits weakly in the blue/cyan, which is why with a dslr most Ha targets can look purple. This blue/cyan wavelength is named Hb. Some astrophotographers map Ha to red and add a little of the Ha signal to the blue channel, just to mimic Hb. The origin of narrowband imaging is scientific, and different colours are used to indicate the presence of hydrogen (Ha), oxygen (Oiii) and sulfur (Sii). One such combination is the Hubble palette. But neither of these combinations even resembles a 'natural' colour. That's why they are called 'false colour images', and that's also why colour balance isn't possible in narrowband imaging. In your image, you can make the colours pretty much the way you like them. But to get a conventional colour scheme, or palette, you should google 'Hubble palette'. If natural colour is what you're after, you need to mix the narrowband images with RGB images. These will have a natural colour, and you use the narrowband images to enhance them, indicating the presence of H, O and S in a 'natural' looking image. My advice: google for narrowband images of the nebula, find out which colour scheme you like best, and create an image in that colour scheme. This will get you started, and if you want another colour scheme, you can always create a new image from the masters. 
BTW, if you plan to do most of your processing in PixInsight, you should definitely get the book 'Inside PixInsight' from Warren Keller, and also have a look at the (video) tutorials from Kayron Mercieca (lightvortexastronomy.com) and Gerald Wechselberger (http://www.werbeagentur.org/oldwexi/PixInsight/PixInsight.html)
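The Ha-to-red mapping described above (with a touch of Ha copied into blue to mimic Hb) can be sketched in a few lines of numpy. The function name and the 0.15 fraction are illustrative assumptions, not a calibrated Ha/Hb ratio.

```python
import numpy as np

def ha_to_rgb(ha, hb_fraction=0.15):
    """Render a mono Ha master as an RGB image: Ha drives the red
    channel, and a small fraction is copied into blue to mimic the
    weak Hb emission (the purple cast dslrs show on Ha targets).
    hb_fraction is an illustrative value, not a physical ratio."""
    rgb = np.zeros(ha.shape + (3,))
    rgb[..., 0] = ha                  # red  = Ha
    rgb[..., 2] = hb_fraction * ha    # blue = a touch of Ha as fake Hb
    return np.clip(rgb, 0.0, 1.0)

ha = np.linspace(0, 1, 16).reshape(4, 4)   # a synthetic Ha master
rgb = ha_to_rgb(ha)
```

Setting hb_fraction to zero gives the pure deep-red rendering; raising it pushes the image towards the purple look of a dslr Ha target.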
  6. You probably already know this, but here goes anyway: masters are always gray-scale images. If you shoot mono with filters, your images show how much light intensity is collected on the camera sensor at a certain wavelength. The wavelength (= colour) information is only provided by the filter, not by the mono camera. That's why it is called monochrome: one colour. Your Ha master, Oiii master and Sii master are all monochrome, or black and white, images. The same goes if you use RGB filters, because you collect the intensity from only one colour (R, G or B) at a time. If you take a colour image, you can extract the different channels that make up the image, and end up with three slightly different gray-scale images. It is when you pop these three images back into the ChannelCombination tool that it converts the monochrome images into an RGB image, by using one image for the red information, one for the green and one for the blue. If you want to do more elaborate combinations of mono images, like the Hubble palette, or HSO, or HOO, or whatever pleases you, you can do so with PixelMath. BUT you need a combination expression for EACH COLOUR CHANNEL, otherwise the final image will end up the same sort (gray-scale) as the source images. You need to uncheck the tick box 'Use a single RGB/K expression', and to get a colour image out of it, you need to set the Color space to 'RGB color'. Here's a screenshot showing the settings in PixelMath: the three images going into the expression (here called Ha, Sii, Oiii) can be combined in any way you like, but for a colour image there has to be an expression in at least two of the text boxes. Otherwise your image will be a monochrome image in one colour (all red, most likely). In this example, red will be Ha, green will be the average of Ha and Sii, and blue will be the median of Sii and Oiii. 
This is probably going to be a funny looking image, but it shows what freedom you have in combining. Also note that the Color space is RGB, which means that your resulting image will be a colour image, and not a mono image.
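The three per-channel PixelMath expressions from the example above can be mimicked in numpy, which may help show why one expression per channel is needed: each expression fills exactly one plane of the RGB result. The function name and test values are my own.

```python
import numpy as np

def combine_sho(ha, sii, oiii):
    """numpy equivalent of the three per-channel PixelMath expressions
    in the post: R = Ha, G = avg(Ha, Sii), B = med(Sii, Oiii).
    Each input is a mono (gray-scale) master; the output is RGB."""
    rgb = np.empty(ha.shape + (3,))
    rgb[..., 0] = ha                                         # R expression
    rgb[..., 1] = (ha + sii) / 2.0                           # G expression
    rgb[..., 2] = np.median(np.stack([sii, oiii]), axis=0)   # B expression
    return rgb

# Flat synthetic masters, just to exercise the combination:
ha   = np.full((2, 2), 0.8)
sii  = np.full((2, 2), 0.4)
oiii = np.full((2, 2), 0.2)
rgb = combine_sho(ha, sii, oiii)
```

If you filled only the red plane and left the others at zero, you would get exactly the all-red monochrome result the post warns about.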
  7. The space shuttle had an odd number of control computers (3 I think), just to avoid the risk of a 'hung jury'. Unfortunately, if you use quad core computers, you can never reach an odd number. But you could have twelve jurors. And now the important question: should I add a big to this? Have you checked indilib.org? The software is used for robotic observatories. It does a graceful exit (= park scope and close roof) automatically when connection to the client is lost.
  8. In a horizontal orientation, the roof's motor only has to overcome friction. If a roof closes by gravity, the motors will have to overcome friction plus gravity to open it. This generally costs more, since you need a stronger motor. The usual solution is a UPS, or a large enough battery to park the scope and close the roof. Probably a cloud sensor and/or weather station, so that you can monitor the situation. The budget solution is a good neighbour at the remote site, who can help in case of an emergency.
  9. For this image, you probably have saved the master Ha and master Sii images. I only use PixInsight, so here's how it's done there:
1. Load the Ha master and the Sii master into PixInsight.
2. Open the StarAlignment tool (under Process > ImageRegistration).
3. Load the Ha master as reference view, in the top section of StarAlignment.
4. Select the Sii master by clicking on it.
5. Apply StarAlignment by clicking on the square icon in the lower left corner of the tool. When the Sii is aligned, a new image is created called '..._registered' (where '...' is whatever the Sii master image's name is).
6. You can now close the original Sii master image, and work with the registered version as you did before, i.e. combine it with the original Ha image.
Here's what the tool looked like when I realigned the channels in your image. Btw, I would avoid working on astro images when they are in jpeg format. This format is just too crude for the kind of processing you would normally do. Of course, when you only crop an image or add a signature, jpeg is ok. But for all other work, I would use fits, tiff, xisf, or a similar file format.
  10. That fine academic institute known as Google has free online courses. Many are hosted by the illustrious university of Youtube. Seriously though, when you decide on a capable software package (read: PixInsight), you'll find plenty of tutorials on the internet, and fine people here directing you to them. (Oh, and yes, that goes for PhotoShop as well.) Have a look around at what's available, compare the options, and try before you buy. As with hardware, you have to figure out what works best for you. Any decent processing software can produce masterpieces or garbage. Comparing finished images tells you more about the person doing the processing than about the processing software. But also as with hardware: be prepared to invest both money and time.
  11. If you upload the image to astrometry.net, my guess is you're very likely to find a number of faint fuzzies. The universe is so uniform that it's hard to miss them. The light is just 'drownded out by all them stars.' Edit: but not by image solving in PixInsight. That solver only returns stars, no galaxies.
  12. That is basically correct. The only thing you missed is that the Ha and Sii are not aligned to each other. You can either align all subs with reference to one sub, or, after stacking Ha and Sii individually, use the Ha (or Sii) master as a reference for aligning the other. When I process data from the Liverpool Telescope, I use the former method: I align all subs in one go, with a certain sub as a reference. After that I combine (integrate or stack) the subs from each filter separately. Then I use ChannelCombination in PI to create a colour image from the different filters. Btw, here's your image (jpeg) restacked; notice the edge on the left and top. I separated the channels (r, g, b) and used the red channel as a reference. Then I registered the green and blue channels, and recombined them. Because in your original image the red channel was a little misaligned towards the lower right, the blue and green were shifted in this direction during my restacking, causing the red edge left and top.
  13. Lovely.
  14. Just before knocking off? Lovely picture. Sometimes the seemingly simple things are the hardest.
  15. AP

    The link shows how I've attached my guider (st80+zwo asi120) to my sw 150pds. The two clamps attach to the rings of the 150pds. I've also replaced the small mounting bar of the st80 with a vixen style dovetail bar. Works great.