Everything posted by wimvb

  1. Not in a reflector. The difference in focal point is a result of dispersion, i.e. the difference in refractive index for different wavelengths. But refractive index isn't relevant for mirrors, because the light never goes through glass. If you have a corrector, there may be minimal chromatic aberration. Most CA is generated in the objective of a refractor. (A small dispersion example is sketched after this list.)
  2. With a reflector, you don't need to refocus between filters, and with an expensive apo refractor you paid for the convenience of not having to. With an ED doublet you probably have to refocus. Some telescope control software has built-in focus offsets for each filter, so as long as the focuser is motorised, it won't be much hassle. I tend to shoot about 40 minutes per filter for RGB, and longer for L, then repeat that sequence as long as the weather allows. I aim for at least 100 - 150 two-minute exposures for L, and 30 - 40 four-minute exposures per RGB filter. CMOS cameras thrive on large numbers of subs. (A quick tally of that integration time is sketched after this list.)
  3. Put all the calibrated subs in Blink and look at how the stars move from the first sub to the last. Is there a rotational movement? If so, then your dithering isn't enough. Star movement should be random if you dither. What is the result of star alignment? If you see an increasing rotation across subs, you have field rotation due to polar misalignment. Again, dithering should cause random movement (or a spiralling movement, depending on your dither settings). To remove bad pixels if you can't use darks, apply cosmetic correction during the calibration process. Set a fairly strong rejection for bright pixels (see the hot-pixel sketch after this list).
  4. The ASI1600 is very popular, so it's difficult to go wrong there. You can use the quoted power supply. Just invest in a small plastic box with a lid. Cut two slots, one on either side, for the cables. Then put the extension lead's connector inside the box and the lid on when outside. For a guide, you can get "Making Every Photon Count" by Steve Richards, available from FLO. Good luck & have fun
  5. Just confused, maybe. Adam Block has a channel on YouTube worth subscribing to. He also has a website with lots of material. As a professional, he charges for the best parts of that material, but there are plenty of free tutorials: https://adamblockstudios.com/ http://caelumobservatory.com/ Andreo has this site: http://www.deepskycolors.com/rba_home.html It also has interesting tutorials with more advanced content.
  6. His full name is Rogelio Bernal Andreo. We mean the same person. 😀
  7. I would take Andreo's (or Adam Block's) advice over others' any time. Both are established astrophotographers who do this stuff for a living, and both have extensive experience of PixInsight and Photoshop.
  8. Using larger samples is always preferable. The default of 5 pixels only gives you 25 pixels in each sample; a 15-pixel size gives you 225 pixels, already a huge improvement. But always look at the samples. Black pixels in a sample mean those pixels are not taken into account for modelling; that's where the advice "don't use samples over stars" comes from. With larger samples, you can afford to lose pixels to stars in your samples. A few warnings re this video. The guy uses the same settings regardless of image; always let the image dictate which settings to use. Keep the model relaxation parameters as low as possible, but examine the samples. Avoid samples that have a colour bias (in colour images). Don't just blindly place samples; put them where they are needed. In the video, samples are placed over the galaxy at top right, which is a sure way to ask for problems later on. To remove vignetting, it's better to place samples in each corner and a bit in along the diagonals (8 samples), plus a sample on each edge (+4 samples), and use division as the correction method. To remove linear gradients, place a few samples along the path of the gradient, always avoiding areas of nebulosity (which he doesn't in the HH nebula), and use subtraction. If you have a combination, use the symmetry tools and set fewer samples. NEVER discard the background model without examining it; not examining the model is plain bad practice. (The division-versus-subtraction idea is sketched after this list.)
  9. Selecting areas in PS is equivalent to using a mask in PI, in this case a lightness mask, probably contrast enhanced. Reducing colour saturation is the same in PI. Reducing colour noise means using any of the noise reduction tools in PI in Lab mode on chrominance (sketched after this list). I tend to use colour noise reduction early on in a process, and colour desaturation at the very end.
  10. For DSLR images, there's RawTherapee. You can even do flat fielding and also dark subtraction, I believe. It's free and cross-platform.
  11. I just have to ask, how do you get those long and narrow star spikes? I never managed that with my 150PDS. Great image.
  12. Isn't that what Rodd did? You just have to provide your own set of sliders. And the "moment", well time is relative anyway. 😁
  13. You may have the tolerance set too tight. Set it to about 5 - 8%; if you go for an average level of 20000, set the tolerance between 1000 and 1500. That way Ekos will dial in one exposure time and shoot the whole sequence, which speeds up the process. I set my average to 27000 and a tolerance of about 5%. I then determine a ballpark figure for the exposure time for each filter and load that into a sequence. From then on it's fire and forget. (The tolerance arithmetic is sketched after this list.)
  14. You can safely use DBE in the non-linear stage. Just check the normalize box in the corrections section. This keeps the median value of each channel, so you don't need to redo colour calibration. A neutral background can still appear to have a colour cast. This is a result of one channel being noisier than the others: the histogram peaks of the channels coincide (neutrality), but the noise increases the number of bright pixels in one channel (wider histogram), making it look like you have a colour cast (illustrated after this list). The proper remedy is (chroma) noise reduction. Desaturation at the end of the workflow also works.
  15. To me, this image isn't about getting the dust to stand out as much as possible, or trying to make it look like mainstream images from well known astrophotographers. When I see this object, I see a bright light through a tunnel of dark, looming dust. There should definitely be a separation between that dust and the background, but beyond that, this object lends itself very well to more artistic interpretations. I'm just not there yet. I like the interpretations from Block and Göran, but at the same time I want to take this data in a different direction. It can be an example of the freedom we astrophotographers have in processing.
  16. You probably went a little harder on the HDRMT. I used a toned-down lightness mask, with the white point dialled back to below 0.5. This keeps HDRMT under control.
  17. After a week of grading papers and a STEM competition, I finally got the time to complete this image. I think I kept it close to your interpretation, Rodd. Not intentionally. I wanted the reflection nebula to be the attention grabber. I reduced the smaller stars a little, but not too much. After all, it's part of the Milky Way.
  18. For DSLRs, 12 - 15 pixels is recommended (based on a talk held by Tony Hallas). The problem with large steps is that they can start to affect stacking edges. If you do 50+ subs, those 35 random pixels can add up to quite a lot, and you need to crop a wide edge after stacking. But as you say, you need to make sure that there is a clear difference between subs.
  19. No, it doesn't. This is what I got with the default settings of PCC, with a focal length of 1000 mm, a pixel size of 4.3 µm, and a background preview as the background reference (the resulting image scale is worked out after this list). Arcsinh stretch and lifting the outer regions of the galaxy a bit. I see now that I boosted the colour in those regions a bit too much. Processed in PixInsight 1.8.5. Btw, I would have framed the image a bit differently to keep that small galaxy at the bottom in the frame.
  20. 51 according to the FITS header and the image identifier. I found a fl of 1010 mm, not the 2000 mm of the original post.
  21. I just upload it as an ordinary image. Because no web browser supports TIFF or XISF, it will show as an icon.
  22. Very nice image for such a short time. A few pointers. As @MarkAR noted, longer and more exposures will reveal more detail and bring the noise down. To get repeatable framing, align the long side of the camera sensor with the RA axis. Take a 30 s exposure, and about 5 seconds in, start slewing RA+ at 1x sidereal. This will cause star trails along the RA axis. Then rotate the camera and repeat until the streaks are parallel to the long side of the image (a quick way to estimate the remaining angle is sketched after this list). This will give more consistent framing from night to night. If you have the opportunity, keep the camera attached to the scope when shooting several nights in a row. To get round stars across the entire frame, you will need to invest in a field flattener.
  23. What are your background levels? You may need to increase the allowed level. Alternatively, load up the tif here (non drizzled or it will be too large, or just a preview of the original) so we can have a look.
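Sketch for post 1: a rough illustration of why focus shifts with wavelength in a lens but not in a mirror, assuming a generic crown glass and a simple thin lens. The function names, Cauchy coefficients, and lens radii below are placeholders, not values for any particular telescope.

```python
def n_cauchy(wavelength_um, A=1.5046, B=0.00420):
    """Refractive index from a two-term Cauchy equation (generic crown glass)."""
    return A + B / wavelength_um ** 2

def thin_lens_focal_length(n, R1_mm=300.0, R2_mm=-300.0):
    """Lensmaker's equation for a thin lens: 1/f = (n - 1) * (1/R1 - 1/R2)."""
    return 1.0 / ((n - 1.0) * (1.0 / R1_mm - 1.0 / R2_mm))

for wl in (0.486, 0.546, 0.656):  # blue, green, red wavelengths in micrometres
    print(f"{wl:.3f} um: f = {thin_lens_focal_length(n_cauchy(wl)):.1f} mm")
# The focal length drifts by a few millimetres across the visible band.
# A mirror focuses by reflection alone, so its focal length is the same at
# every wavelength, which is why a reflector shows no chromatic aberration.
```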
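Sketch for post 2: the integration-time arithmetic behind the sub counts quoted in that post (upper end of the ranges), just to show how the totals add up.

```python
# Sub counts and exposure lengths taken from the post.
sub_plan = {
    "L": (150, 2),   # (number of subs, minutes per sub)
    "R": (40, 4),
    "G": (40, 4),
    "B": (40, 4),
}

total_minutes = 0
for filt, (count, minutes) in sub_plan.items():
    filt_minutes = count * minutes
    total_minutes += filt_minutes
    print(f"{filt}: {count} x {minutes} min = {filt_minutes / 60:.1f} h")
print(f"Total integration: {total_minutes / 60:.1f} h")
```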
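Sketch for post 3: the idea behind cosmetic correction without darks is to reject pixels that sit far above their local neighbourhood. This is a minimal NumPy/SciPy stand-in for that idea, not PixInsight's CosmeticCorrection implementation; the function name and threshold are my own.

```python
import numpy as np
from scipy.ndimage import median_filter

def reject_hot_pixels(sub, sigma=3.0):
    """Replace bright outliers with the local median (dark-less hot-pixel rejection)."""
    local_median = median_filter(sub, size=5)
    residual = sub - local_median
    hot = residual > sigma * np.std(residual)   # only reject bright pixels
    return np.where(hot, local_median, sub)

# Synthetic example: a smooth frame with one hot pixel.
rng = np.random.default_rng(0)
frame = rng.normal(1000.0, 50.0, size=(200, 200))
frame[10, 10] = 60000.0
cleaned = reject_hot_pixels(frame)
print(frame[10, 10], "->", round(cleaned[10, 10], 1))
```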
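Sketch for post 8: how a fitted background model is typically applied. Division suits multiplicative errors such as vignetting; subtraction suits additive light-pollution gradients. A sketch of the arithmetic only, not DBE itself.

```python
import numpy as np

def apply_background_model(image, model, method="subtraction"):
    """Correct an image with a fitted background model."""
    pedestal = np.median(model)
    if method == "division":
        # Vignetting scales the signal, so divide by the normalised model.
        return image / (model / pedestal)
    # A light-pollution gradient adds signal, so subtract the model
    # and restore the median pedestal.
    return image - model + pedestal
```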
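Sketch for post 9: chrominance-only noise reduction in Lab space, the same idea as working on chrominance in PI, shown here with scikit-image rather than any PixInsight tool.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb
from skimage.filters import gaussian

def chroma_noise_reduction(rgb, sigma=2.0):
    """Smooth only the a* and b* (chrominance) channels; lightness is untouched."""
    lab = rgb2lab(rgb)                    # expects float RGB in [0, 1]
    lab[..., 1] = gaussian(lab[..., 1], sigma=sigma)
    lab[..., 2] = gaussian(lab[..., 2], sigma=sigma)
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```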
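Sketch for post 13: the tolerance arithmetic written out. This is only the check on the frame's average level, not actual Ekos code.

```python
def within_tolerance(measured_adu, target_adu=20000, tolerance=0.05):
    """True if the frame's mean level is close enough to keep the same exposure time."""
    return abs(measured_adu - target_adu) <= tolerance * target_adu

print(within_tolerance(20800))   # True: 800 ADU off, inside the 1000 ADU band
print(within_tolerance(22000))   # False: 2000 ADU off, exposure would be re-tuned
```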
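Sketch for post 14: synthetic data showing how a background can be neutral (channel medians agree) and still look colour-cast because one channel's histogram is wider.

```python
import numpy as np

rng = np.random.default_rng(1)
# Same background level in all channels, but a noisier blue channel.
channels = {
    "R": rng.normal(0.10, 0.005, 1_000_000),
    "G": rng.normal(0.10, 0.005, 1_000_000),
    "B": rng.normal(0.10, 0.015, 1_000_000),
}
for name, ch in channels.items():
    print(f"{name}: median {np.median(ch):.4f}, std {np.std(ch):.4f}")
# The medians (histogram peaks) coincide, yet blue has three times the spread,
# so its brightest background pixels stand out and read as a colour cast.
```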
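Sketch for post 19: the image scale that follows from the focal length and pixel size quoted for PCC, using the standard 206.265 arcsecond-per-radian shortcut.

```python
def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
    """Image scale in arcseconds per pixel: 206.265 * pixel size / focal length."""
    return 206.265 * pixel_size_um / focal_length_mm

print(round(pixel_scale_arcsec(4.3, 1000), 2))   # ~0.89 "/pixel
```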
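Sketch for post 22: once you have a star trail in a test exposure, the remaining camera rotation is roughly the angle of the trail relative to the image's long side. The endpoint coordinates below are hypothetical measurements, not values from any real frame.

```python
import math

def trail_angle_deg(x0, y0, x1, y1):
    """Angle (degrees) between a star trail and the sensor's long (x) axis."""
    return math.degrees(math.atan2(y1 - y0, x1 - x0))

# Example: a trail measured from (100, 200) to (900, 212) on the image.
print(round(trail_angle_deg(100, 200, 900, 212), 2))   # ~0.86 deg; rotate and repeat
```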