Everything posted by vlaiv

  1. My guess would be that it is due to synthetic flats / software background removal.
  2. IMerge for planetary (already processed images); ImageJ with its various stitching plugins for DSO data and FITS files prior to processing.
  3. Precise focus is really not important for guiding. Most guide scopes are fast achromats and stars will be bloated anyway. Some defocus even helps with seeing effects. The only time precise focus is beneficial for guiding is when guiding on a single faint star, where good focus provides good SNR. With modern PHD2 and multi-star guiding this is really not an issue, and it is enough to set decent focus at the beginning of the session. Any focus drift will not have an impact on guiding.
  4. If you really want to examine the brightness of individual stars, you need raw data that is still linear. As soon as you stretch the data, you can't reliably tell whether one star is brighter than the other, because signal is additive and stretch is non-linear. Data is still stretched if you just converted it to .jpeg from raw in your favorite program, because .jpeg uses the sRGB color space which has a gamma of 2.2 (a non-linear transform). Post two .cr2 files - one from the beginning and one from the end of the session - for inspection of actual pixel values (both background and star), so we can see if there is really dimming of the stars and objects.

     There is one special case where you can have both darker background and darker stars (if that is indeed the case) - dew on the primary mirror or front lens. It builds up and blocks the light and everything turns dark. However, it is not a gradual process - it usually happens much faster than the duration of the session. If you happen to have a sudden drop of brightness over say 5-10 subs, then it can be down to dew. In the end, you'll essentially end up with completely dark subs as dew completely blocks the light.

     Sensor temperature dictates how much dark current there will be. In general, if you take darks at the exact same temperature as your lights, you should be fine, but in practice that never works because temperature is not constant over the night. There is often a drop of more than a few degrees C. People tend to shoot their darks at the end of the session, but that means that lights from the beginning won't be properly matched. Dedicated astro cameras with set-point cooling deal with this easily, as you can always set the exact temperature of your sensor. DSLR cameras (modern ones anyway) deal with this in a different way. When you take your light exposure, some of the pixels surrounding the sensor (they are part of the sensor but not exposed to the light because they have a physical mask) record dark current. Their value is then subtracted from the final image. This does not remove hot pixels, but there are other ways to deal with hot pixels. This also means that you don't need dark frames, and sometimes you don't even need bias frames.

     Bias is needed with a DSLR only if there is a distinct bias signal. You can check this by taking two sets of bias subs - say 50-100 per batch - stacking each set and inspecting the results. If, stretched, they show the same pattern, then subtract one from the other. If in the resulting image you find that the pattern is gone and you get smooth noise, it's worth subtracting bias. If you find that your original bias masters don't have any sort of repeating signal (this will usually be some sort of vertical or horizontal lines or similar), then you don't really need to remove bias. Here is an example of bias signal from a Canon 750D (you can see horizontal and vertical lines as well as darker and lighter regions).

     Back to the hot pixels. There are several ways to deal with them:
     1. Dither and use sigma reject.
     2. Use a hot pixel map created from a master dark.
     3. Use a cosmetic correction algorithm (it identifies single hot/cold pixels and removes them).
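
     A minimal numpy sketch of that bias check, under the assumption that the two batches of bias subs can be loaded into arrays (the random data below is just a stand-in for real frames):

```python
import numpy as np
# from astropy.io import fits   # one way to load real frames, if you use FITS

def master_bias(frames):
    # frames: array of shape (n_subs, height, width); simple average stack
    return frames.mean(axis=0)

# Two batches of 50-100 bias subs each; placeholder random data stands in
# for the real frames here (hypothetical - substitute your own loading code)
batch_a = np.random.normal(500, 8, size=(50, 100, 100))
batch_b = np.random.normal(500, 8, size=(50, 100, 100))

master_a = master_bias(batch_a)
master_b = master_bias(batch_b)

# A stable pattern common to both masters cancels in the difference:
# pattern visible in each master but gone here -> repeatable bias signal, worth subtracting.
# No pattern visible in the masters in the first place -> no need to remove bias.
residual = master_a - master_b
print("master std:", master_a.std(), "residual std:", residual.std())
```
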
  5. This is normal behavior. Most of the "brightness" of a sub comes from light pollution. As the night progresses, traffic dies down, people go to sleep and turn off their lights - in general, light pollution drops a bit.

     Another factor - but you'll be the judge of that - is that the target moves across the sky, and you might start your session when the target is lower in altitude as it moves towards the zenith. If you are avoiding a meridian flip, you are likely to shoot your target on one side of the meridian only. It can be either climbing towards the zenith or setting during the session. If it is "rising", the background will get darker, because the zenith is usually the place with the least light pollution (but this is not always the case and depends on your local LP sources).

     When the background is getting brighter as the night progresses, that is usually a case of issues with transparency. It can be that the target is setting - and as it moves, it goes into an area closer to the horizon that is brighter due to light pollution - but if that is not the case, then it is local transparency that is getting worse as the night progresses. For the same level of light pollution, how bright the sky will be depends on how thick the atmosphere is. This is one of the reasons why there is more glow near the horizon, but also the reason why we can "play light sabres" with hand-held torches in the fog. With a thick atmosphere, light scatters around much more. If there are thin high-altitude clouds (ones that you can't spot at night but that during the day look like haze high above) or there is some general haze in the air, it will brighten your background.

     An image will look good if there is sufficient SNR - signal to noise ratio. As the name says, SNR depends on two things - signal and noise (and represents their ratio). If on night two you had poor transparency, it will affect SNR greatly. It affects both signal - by blocking light from the target - and noise - by scattering light pollution and making the background brighter. More background signal means more associated noise. It's all about signal to noise ratio.
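
     As a rough illustration of that last point, here is a minimal sketch (made-up electron counts, shot-noise-limited case with a small assumed read noise):

```python
import numpy as np

def snr(target_e, background_e, read_noise_e=2.0):
    # Shot noise from target and sky background plus read noise, combined in quadrature
    noise = np.sqrt(target_e + background_e + read_noise_e**2)
    return target_e / noise

# Good transparency: full target signal, modest sky background
print(snr(50, 100))   # ~4.03
# Hazy night: some target light blocked, more light pollution scattered into the frame
print(snr(40, 200))   # ~2.56
```
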
  6. Yes, stretching tends to desaturate colors. It is a non-linear transformation that can be approximated with a power law, for example. Let's take a very nice color that has a 4:2:1 ratio of R:G:B components, but make it very dark by scaling it to be say 0.04, 0.02 and 0.01 in the 0-1 range. The important thing to note is that red is 4 times as strong as blue and green is twice as strong as blue (and consequently, red is twice as strong as green).

     Now we can "stretch" our data by applying a power law - say raise the values to the power of 0.1. This is the same as curves that look like this: In any case, the resulting triplet will be 0.04^0.1, 0.02^0.1 and 0.01^0.1, or when we calculate the values: ~0.72478, ~0.67624 and ~0.63096. Not only are the values no longer in a 4:2:1 ratio, they are quite close in value, all being in the range 0.63 - 0.73. If you take such a color, it will be very grey and washed out - desaturated - as all three components are now very close in value.

     In fact, if we look at actual colors (say 0.8, 0.4 and 0.2 versus the above triplet of 0.72, 0.68 and 0.63), here is what we get: left is the 4:2:1 RGB ratio and right is after stretching, or a 0.72 : 0.68 : 0.63 ratio (we can see that it is still sort of the same color - but greatly desaturated). If you want to preserve color, you have to stretch luminance only and apply the calculated RGB ratios for each pixel to the final stretched luminance.
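
     A minimal numpy sketch of the numbers above, plus the luminance-only approach from the last sentence (luminance is assumed here to be a plain sum of R, G and B, just for illustration):

```python
import numpy as np

rgb = np.array([0.04, 0.02, 0.01])      # 4:2:1 ratio, very dark pixel

# Stretch each channel directly - the ratios collapse and the color washes out
stretched = rgb ** 0.1
print(stretched)                        # ~[0.725, 0.676, 0.631]
print(stretched / stretched[2])         # ~[1.15, 1.07, 1.00] - no longer 4:2:1

# Stretch luminance only, then re-apply the original RGB ratios per pixel
lum = rgb.sum()
stretched_lum = lum ** 0.1
preserved = stretched_lum * (rgb / lum)
print(preserved / preserved[2])         # [4.0, 2.0, 1.0] - color ratio preserved
```
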
  7. I just took your image and a piece of Windows software called IrfanView (a very nice image viewer that can also do basic manipulation like cropping / scaling and such) and downsampled it by x4 and x5 (to 25% and 20% of the original size).
  8. What is troubling you? The fact that stars look bloated? Let's see - you used an ASI290 and 1500mm of focal length. That combination gives 0.4"/px. In good to average seeing you are probably going to have to sample at about 1.6-2.0"/px in long exposure, so you are over-sampled by a factor of x4 to x5. Here is your image at 1.6"/px instead: That looks much better, doesn't it? At 2"/px it looks even better - detail is sharp and stars are point-like.
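
     For reference, a small sketch of the sampling arithmetic behind those numbers (assuming the ASI290's 2.9 µm pixels):

```python
def pixel_scale(pixel_um, focal_mm):
    # Pixel scale in arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)
    return 206.265 * pixel_um / focal_mm

native = pixel_scale(2.9, 1500)     # ~0.40 "/px
target = 1.6                        # "/px, reasonable for good-to-average seeing
print(native, target / native)      # oversampled by roughly a factor of 4
```
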
  9. It might be - but let's imagine the following scenario: some sort of sigma reject algorithm is used. You can't really carry sigma reject information over once the stacks are made, and it will work differently depending on whether you are stacking half or the full set of subs. If one night has different conditions than the other, overall signal values might be different on that night. A transparency change of say 0.1 magnitude can make the signal 10% higher. Similarly, a slight change in LP levels can alter SNR. Variances / standard deviations of the two sets and of the combined set will be different, so sigma reject will work differently and produce different results.
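
     A toy sketch of that effect, assuming a plain single-pass sigma-clipped mean and made-up pixel values (kappa and the numbers are arbitrary):

```python
import numpy as np

def sigma_clipped_mean(values, kappa=1.5):
    # Single-pass sigma clip: reject samples further than kappa * sigma from the mean
    values = np.asarray(values, dtype=float)
    kept = values[np.abs(values - values.mean()) <= kappa * values.std()]
    return kept.mean()

night1 = [100, 101, 99, 100, 130]   # one outlier sub (e.g. satellite trail)
night2 = [110, 111, 109, 110, 112]  # slightly brighter sky on the second night

# Clipping each night separately and averaging the results...
separate = (sigma_clipped_mean(night1) + sigma_clipped_mean(night2)) / 2
# ...is not the same as clipping the combined set, because the combined set
# has a different mean and standard deviation, so different samples get rejected.
combined = sigma_clipped_mean(night1 + night2)
print(separate, combined)           # 105.0 vs ~105.78
```
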
  10. As I explained above, a simple weighted average does a very poor job. Here is an example - imagine you have only 3 different levels of signal in the image - background, the outer parts of a galaxy and the core. We have two subs that we want to combine with weights. One is shot with a 20% higher LP level. Say that for the first sub we have: 100e from LP, 10e of signal from the outer parts of the galaxy and 150e from the core. Read noise is 2e. In the second sub we have 120e from LP, 10e of signal from the outer parts of the galaxy and 150e from the core. Again, read noise is 2e. In the first sub, the (wanted) signals will be 0e for the background, 10e for the outer parts and 150e for the core. Noises will be ~10.2e, ~10.68e and ~15.94e for background, outer parts and core respectively. In the second sub, again the (wanted) signals will be 0e, 10e, 150e, but this time the noises will be ~11.14e, ~11.58e and ~16.55e - again for background, outer parts and core, in that order. What weight would you assign to each of those two subs and why?
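
     The noise figures above are just shot noise (from LP and target) and read noise added in quadrature; a small sketch that reproduces them:

```python
import numpy as np

def sub_noise(lp_e, signal_e, read_noise_e=2.0):
    # Total per-pixel noise: Poisson (shot) noise of sky + target, plus read noise, in quadrature
    return np.sqrt(lp_e + signal_e + read_noise_e**2)

for lp in (100, 120):
    print([round(sub_noise(lp, s), 2) for s in (0, 10, 150)])
# -> [10.2, 10.68, 15.94] for the first sub and [11.14, 11.58, 16.55] for the second
```
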
  11. No you can't. (a+b) / (c+d) is not equal to a/c + b/d; a/c + b/d = (a*d + b*c) / (c*d). So the first stack, which is 30 / 3, and the second stack, which is 18 / 2, can't be added as (30 + 18) / (3 + 2) once you have already stacked them. If you stack the first stack and get 30/3 = 10, and you stack the second stack and get 18/2 = 9, and give those to someone else - they will just have 10 and 9, and the only thing they'll be able to do is average those two. They will have no idea that you used 3 subs in the first stack and 2 subs in the second stack (if you don't provide them with that information).
  12. But that is exactly the same as stacking all the subs at once, so no wonder you get the same result. You added all the values and then divided by the number of added values - the fact that you put brackets around them does not mean you did a different procedure.
  13. Depends on the stacking algorithm used and conditions on both nights, as well as the number of subs produced each night. Here is a simple example of how it can differ if you happen to have a different number of subs taken on each night. Imagine you have the following set of data: 10, 10, 10, 9, 9. Stacking all of the data (taking the average) gives 48 / 5 = 9.6. Now imagine you took 10, 10, 10 on the first night and 9, 9 on the second night. The stack of the first night (average) will give you 10, while the stack of the second night will give you 9. In the end, you average the two: (10 + 9) / 2 = 9.5. 9.6 is not equal to 9.5.

     Depending on the data, stacking it all together can give a different result than stacking each night and then stacking the results. In the ideal case - the same number of subs with the same SNR each and the average stacking method - stacking all together vs two separate stacks will produce the same result, but there is no guarantee that the SNR of each sub will be the same on two consecutive nights - LP changes, imaging time changes, possibly the number of subs changes, seeing changes and transparency changes.

     Again, that will depend on the stacking algorithms. Most stacking algorithms are simply not geared towards handling subs with different SNR efficiently. Readily available algorithms can be divided into two groups - ones that try to handle SNR differences and those that don't. Ones that try to handle SNR differences often use a single number as a weight to determine how much a sub should contribute to the final stack - however, this is a sub-optimal solution. There is no single SNR value for an image. Here is a very simple example - there is a galaxy and background. The galaxy contains some signal and hence has non-zero SNR. The background does not contain any signal and has 0 SNR. This is of course over-simplified, as there is a whole range of signal values and associated noises in the image, so there is a broad range of SNRs in a single image. There is no single weight that will satisfy such a wide range of SNRs present in different subs.

     I don't think there is an automatic way to do what you propose - even pixel math afterwards is going to be rather flawed at best in combining such data. In theory one could devise a method to add two or more such sets of data, depending on what they want to achieve with "addition". One thing that comes to mind is combining R+G+B to form luminance and combining that with the L filter. What method of addition is to be used depends on the responses of the R, G and B filters themselves, and the weight for each of the components (at each pixel) will depend on the standard deviation of the samples.
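
     The arithmetic from the first paragraph, spelled out as a minimal sketch (plain averaging only):

```python
import numpy as np

night1 = np.array([10.0, 10.0, 10.0])
night2 = np.array([9.0, 9.0])

stack_all = np.concatenate([night1, night2]).mean()         # 48 / 5 = 9.6
stack_per_night = np.mean([night1.mean(), night2.mean()])   # (10 + 9) / 2 = 9.5
print(stack_all, stack_per_night)
```
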
  14. Did you bin in software? You can always process data unbinned to see if you'll get better result. Do you know what sort of FWHM you have in the image?
  15. Yes you can. It is not as comfortable as with slow motion controls, but you can observe. I have two alt-az mounts - an AZ4 and a dob-mounted 8" scope (a dob is also a type of alt-az). Both lack slow motion controls, and I use both for high-power planetary viewing (sometimes even up to x300).
  16. That image was taken with a QHY5IILc camera, which I no longer have. It was taken using the so-called lucky planetary imaging technique. A succession of very short exposures - around 5ms - is taken, usually in the form of video (but be careful not to use a video format that uses compression - the SER file format is the best), and then it is processed in appropriate applications - PIPP for pre-processing, AS!3 (AutoStakkert!3) for stacking, and I finish the image in Registax with wavelet sharpening and final post-processing (color balance and such). If you browse the planetary section of SGL, you'll find many threads discussing the best "parameters" for lucky imaging. There are a few video tutorials on YouTube - just search for planetary imaging / lucky imaging.
  17. Making an Alt-Az mount is really not that hard at all. Do an internet search for "pipe mount" to get an idea of what is involved (most of them will be EQ type, but the difference between Alt-Az and EQ is just in how you put it - on a flat surface or on an inclined plane / wedge). Here is one article on how to build one: https://www.cloudynights.com/articles/cat/articles/how-to/how-to-make-an-alt-az-pipe-mount-r1873 Slow motion controls will be much more difficult to make, as you need to get / make a worm and worm gear. You can in principle make a worm and worm gear with a standard bolt, a tap for the same thread and a hand drill - look at this video for example: https://www.youtube.com/watch?v=19jKlq8Ofd4
  18. 150. It appears that the 130m has a spherical mirror, although I've done some decent planetary images with said scope: Ideally, you would want a 150PL F/8 instead of both of those, but I'm guessing you don't have one lying around?
  19. I haven't updated mine, so I will need to do it. The above screenshots were taken with earlier versions. In any case, ASCOM was recommended due to the default of 8-bit with the native driver and the ability to set 16-bit in ASCOM - it seems to default to 16-bit now for native as well, so there might not be a benefit in choosing ASCOM with this version.
  20. I can't find anything related to this in phd2 change log ...
  21. Are you using ASCOM driver or native one, and is there option to set 16bit vs 8bit in version of PHD2 that you are using?
  22. I was not aware of that. If you look at screenshots of people's PHD2 screens, you'll notice something - SNR is usually in the 30-40, maybe 50 range: That is indicative of an 8-bit camera. With ASCOM drivers I'm getting much higher SNR - in the hundreds (I don't have an image to show). Another indicator is the max star value found in the star profile - it is usually around 100 or so for most people. I get nice "16-bit" numbers with the ASCOM driver: I have no idea where that setting is - maybe in the latest version of PHD2? In version 2.6.5 it is not there:
  23. For many cameras, the native driver is optimized for high FPS and the SNR in a single exposure is not as good as with ASCOM. Guiding is similar to imaging - you want the best data you can have, and ASCOM is the preferred method for long exposure imaging. Another thing is that 8-bit seems to be sufficient for guiding, but I've found that 16-bit gives better results, and it is much easier to get 16-bit data from ASCOM (I can't choose 8/16-bit with native drivers in PHD2 on my camera). Because the level of bias / dark signal depends on the gain setting. If one knew the exact e/ADU values for different gain settings, one could scale one dark to match the gain of another - but it is far simpler to take another set of darks and not bother with all that gain stuff.
  24. I don't have direct experience, but I have seen multiple warnings that the PTFE tube in the hot end degrades / breaks up / whatever at temps higher than say 245C - which makes it shorter and stiffer, and in general creates gaps in the assembly where filament can get stuck and cause clogging. I did read people saying that if you only occasionally print high-temp material you'll be OK for a year or two before needing to service the hot end, but you seem to use that material almost exclusively.