Everything posted by vlaiv

  1. OK, so here is the result of a quick and simple process in Gimp (just levels/curves, rotation and crop): And here are the steps. I first loaded the image in Gimp and saw that the background was a bit bright as is, so my first step was to raise the black point a bit: Be careful not to move the left (black) slider too far - bring it to the foot of the histogram but leave a bit of space. We will deal with any residual space later. The second step was a non-linear stretch, again using levels. Take the middle slider and move it down towards the bulk of the histogram, like this: This non-linear step turns the small gap between the black point and the foot of the histogram into a rather large one, so the next step was to remove it again: Same thing - we bring the black point just short of the histogram foot. Now we see that we still don't have enough signal visible, so we again use the middle slider and pull it down; after that we again adjust the black point: Then I did a vertical and horizontal flip to orient the image the way I like it, and one final curves stretch: When doing curves, I create one anchor point below the histogram peak. I don't move that point - it serves to leave the background as is. I then take another point in the middle of the curve and raise it until it brightens up the detail in the image. That is it. If you look at the histogram: It ends before the left side - the black point is not clipping any of the data - and it shows in the image. I'm now going to experiment a bit more with this data - namely, remove that background gradient and bin the data, since it is not very sharp and does not need such a high sampling rate. Will post results again.
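The levels steps above can be sketched numerically. A minimal numpy sketch of what Gimp's Levels tool does (black/white point rescale followed by the midtone/gamma slider) - the function name and parameters are my own for illustration, not Gimp's API:

```python
import numpy as np

def levels(img, black=0.0, white=1.0, gamma=1.0):
    """Mimic Gimp's Levels tool on a [0, 1] image:
    rescale between black and white points, then apply the
    midtone (gamma) slider as out = in ** (1 / gamma)."""
    out = np.clip((img - black) / (white - black), 0.0, 1.0)
    return out ** (1.0 / gamma)

img = np.array([0.05, 0.1, 0.55, 1.0])
step1 = levels(img, black=0.1)    # raise black point to the histogram foot
step2 = levels(step1, gamma=2.0)  # pull the middle slider down to brighten
```

Repeating `levels` with small black-point moves and modest gamma values, as described in the post, gradually lifts the faint signal without clipping.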
  2. This is what I thought as well. The wording in that document is a bit confusing.
  3. If you post your linear data, I'll be happy to do a basic stretch in Gimp and document the steps for you.
  4. Even with star reduction, it is obvious at 100% zoom that you are oversampled. Have you considered binning the data at the linear stage - at least x2, maybe even x3? Even with x3 binning you'd still have an "HD compatible" format of about 1770 x 1100. My second objection is a very strange effect around the stars: it looks like there is a "smooth" halo around all the bright stars, which is rather distracting:
  5. Check out this thread - maybe you don't need anything extra to shoot wider-field shots with your camera:
  6. You have quite killed the background / black point. Look at the histogram: it is glued to the left side, and it looks like part of it is missing "outside" the histogram. That is clipping of the black point - not a good thing to do in image processing. Try reprocessing your image, this time keeping a nice bell-shaped curve in the histogram that sits almost completely to the left but does not touch the left side.
  7. I would not do that. It might fix the edge stars, but it will almost certainly render the scope non diffraction limited on axis. All correctors / flatteners do that. They were made for long-exposure imaging, where the atmosphere has a much greater impact on image sharpness, so one can afford not to be diffraction limited. With lucky imaging - well, you want the best sharpness the instrument can provide. Here - look at the F/6.3 reducer paired with a C11: In particular, look at the "AXIS" part in the bottom middle section. The spot diagrams are larger than the Airy disk, and the Strehl ratio is reduced to 0.695 even on axis.
  8. May I ask what is your budget and what do you hope to achieve?
  9. I'd reconsider the whole ordeal. First, the ASI1600 has 3.8µm pixels and you plan on using a Ha filter with it. For that combination the ideal F/ratio is F/11.6. You'll be much closer to it without the barlow than with it - with the barlow you'll be very oversampled. The second problem I see is off-axis aberrations. I don't really know how much coma an SCT produces. I think I read somewhere that coma in an SCT is comparable to a newtonian of the same aperture at F/8 (not sure about that - but let's go with it). Coma in a newtonian is a function of distance from the optical axis, and the diffraction-limited field is given by h = F^3/90, where F is the F/ratio and h is the distance from the optical axis in mm. An F/8 newtonian will therefore have a diffraction-limited field of 8^3/90 = 5.6889 ≈ 5.7mm. That is the radius of the diffraction-limited field; the diameter will be about 11.4mm. The diagonal of the ASI1600 is 21mm, which means that only about the inner quarter of the field (by area) will be diffraction limited, and the rest will be distorted. You can actually see this in SCT DSO images: Center-field stars with a C9.25 and ASI1600: Top right corner: mind you - this is scaled down to about 1/4 to get a meaningful resolution for DSO; the true star shape will be: (again - the stars are very big due to the long exposure and seeing, but you can see that coma extends several pixels from the center of the star). In the end, here are the settings I'd recommend: 1. Use a ROI of 1920×1680 px at 12-bit capture (16-bit SER) 2. Use 261 gain 3. Limit your exposure to 5-6ms regardless of how faint the image looks or what the histogram says 4. Do get 10000+ frames - that means a 6-7 minute capture per panel (30fps at 1920x1680 at 16 bit) 5. You'll need more panels to cover the whole lunar disk, but if you are not capturing a full moon you might need fewer than the number that covers the whole disc - there is no point capturing beyond the terminator, as it will just be black like the surrounding space. If you fix your exposure length - and one should - then shot noise is fixed by the amount of signal one gets.
Changing gain will not change SNR in that regard. Higher gain just lowers read noise, so overall SNR per sub will be slightly better.
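The coma field arithmetic above can be checked with a few lines. A sketch under the post's own assumptions (SCT coma comparable to an F/8 Newtonian, ASI1600 sensor diagonal of 21 mm); the function name is mine:

```python
def coma_limited_radius_mm(f_ratio):
    """Radius (mm) of the diffraction-limited field of a Newtonian,
    using the rule of thumb h = F^3 / 90 from the post."""
    return f_ratio ** 3 / 90.0

radius = coma_limited_radius_mm(8)           # ~5.69 mm
diameter = 2 * radius                        # ~11.4 mm
diagonal = 21.0                              # ASI1600 sensor diagonal in mm
area_fraction = (diameter / diagonal) ** 2   # ~0.29, roughly the inner quarter by area
```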
  10. That depends on where you found those graphs. I'd be inclined to trust the one published on the QHY website; the other one might be an early / beta version or something like that. The alternative, of course, is to measure the camera yourself, but that is somewhat involved.
  11. That is rather easy to do, and you can do it yourself. How much difference you'll see depends on your equipment and style of imaging. The largest difference will be if you have high dynamic range subs and you stack lots of them. Take any of your old images with a decent number of stacked subs - like 100+ - with something faint in the image, like IFN. Take the 32-bit linear stack, make a copy, and convert the copy to 16 bit. Now you have the same data in both 32-bit floating point and 16-bit integer. Process the 32-bit version, but repeat each step on the 16-bit image (in Gimp this is easy, as it remembers each stretch or action as the "last used preset"; I guess PS can do the same). At some point you'll start to see that the faintest parts are not the same - the 16-bit version will have more grain.
  12. Take a look at this thread to see what is possible with an ASI178 + FC100DL:
  13. It's just about covering the target. With the FS60Q it would be like this: Yellow is the ASI178 - it can work at native F/10. Cyan is the ASI290 - it needs a x1.15 barlow to reach critical sampling. Pink is the ASI224 - it needs a x1.5 barlow to reach critical sampling. You can cover the whole moon in just 2 panels with the ASI178, while you'd need at least 5-6 with the ASI290 and probably 8 or more with the ASI224. With a larger scope, the number of panels starts to rise.
  14. How did you figure that out? The two sensors are identical as far as pixel count and pixel size go. In any case, it goes like this. The telescope - or more precisely, the aperture size - dictates what sort of detail you'll be able to capture. There is a maximum level of detail that a certain aperture can resolve, and you need a certain pixel size to record that level of detail. That pixel size depends on aperture size and focal length. You can change the focal length of a scope by using a barlow or telecentric lens. In any case, if you want to capture the best possible detail, you should match pixel size to the F/ratio of the scope (plus any barlow used). You have a limited selection of pixel sizes available in planetary cameras: 2.4µm, 2.9µm, 3.75µm and a few more. For full spectrum: 2.4µm matches F/9.6, 2.9µm matches F/11.6, 3.75µm matches F/15. For Ha light it's a bit different: 2.4µm matches F/7.32, 2.9µm matches F/8.85, 3.75µm matches F/11.44. The general formula is F/ratio = pixel size * 2 / wavelength of light (where full spectrum is usually taken at 500nm and narrowband at the particular wavelength). Noise performance for planetary work comes down to two important numbers - QE and read noise; you want the first as high as possible and the second as low as possible. Once you have all of this, FOV is simply dictated by the size of the sensor. For lunar and solar work one wants a larger sensor, while planets require only a very small sensor.
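The general formula above fits in a two-line function; the values below reproduce the matches quoted in the post (the function name is mine):

```python
def critical_f_ratio(pixel_um, wavelength_nm=500.0):
    """Critical-sampling focal ratio: F = 2 * pixel_size / wavelength,
    with pixel size in micrometres and wavelength in nanometres
    (full spectrum taken at 500 nm by default)."""
    return 2.0 * pixel_um / (wavelength_nm / 1000.0)

# Full spectrum (500 nm):
critical_f_ratio(2.4)          # ~9.6
critical_f_ratio(3.75)         # ~15.0
# Ha narrowband (656 nm):
critical_f_ratio(2.9, 656.0)   # ~8.84
```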
  15. If you are interested mostly in lunar, then a mono sensor makes sense, as you can use NB filters with it to get sharper results. What F/ratio are your Taks? As far as I can see, the smaller one is F/10 and the larger is F/7.4? It would be good to get them both to the same or a close F/ratio, and then match that F/ratio to pixel size. Scopes around F/8-F/10 should aim for a 2.4µm pixel size. Sensor size plays a part in lunar imaging, as it takes fewer panels to do a full disk image. I think I'd go with this: https://www.firstlightoptics.com/zwo-cameras/zwo-asi178mm-usb-3-mono-camera.html It is a bit over budget. If you absolutely must stay on budget, then this one: https://www.firstlightoptics.com/zwo-cameras/zwo-asi224mc-usb-3-colour-camera.html
  16. Here - look at the histogram of the 32-bit floating point version after binning (binning adds fractional parts, same as stacking) and after the initial stretch: versus the same image, same stretch - only the 16-bit version: The top histogram is "full", with a smooth variation of values, while the bottom one has only a few distinct values.
  17. I usually bin in ImageJ, where I do some of my linear processing (like background removal and such). Some other free software also supports binning. I think it is better to bin later, at the processing stage (while the data is still linear), than at imaging time, as you lose the flexibility to decide whether you need to bin and by how much. Your camera is already 16 bit, so each sub you record is 16 bit. When you stack two subs, you create their average. Say you have the number 7 in one pixel in the first sub and the number 6 in the same pixel in the second sub. Their average is clearly 6.5 - which you can write in floating point precision, but which you need to round up or down in 16-bit integer precision (as it can store only whole numbers). There you go - we just introduced an error of 0.5 by rounding in the above example - an error that we don't need to introduce if we use the 32-bit floating point format. In fact, you can stack hundreds of subs in 32-bit float without error - even with fully 16-bit cameras - and if you stack even more than that, the error you introduce is exceptionally tiny due to the nature of floating point precision. If you don't use 32 bit, you'll struggle to get smooth faint areas after stretching. Imagine you have a pixel value of 3 and a pixel value of 4 and nothing in between - that is just two shades - but in floating point you can have 3, 3.001, 3.002, ... up to 3.999 (in fact not only 3 digits, but more than ten digits) of shades. This means that you'll have a smooth transition after stretching rather than posterization in that value range.
  18. I quickly processed that no-filter version, and here is what I came up with: Now, I did not mess with saturation and all that - just basic levels and some noise control. This was done in Gimp. My recommendation would be: 1. Use 32-bit floating point 2. Consider binning the data, as it is oversampled 3. Apply careful noise reduction
  19. Well, it is 2022 - time to switch to software that supports 32-bit floating point images. Gimp does. DSS can export 32-bit floating point data without problems.
  20. Well, for a start - use the 32-bit floating point format instead of the 16-bit format with a camera that is already 16 bit. You are truncating a lot of quality data this way.
  21. Yes it is. I was hoping it was the 2" version and that you could put it in front of the reducer.
  22. There seems to be, but I'm not sure how this can help you get proper calibration, though. What was the distance between the TV and the scope, and were there other sources of light in the room? Never mind that - I just saw your post where you calibrated with darks only - there is still a very uneven background in that image. What is your imaging train like? Describe the placement of the components, will you? In particular, whether the L-eXtreme is placed before or after the reducer. What size is the filter - 1.25" or 2"? I'd try the following: place the filter in front of the reducer so it is in the slower F/6 beam, and maybe combine it with a UV/IR cut filter, just in case, to see if it makes a difference.
  23. There are two possible causes. 1. Orthogonality. I'm really not sure how to handle that, because it is a deeper mount issue - it means that the angle between the DEC and RA axes is not 90°. As far as I know, there is no way to adjust it. 2. Cone error. You can fix cone error yourself by adjusting the scope on the dovetail. Depending on the type of scope and dovetail, you might already have what it takes. For example, the Skywatcher dovetail comes with four bolts: which you can use to tilt the scope and fix the cone error. You can use software to help you with that, but you can also use a simple method - pre- and post-meridian centering. https://www.cloudynights.com/articles/cat/articles/a-simple-cone-error-correction-procedure-r2447
  24. Hi and welcome to SGL. I see two things: first, the stars are bluish, and second, the channels are not properly aligned. Given that this is a false-color image - made from narrowband data - the stars simply won't have proper color. You are recording very specific wavelengths and assigning them to whole parts of the spectrum (when doing the RGB compose). If you look at an iconic image - the Pillars of Creation, by Hubble - you'll see very strange star color as well: Pink stars? That does not happen. Stars lie on the Planckian locus and have one of the following colors: If you want to try to fix the channel shift, I suggest that you register the channels against one of them (say OIII and SII against Ha). That should align the data between channels.
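Registering the channels against each other can be done with simple phase correlation, if your stacking software doesn't offer it. A minimal numpy sketch for integer-pixel shifts (the function name is mine; real registration tools also handle sub-pixel shifts and rotation):

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (dy, dx) shift that maps ref onto moved,
    via phase correlation of the two frames."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    R /= np.abs(R) + 1e-12                    # keep only the phase information
    corr = np.fft.ifft2(R).real               # correlation surface peaks at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Usage sketch (ha, oiii are hypothetical 2D channel arrays):
#   dy, dx = estimate_shift(ha, oiii)
#   oiii_aligned = np.roll(oiii, (-dy, -dx), axis=(0, 1))
```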