Everything posted by jager945

  1. Your latest dataset is a massive improvement! There are some small remaining issues, but they mostly have to do with the stacker. These issues are channel misalignment (causing coloured fringing) and aberrant chromatic information in the highlights (overexposed areas). You can see these issues here: There is much I can say about some of the grave misconceptions above, but hopefully the most obvious thing to anyone is that space is not brown - not at any chosen white reference - and that stars are not colourless, nor are light-reflecting objects. As mentioned (but not dem…
  2. Good to hear you're back on track. @AbsolutelyN brings up a good point; the L-Pro filter will block some light pollution, which will help record targets with specific emissions that are not in the same part of the filtered spectrum (Ha, Hb, O-III, etc.). However, galaxies in particular emit at all wavelengths, so you are definitely filtering out some "useful" light. Whether that is worth it versus blocking the unwanted light from light pollution depends on the situation, of course. Spectrum response at the Optolong site: https://www.optolong.com/uploads/images/20191111/8b905545f314b5…
  3. Hi, I had a quick look at your dataset, but there are several pre-processing and acquisition issues that really need to be addressed. If using StarTools, please use the recommended settings for DSS:
     • The dataset has been drizzled for no good reason.
     • There is severe pattern/correlated noise visible (probably due to drizzling).
     • The dataset was not calibrated with flats - they are really not optional.
     • The dataset was colour balanced.
     The lack of flats in particular will keep you from progressing. For recommended DSS settings for use with StarTools, see here.
  4. To both of you, please feel free to share a stacked, typical night's worth of data with me (here, via the ST forums, PM, whatever works for you). Along with that, a StarTools rendition that you're not happy with and - optionally - some other image that you do like. I'd be happy to give you any pointers. Oftentimes, things are trickier if your data is not clean. The assumption is that the only noise in your image is shot noise (from the signal) and nothing else. That's when ST can really stretch its legs and the engine can properly prove its worth. Something close to this ideal (being shot…
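     A minimal sketch (not StarTools code; all numbers are assumed for illustration) of that ideal: when shot noise from the signal itself is the only noise source, the SNR of a stack grows predictably with accumulated signal, while any extra noise source (read noise here) degrades the result.

```python
# Toy comparison of a shot-noise-limited stack vs one with read noise.
# true_flux, n_subs and read_noise are made-up illustrative values.
import numpy as np

rng = np.random.default_rng(42)
true_flux = 100.0     # mean photons per pixel per sub-exposure (assumed)
n_subs = 36           # number of stacked sub-exposures (assumed)
read_noise = 8.0      # read noise in electrons RMS (assumed)

# Ideal stack: Poisson (shot) noise from the signal and nothing else.
shot_only = rng.poisson(true_flux, size=(n_subs, 1000)).mean(axis=0)

# Same stack with Gaussian read noise added to every sub-exposure.
noisy = (rng.poisson(true_flux, size=(n_subs, 1000))
         + rng.normal(0, read_noise, size=(n_subs, 1000))).mean(axis=0)

for name, stack in (("shot noise only", shot_only), ("with read noise", noisy)):
    print(f"{name}: SNR = {stack.mean() / stack.std():.1f}")
```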
  5. This is the third time you have misquoted what I said. It really doesn't help this back-and-forth if you keep doing that. Where did I ever claim there are no illuminants in outer space? Every single star is an illuminant. Emissions can be illuminants. Anything that emits visual spectrum light can act as an illuminant for a scene if that light is reflected. What is this "defined" standard? You keep conflating the target medium illuminant (D65 in the case of sRGB) with the source colour space illuminant and transform - for which there is no "defined standard". They are two different things…
  6. Indeed. But... I think we found the problem. You seem to be under the mistaken impression there is only one illuminant in the story. There are - at the very least - always two (and almost always more). There is at least one illuminant (and associated white point/power spectrum) for the scene, whether it is the sun, moon, indoor lighting, etc. There is at least one illuminant (and associated white point/power spectrum) associated with your display medium. Fun fact: in the case of the sRGB standard, there are actually two more illuminants involved; the encoding…
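     To make the two-illuminant point concrete, here is a small numeric sketch: a colour measured under a scene illuminant (CIE A, i.e. incandescent, is assumed here) being chromatically adapted to the sRGB display illuminant (D65). The Bradford transform used below is one standard way to do this; it is an illustration, not any particular software's implementation.

```python
# Bradford chromatic adaptation from scene illuminant A to display D65.
# The matrix and white points are standard published CIE/Bradford values.
import numpy as np

M = np.array([[ 0.8951,  0.2664, -0.1614],
              [-0.7502,  1.7135,  0.0367],
              [ 0.0389, -0.0685,  1.0296]])
wp_A   = np.array([1.09850, 1.0, 0.35585])   # scene illuminant (CIE A)
wp_D65 = np.array([0.95047, 1.0, 1.08883])   # sRGB display illuminant

# Von Kries-style diagonal scaling in the Bradford cone space.
scale = np.diag((M @ wp_D65) / (M @ wp_A))
adapt = np.linalg.inv(M) @ scale @ M

xyz_scene = np.array([0.5, 0.4, 0.2])        # a colour under A (made up)
print("adapted to D65:", adapt @ xyz_scene)
```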
  7. I'm not sure if you are joking here by playing word games? The alternative is that you really don't understand (or even acknowledge) the massive difference between the partially reflected light from an illuminant, with its power spectrum qualities, and, say, a direct monochromatic light source. Let's try a thought experiment: I go into a large, empty hall, somewhat dimly lit by 100+ light fixtures with old incandescent bulbs powered at unknown, different voltages. Some are brighter (and thus whiter/yellower), some are dimmer (and thus redder). In the room is a rare rainbow-coloured bird-of-paradise…
  8. Oof... my head hurts from the Gish gallop and contradictions, again offered without actually perusing the information shared with you. If we are exclusively dealing with emissive cases (we are not), then why on earth are you calibrating against an exclusively reflective target like a Macbeth chart, purely relying on subtractive colour, as lit by a G2V-type star at noon under an overcast sky (i.e. the D65 standard illuminant as used by sRGB)? None of your calibration targets here are emissive! Also, you are flat-out wrong that the illuminant is irrelevant. Not only is its influence explai…
  9. Mate, the fact that you commented within minutes on a massive, long dissertation about colour space conversions means you are either an amazing speed reader of scientific literature, or more interested in "being right" than in actually furthering your understanding of the subject matter. The comprehensive colour space conversion research page is literally titled "Camera to CIE XYZ conversions: an ill-posed problem". It literally goes into all the different subjective decisions that need to be made when constructing a solution to the matrix. I cannot fathom how you can maintain construction…
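     To see why the problem is ill-posed, here is a toy sketch (entirely synthetic, made-up numbers) of the usual construction: fitting a 3x3 camera-to-XYZ matrix by least squares over a chosen set of training colours. Pick a different training set (or illuminant) and you get a different, equally "valid" matrix.

```python
# Fit camera->XYZ matrices on two different patch subsets and compare.
import numpy as np

rng = np.random.default_rng(0)
xyz = rng.uniform(0.05, 1.0, size=(24, 3))    # "true" XYZ of 24 patches (made up)
cam = xyz @ rng.uniform(0.2, 1.0, size=(3, 3)) \
      + rng.normal(0, 0.02, size=(24, 3))     # camera RGB with sensor noise

# Two different training subsets yield two different "calibrations".
for subset in (slice(0, 12), slice(12, 24)):
    M, *_ = np.linalg.lstsq(cam[subset], xyz[subset], rcond=None)
    print(M.T.round(3))
```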
  10. Then let's add some slightly more relevant links as well:
     • https://jakubmarian.com/difference-between-violet-and-purple/ (contradicting the incorrect assertion that "in fact color is light of a certain spectrum")
     • http://www.cvc.uab.es/color_calibration/SpaceConv.htm (contradicting the incorrect assertion that "every manufacturer could produce RAW to XYZ color transform matrix as that absolutely does not depend on any conditions")
     If anyone is interested, in good faith, in more information or further explanations, do let me know.
  11. I'm afraid none of that is true. I recall a similar exchange with you some time ago in which I urged you to study up on this and gave you the sources to do so. It saddens me you have not taken that opportunity.
  12. A green-dominant OSC/DSLR dataset tends to be a good sign. All you need to do is establish and subtract - in the linear domain - the bias for each channel (indeed, ABE/DBE or Wipe will do this). Then you colour balance - again in the linear domain - by multiplying each individual channel by a single factor per channel. You will now get the expected colouring (in the linear domain). Macbeth charts and/or calibration against them are irrelevant to astrophotography, where objects are mostly emissive and cover a huge (virtually infinite) dynamic range. Nothing is lit by a uniform light source at a…
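     For illustration only, a minimal numpy sketch of those two linear-domain steps. This is not how Wipe/ABE/DBE are implemented; the percentile-based bias estimate and the per-channel factors are assumptions made for the sketch.

```python
# Per-channel bias subtraction, then per-channel scaling, in the linear domain.
import numpy as np

def calibrate_linear(img, factors=(1.9, 1.0, 1.5)):
    """img: linear, unstretched float stack of shape (H, W, 3).
    factors: illustrative per-channel white balance multipliers."""
    out = np.empty_like(img)
    for c in range(3):
        channel = img[..., c]
        bias = np.percentile(channel, 1)   # crude per-channel bias estimate
        out[..., c] = (channel - bias) * factors[c]
    return np.clip(out, 0.0, None)

# Usage: balanced = calibrate_linear(np.load("stack.npy"))  # hypothetical file
```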
  13. Indeed, if StarTools is not playing ball with your datasets, then feel free to share one with me (Dropbox, Google Drive, WeTransfer, OneDrive, etc.). If StarTools needs lots of babysitting, then, almost always, the problem is an acquisition issue (bad flats, not dithering, incorrect stacker settings, not linear/virgin anymore, etc.). You can find a fairly comprehensive list here. When evaluating post-processing software, the worst mistake you can make is judging software on its ability to hide issues. Hiding issues is not what post-processing is about. If that's mostly what you are d…
  14. Hi, ISO in the digital era is a rather confusing subject, particularly if you come from a daylight/terrestrial background. For earth-based photography you can use the exposure-triangle rule of thumb, but once you get into low-light photography where every photon counts, this falls apart rather quickly. The "problem" is that you are dealing with one digital sensor that only has one fixed sensitivity. To emulate other sensitivities, signal is either thrown away or artificially boosted. Both of these operations are detrimental to your final signal. For this reason, in astrophotography…
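     A tiny sketch of why emulated sensitivities cannot help: multiplying a recorded signal by a gain factor (the "boost" case above) scales signal and noise equally, leaving the signal-to-noise ratio unchanged. Values below are illustrative only.

```python
# Digital gain scales signal and noise alike, so SNR is identical.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.poisson(50, size=100_000).astype(float)   # toy photon counts
for gain in (1.0, 4.0, 16.0):                          # emulated "ISO" boosts
    boosted = signal * gain
    print(f"gain {gain:>4}: SNR = {boosted.mean() / boosted.std():.2f}")
```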
  15. Hi, I think your data acquisition efforts and precious time spent under the night skies are let down by not calibrating with flat frames. They are really not optional; they would calibrate out the dust smudges and severe vignetting. Right now any signal is mired in uneven lighting, making it impossible for people (and algorithms) to discern artifact from faint signal. Have a look here for some important do's and don'ts when acquiring and preparing your dataset for post-processing.
  16. Absolutely! Sounds like fun. Do let me know what the procedure, preferred format and times, etc. are.
  17. This is fantastic data as always, Grant! The growing body of IKO datasets is among my favourite testing datasets, as they are good and clean, but still have the tiny flaws that come with the "mere mortal" nature of the equipment and terrestrial location. This one in particular is a great dataset to learn a rather specific compositing situation with: adding Ha detail to an object that mostly emits in other wavelengths. I made a post on the ST forums on combining the Ha and visual spectrum data for this specific case. I hope OP and the mods are OK with linking to it here (given I'm a v…
  18. A "circular gradient" sounds like a flats issue; vignetting and unwanted light gradients are rather different things with different origins (uneven lighting vs unwanted added light) and have different algorithmic solutions (subtraction vs division). Basic sample-setting algorithms for gradient modelling and removal as found in APP/PI/Siril are very crude tools that, while getting you results quickly, make it very easy to destroy faint detail (e.g. think faint IFN around M81/M82 etc.). In essence, they ask you to guess what is "background" in various places. Unfortunately it is impo
  19. Congrats on a fine first SHO image! Purple stars are extremely common due to stars' low emissions in the Ha band. Some people find the purple objectionable (I can personally take it or leave it). If you are one of those people though, there is a simple trick in ST to get rid of the purple:
     • With Tracking off, launch the Layer module. For Layer mode, choose 'Invert foreground'.
     • Notice how the inverted image now shows those purple stars in green. The Color module has a Cap Green function. Set this to 100%.
     • Launch the Layer module again, once again choosing 'Invert foreground'…
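     For those curious why this works: purple is green's complement, so capping green in the inverted image removes purple in the final one. Below is a toy numpy sketch; interpreting 'Cap Green' as limiting green to the mean of red and blue (SCNR-style) is my assumption for illustration, not necessarily ST's exact formula.

```python
# Invert -> cap green -> invert back, on a float RGB image in [0, 1].
import numpy as np

def remove_purple(img):
    inv = 1.0 - img                        # purple stars now appear green
    cap = np.minimum(inv[..., 1], (inv[..., 0] + inv[..., 2]) / 2)
    inv[..., 1] = cap                      # 'Cap Green' (assumed SCNR-style)
    return 1.0 - inv                       # invert back; purple is gone

print(remove_purple(np.array([[[0.8, 0.2, 0.8]]])))  # purple -> neutral grey
```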
  20. Honestly, something like a used RX 570 also offers very good value for money (though, since it's an AMD offering, it will have no CUDA cores). A bit more expensive (as it's a general-purpose GPU), but quite a bit more powerful for compute purposes. There are mining versions of these around as well.
  21. Wow, that's quite the difference! It seems your CPU is very much bottlenecking your system here. If it truly is 10 years old, then that would make it a 1st-generation i5 on an LGA1156 socket. Depending on your system, an ultra-cheap upgrade would be an X3440 (or X3450, X3460 or X3470) and overclocking it (if the motherboard allows). This should give you 4 cores and 8 threads at higher clock speeds.
  22. You will probably find the difference is actually even more dramatic when comparing the 1.7 non-GPU and GPU versions like-for-like (both versions are included in the download). The 1.7 version, particularly in modules like Decon, uses defaults and settings that are more strenuous, precisely because we now have more grunt at our disposal. For example, the default number of iterations has been bumped up, while Tracking propagation is now fixed to the "During Regularization" setting (i.e. this setting has been removed and is always "on" in 1.7), which back- and forward-propagates deconvolution's ef…
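     As a generic illustration of what a higher iteration count means in deconvolution, here is plain Richardson-Lucy via scikit-image (this is not ST's Tracking-aware deconvolution; the PSF and image are synthetic):

```python
# More Richardson-Lucy iterations reverse more of the blur, at the
# cost of more compute (and, on noisy data, amplified noise).
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(2)
psf = np.ones((5, 5)) / 25.0                      # toy box-blur PSF
image = rng.random((64, 64))
blurred = convolve2d(image, psf, mode="same", boundary="symm")

for iters in (5, 30):                             # fewer vs more iterations
    restored = richardson_lucy(blurred, psf, iters)
    print(iters, "iterations, mean abs error:",
          float(np.abs(restored - image).mean()))
```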
  23. Great to hear you were able to produce a good image so quickly! Most modules that require a star mask now show a prompt that can generate one for you. Did this not work well/reliably for you? This is (very) likely an artifact of your monitoring application. Your GPU is definitely 100% loaded when it is working on your dataset. As opposed to video rendering or gaming, GPU usage in image processing tends to happen in short, intense bursts; during most routines, the CPU is still being used for a lot of things that GPUs are really bad at. Only tasks/stages that can be…
  24. Actually, most applications do not make use of multiple cards in your system (ST doesn't). That's because farming out different tasks to multiple cards is a headache and can cause significant overhead in itself (all cards need to receive their own copy of the "problem" to work on from the CPU). It may be worth investigating for some specific problems/algorithms, but generally, things don't scale that well across different cards.