
LRGB filters



44 minutes ago, Spongey said:

Considering that my main goal with imaging, ultimately, is to make pretty pictures, I have decided to go with the Baaders.

Their reputation within the community, together with the SNR gain caused by the extended gaps present in the Antlia bandpasses, were the deciding factors for me.

While the colour of my photos might not be exactly as one would see them with the naked eye, I'm sure they will be good enough for me :)

I don't think you made an error, and prompted by this discussion I set myself the goal of actually calculating the potential color gamut for this combination, among others.

It remains to be seen how inferior, if at all, this combination is compared to OSC cameras.

Hopefully, I'll finish that soon and post results ...


Here are some results. I'll have to do multiple posts as there is a lot of data ... :D

The results are quite unexpected - or maybe it is just a matter of interpretation. I think I need to find a better way to visualize the differences between cameras.

Let me first do some intro:

I found curves for a bunch of things: ASI1600, ASI185, ASI178, Baader filters, ZWO glass covers (both UV/IR and AR) and combined them into meaningful profiles. For the ASI1600 I used Buil's measurement rather than the published data (although I created both profiles), since I've seen his measurement of the Baader filters and they match Baader's published curves - so I have no reason to suspect his ASI1600 measurement is wrong. However, I do have reason to doubt the published sensor curves - there is no way of verifying that information. For that reason, these should be considered theoretical rather than actual results for each of these cameras. If I manage to confirm the results by different means, then we will be in a position to treat them as actual.

Let me start with QE graphs of the "contestants".

ASI178mc + ZWO UV/IR coated window glass:

[image: QE graph]

ASI185 + ZWO AR coated window + Baader UV/IR cut filter (this is how I'm going to measure the real camera later, with different methods, for comparison):

[image: QE graph]

ASI1600 (Buil) + ZWO AR coated window + Baader R, G and B interference CCD filters:

[image: QE graph]

I then created a Python script that generates 1,000,000 random spectra in the 360-830nm range and calculates both XYZ (using the standard color matching functions for the 2° observer) and the raw response of each camera.
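In outline, the script does something like this (a minimal sketch rather than the actual code - the file names and array shapes are my assumptions):

```python
import numpy as np

# Assumed inputs (my file names, not the actual script's): curves sampled
# on the same 5nm grid over 360-830nm, 94 bins:
#   cmf: (94, 3) CIE 1931 2-degree color matching functions (xbar, ybar, zbar)
#   cam: (94, 3) camera QE through each of the R, G and B filters
cmf = np.loadtxt("cmf_2deg_5nm.csv", delimiter=",")
cam = np.loadtxt("camera_rgb_5nm.csv", delimiter=",")

rng = np.random.default_rng()
spectra = rng.uniform(0.0, 1.0, size=(1_000_000, cmf.shape[0]))

xyz = spectra @ cmf  # XYZ tristimulus values, one row per spectrum
raw = spectra @ cam  # camera raw R, G, B values, one row per spectrum

# Least squares solve for the 3x3 matrix M with raw @ M ~ xyz;
# M.T is then the conventional raw -> XYZ matrix (XYZ = M.T @ raw_vector).
M, *_ = np.linalg.lstsq(raw, xyz, rcond=None)
print(M.T)
```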

Then I used the least squares method to solve for the transform matrix. Just to be sure the solution is stable enough (we use a random set of spectra each time, and since we are minimizing error there will be some variance in the resulting matrices), I ran it 3 times on the ASI178 model. Here are the results:

First:
0.710320557517913,0.38596304011600513,-0.18196468723815176
0.1691364171282709,1.0066983090786776,-0.5077560059357316
0.05762047461274392,-0.16837359062685783,1.4747700892512243

Second:
0.7106530004897218,0.3857756561317365,-0.18198339828143478
0.16918327451701587,1.006894913610129,-0.5081409335605773
0.05718726281193481,-0.1686434711209549,1.4756586916969725

Third:
0.7103502553581972,0.38568029007752436,-0.1815377132161584
0.16911938483565908,1.0066373572193015,-0.5076413483013693
0.057729996373306496,-0.1681957338241362,1.47435009985627

These differ only in the third decimal place, so we can say they are stable enough.

Resulting Raw to XYZ transform matrices:

ASI178:

0.710320557517913,0.38596304011600513,-0.18196468723815176
0.1691364171282709,1.0066983090786776,-0.5077560059357316
0.05762047461274392,-0.16837359062685783,1.4747700892512243

ASI185:

0.6687070598566506,0.43457328602775613,-0.34725651432413374
0.1774728749070924,1.055721622412892,-0.7360695789108003
0.1847243138023235,-0.26623470123526377,1.7405880294984606

ASI1600:

0.6135793545429793,0.3021158351418402,0.186178032627276
0.2678263945071415,0.7665371985113655,0.075145287086023
-0.0616657224686888,-0.16815021009453934,0.9726970661590604

(Sorry about the number of digits - it does not make much sense to include that many since the precision is lower, but I just copied and pasted.)

Next we will examine the gamut of each camera.


18 minutes ago, Martin Meredith said:

Looking forward to seeing the rest of the results. (Love the legends 🙂)

Quick question: what underlying distribution are you assuming for your random spectra? Uniform?

Martin

 

 

Yep, just uniform 0-1 over the 360-830nm range. In fact, both are sampled at 5nm intervals (the spectral data and the matching functions).

I'm aware that this type of matrix generation might not be the best, so I'm also going to present findings on two different methods later on and propose a third - but I'm happy to take suggestions on the best data set.

The second method is the pure spectral locus (I just use pure spectral lines as spectra), and the third is a "weighted" method - we assign three weights to (see the sketch after this list):

- random spectra

- Planckian locus spectra (black body curves in a certain range, say 2000-50000K)

- spectral locus
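
For the Planckian part, the black body curves can be generated directly from Planck's law - a minimal sketch (normalized, since only the spectral shape matters for chromaticity):

```python
import numpy as np

# Planckian locus spectra on the same 360-830nm, 5nm grid (94 bins).
h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants
wl = np.arange(360, 830, 5) * 1e-9        # wavelengths in meters

def planck(T):
    """Black body spectral radiance at temperature T, normalized to peak 1."""
    B = (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * k * T))
    return B / B.max()

planckian_set = np.array([planck(T) for T in np.arange(2000, 50001, 500)])
```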

If this method is to be used, we really should somehow model the colors that we are likely to encounter in astrophotography.

Also, least squares is the easiest to implement but may not be the best metric - delta E might be better. However, I'm not sure I know how to generate a matrix that minimizes delta E for a given set of spectra and XYZ and raw profiles.

If anyone knows an optimization algorithm that I can use to transform a set of vectors into a different set of vectors using a custom "distance" function - that would be excellent.


If you're using Python I imagine you're using SciPy? There are various functions there, but this one allows different loss functions (so not exactly custom distance functions, but at least some variations on least squares):

 https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html#scipy.optimize.least_squares
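
For instance, a custom residual can be plugged in like this (a rough sketch - raw and xyz are assumed to be the (N, 3) arrays from your earlier script):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(m_flat, raw, xyz):
    M = m_flat.reshape(3, 3)
    # swap this for a deltaE-style residual to get a perceptual metric
    return (raw @ M.T - xyz).ravel()

m0 = np.eye(3).ravel()  # identity as the starting guess
fit = least_squares(residuals, m0, args=(raw, xyz), loss="soft_l1")
M = fit.x.reshape(3, 3)
```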

Have you checked out the various libraries of stellar spectra to use instead of a uniform distribution (these could be weighted by frequency of occurrence)? e.g. http://www.phys.unm.edu/~tw/spectra/mk.html

No doubt VizieR has some to download.


Now, onto gamut.

First, let's explain what these (very strange-looking) gamut graphs mean. Each graph contains 3 things:

1. Spectral locus in XYZ color space

This is the set of points where all pure spectral lines land in the xy chromaticity diagram. By the way, xy chromaticity is a 2D space that deals with color exclusively and not brightness (x = X/(X+Y+Z), y = Y/(X+Y+Z)). It represents all colors that humans can see in terms of only two numbers - x and y - without any particular brightness assigned to them.

2. sRGB gamut - the subset of all colors that we can see which can be displayed on an sRGB-calibrated computer or phone screen. Not all colors that we can see can be displayed on computer screens - simply because computer screens can't show the most saturated colors.

3. Gamut of the raw-to-XYZ transformed spectral locus

It is very important to understand what this represents - it is the set of color values that the camera can produce. It says nothing about whether these colors are accurate (we need another metric for that) - it is just the set of numbers that can result from the camera taking an image.

If a color falls outside the camera's gamut, the camera will never produce that color in an image.

If a color falls inside the camera's gamut, the camera can produce an image containing that color - however, there is no guarantee (from this graph) that this particular color will be accurate.

Another note: colors that fall outside the sRGB gamut will certainly be displayed incorrectly, since an sRGB monitor can't display the original color - the closest color will be used instead (what counts as the "closest" color is a science in itself).

Quick reminder - if you have not seen an xy chromaticity diagram, here is one:

[image: xy chromaticity diagram]

This is a good diagram. First, the horseshoe that contains all colors is the spectral locus. The bright triangle is the sRGB gamut - those are accurate colors. The rest of the colors are darker, to represent the fact that the color you are seeing in that place is not the true color that belongs there - what you see is your computer screen's "best effort" to show the actual color in that spot.

The greens at the top of the horseshoe are simply more saturated greens than the display can show.

Note that all colors that can be made with a certain combination of lights lie inside the shape made by connecting the dots - the R, G and B primaries of the sRGB color space form a triangle, and all colors that can be generated from those three primaries lie inside that triangle. Similarly, all colors lie inside the horseshoe, since all colors are a product of pure spectral colors in some ratio.

Now onto gamuts ...

ASI178:

[image: ASI178 gamut plot]

We can see a few things from this graph:

- most blues fall inside the gamut of this camera

- the most saturated greens actually fall outside the gamut of this camera (btw, when I say camera, I actually mean camera + a particular raw to XYZ matrix - a different matrix will have a slightly different gamut)

- the camera also can't produce pure spectral reds (this really means that if the camera is imaging Ha, for example, it will produce a color that is not a pure, deeply saturated red, but a somewhat unsaturated red instead)

- the camera produces most of the sRGB gamut - only the pure red of sRGB can't be produced. Whatever image you record with this camera, it will never contain the color 1,0,0 (in sRGB space)

- IR colors will produce the weirdest yellowish-orange hues :D

ASI185:

This one is even weirder :D I suspect that this is because the camera has a strong red response.

[image: ASI185 gamut plot]

Pretty much similar to the ASI178 above, except:

- the gamut on the red side is even smaller

- pure IR wavelengths will produce colors ranging from yellow to bluish

- even more of the sRGB gamut is clipped close to the red vertex

ASI1600 - this one is the biggest surprise to me:

[image: ASI1600 gamut plot]

It indeed has the smallest gamut of the three - it clips greens, reds and blues - but interestingly enough, it covers the whole sRGB gamut (note that we look at the convex area enclosing all data points of the ASI1600 spectral locus).

Another thing that can be seen is that the pure spectral colors are spaced most unevenly on the "curve" (which is not really a curve - it more resembles a triangle) - and because of the different distances between points, we can see "the banding" in rainbow colors I was talking about.

An important thing to remember is that these are only matrices generated with uniform random spectra - which might not be, and probably is not, the best data set. It also does not use the best metric with respect to human vision - it should minimize distance in a perceptually uniform space rather than in XYZ space, so that converted colors are most "like" the ones they should represent.

In the end, let's see how the choice of matrix (or rather, the choice of spectra used for matrix generation) changes the gamut:

[image: gamut comparison]

Here we have the ASI178 + random matrix gamut vs the ASI178 + spectral matrix gamut.

The second matrix was generated with only 94 different spectra - a single spectral line in the 360-830nm range, in 5nm steps (see the sketch below). The second, spectral matrix gamut is maybe slightly larger, but it also loses some coverage around 400nm in the blue part of the spectrum. Reds are somewhat recovered, and I think the overall error is smaller - respective locus points seem to be closer to the original XYZ spectral locus.
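For reference, generating that spectral data set is trivial - a sketch reusing the cmf and cam arrays from the earlier script:

```python
# The "spectral" data set: one pure spectral line per sample, i.e. a unit
# spike in each of the 94 wavelength bins - just an identity matrix.
spectra_lines = np.eye(94)
xyz_lines = spectra_lines @ cmf   # reduces to the CMF samples themselves
raw_lines = spectra_lines @ cam
M_spectral, *_ = np.linalg.lstsq(raw_lines, xyz_lines, rcond=None)
```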

Hope you found this interesting - the next step will be to optimize the matrix transform in some way, by selecting a good spectral data set for generation, and maybe by changing the algorithm for generating the matrix.

Another interesting topic is how accurate the resulting colors will be - over the whole space, and particularly in sRGB space. That will answer the question: how different would an image on a computer screen be, taken with a particular camera vs an ideal camera?

We want to know whether using a mono camera with filters is somehow inferior in the accuracy of the produced image when viewed on a computer screen. Maybe do the same with a wider gamut space, since computer screens are likely to go wider in gamut in the future, and unlikely to go smaller.


30 minutes ago, Martin Meredith said:

Have you checked out various libraries of stellar spectra to use instead of a uniform distribution (these could be weighted by frequency of occurrence) e.g. http://www.phys.unm.edu/~tw/spectra/mk.html

I'm having trouble accessing most of those links (at least 3-4 that I tried seem not to work any more) - but you are right, I have considered calculating actual spectral xy colors for most stars, to be used for further color calibration / adjustment.

I found this (working) catalog:

https://www.eso.org/sci/facilities/paranal/decommissioned/isaac/tools/lib.html

I guess weights should be assigned by relative luminosity and abundance.

We should really include single spectral lines for astrophotography calibration, since many of the objects recorded are emission type and shine in a few dominant spectral lines (like Ha, SII, OIII, Hb, NII, ....).

Regular daytime calibration of DSLRs uses only a handful of colors (a color checker passport illuminated with D50), and I actually have curves for them. @sharkmelley pointed these out to me in another thread where we discussed something similar, but unfortunately we could not reach agreement in our understanding of that particular topic.

https://www.babelcolor.com/index_htm_files/ColorChecker_RGB_and_spectra.zip

 


This is a good modern spectral library: MILES. It, together with selected CALSPEC (HST) and Pickles (calculated) spectra and others, is in Buil's ISIS database here: ISIS - some in FITS and some in .dat format.

Regards Andrew

PS The MILES spectra in the link have been dereddened; those in ISIS have not, and are how we would see them.


Just a small update.

I've found an old color calibration of my ASI178 - one that I know is a bit wrong.

I downloaded a color checker passport image onto my mobile phone and used it to do color calibration. I suspect that this is wrong, since my phone gives a somewhat bluish image; in the DSLR measurement (assuming my DSLR is properly calibrated and produces good XYZ values) it gives a white point of about 7000K, which is higher than the 6500K expected for sRGB.

In any case, I wanted to see what the gamut would look like with this transform versus the derived transform:

[image: gamut comparison]

The red line represents the gamut of the artificial 178 profile + measured transform matrix, and the yellow line represents the artificial 178 profile + spectrally derived transform matrix.

The measured matrix does not look bad at all. I think I'll try to derive the matrix from two different sets of data:

a Planckian set in some range of temperatures, like 2000K - 50000K, and also the distributions I linked above - the spectra of the color checker + D50.


Great posts Vlaiv!

I have a question: how much (if at all) does the QE response of the camera affect the resultant gamut output?

I would assume none, as a different scaling transform could simply be applied based on the response of the sensor, and this would only affect the brightness of the colours, not their gamut.

I appreciate the AR window on the camera affects the response, so is the resultant gamut output only a function of the filters used and the AR window?


1 minute ago, Spongey said:

I have a question: how much (if at all) does the QE response of the camera affect the resultant gamut output?

It depends on the actual QE.

Imagine, for example, that you have a camera that only captures light in the 500-600nm range and has 0% QE otherwise - no filtering can make a camera sensitive where it's not sensitive. In that sense, QE can affect the gamut greatly.

3 minutes ago, Spongey said:

I would assume none, as a different scaling transform could simply be applied based on the response of the sensor, and this would only affect the intensities of the colours, not their gamut.

Gamut is a tricky thing to understand. It's not about whether a color can be registered - it's about whether it can be "resolved".

A gamut smaller than the full gamut does not mean the camera will not collect photons and register some values - a camera always collects some photons and registers some values. The question is: can you distinguish them as different colors once the camera has registered them?

A mono sensor without filters has the narrowest possible gamut - it is a single point on the xy chromaticity diagram (it does not matter which point - that depends on your selected matrix). The whole image will be monochromatic - only shades of a single color, whichever color you choose that to be (you can have shades of red, or shades of green, or shades of grey - all of these are single points in the xy diagram).

 


Btw, I just realized I've made a large error in the above calculations. I was not paying attention to units.

I'm not sure if it is going to impact the results, but I know that it makes the uniform random distribution non-uniform, or "skewed".

XYZ coordinate space is defined as an integration of spectral radiance, which is energy (per unit time), rather than photon count, which is what the camera captures.

I used spectral radiance for both, when I should have used photon count for the camera raw values and radiance for XYZ.

The difference is, of course, the photon energy at a given wavelength, given by the expression:

E = hc/λ

In other words, I must multiply the camera data by lambda. I will now check if it makes a difference and how much (it changes the resulting matrix, but I'm not sure about the gamut).
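
In terms of the earlier sketch, the fix is just (my illustration):

```python
# Convert the camera model from energy response to photon-count response.
# E = h*c/lambda, so photon count ~ energy * lambda (up to a constant scale).
wl_nm = np.arange(360, 830, 5).astype(float)  # same 5nm grid
cam_photons = cam * wl_nm[:, None]            # scale each filter column by lambda
```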


3 minutes ago, vlaiv said:

Imagine for example that you have camera that only captures light in 500-600nm range and has QE of 0% otherwise - no filtering can make camera sensitive where it's not sensitive. In that sense - it can affect gamut greatly.

That makes sense, of course. I was referring more to the sensors we typically use in astronomy (that would generally have <40% QE between 400-700nm).

I think I understand your other point. My question, I suppose, is that given a particular optical train, e.g. Baader RGB filters and ZWO AR window, how will changing the sensor behind it (e.g. mono ASI1600 vs. mono ASI2600) affect the resulting colour gamut, if at all? 


Just now, Spongey said:

I think I understand your other point. My question, I suppose, is that given a particular optical train, e.g. Baader RGB filters and ZWO AR window, how will changing the sensor behind it (e.g. mono ASI1600 vs. mono ASI2600) affect the resulting colour gamut, if at all? 

I think there will be only a slight difference. We can "test" that as well, as I have two different profiles for the ASI1600 - not sure which one is accurate, if either, but one is published and the other is measured.

I'll run the matrix generation on both (as soon as I fix the photon count mess-up) and we can compare results.


I'm a great fan of the Baader LRGB set.

I take Vlaiv's earlier point about achieving the teal blue of OIII in the RGB set but, in reality, aren't you going to combine dedicated narrowband OIII with your LRGB image, just as you are likely to want to combine Ha? In that case the OIII will completely overwhelm the teal blues of the RGB layer, and the colour they end up contributing will be decided by the way you include the OIII. There is a simple way to add OIII to (L)RGB while retaining full control of its green-blue balance (using Ps or another layers program).

-  Take a copy of the (L)RGB and apply the OIII to green in blend mode lighten, at full strength, and save as "OIII to green". Now apply OIII to blue in a second copy of the (L)RGB and save as "OIII to blue".

-  Stack the three images you now have as below in Layers:

OIII to blue

OIII to green

(L)RGB

Now you can use the opacity sliders to decide how to weight the top two layers relative to each other, and then how fully to apply them to the original. This will make the teal blues available to you and, if you have the original open as a guide, you can replicate its colour fairly closely in the OIII-enhanced version. In reality I'd not be using the (L)RGB for this but the HaLRGB, but I'm sticking to the teal blue issue here.

An example of HaOIIILRGB done this way:

[image: HaOIIILRGB example]

Olly

 


Well, yes, this changes things.

Here is the spectral-based matrix for the ASI178 with the old and new code:

[image: old vs new matrix comparison]

To clarify again what I did here: under the old code I assumed that XYZ space uses energy / power density over the spectrum - which is right - and that the camera also records energy / power over the spectrum - which is wrong, since the camera records photon count.

In order to convert from energy to photon count, one just needs to divide the total energy at a certain wavelength by the energy of a single photon at that wavelength.

4 hours ago, Spongey said:

I think I understand your other point. My question, I suppose, is that given a particular optical train, e.g. Baader RGB filters and ZWO AR window, how will changing the sensor behind it (e.g. mono ASI1600 vs. mono ASI2600) affect the resulting colour gamut, if at all? 

Now that we have the corrected method, we can compare the gamuts of the published and measured ASI1600 profiles - which vary slightly.

[image: gamut comparison]

Well, surprise, surprise - for the matrix generated from single-line spectral samples, the gamut is absolutely the same (the graphs overlap, so only one is seen). That sort of makes sense: QE determines only the intensity of a single-line spectrum, not relative ratios, since there is only a single line. The intensity of light does not change its color, so both cameras record the same values in terms of "color", just at different intensities.

If we make the matrix out of spectra containing more than one line, things should be different.

[image: gamut comparison]

Indeed - there is a subtle difference between the two.

I'm now going to try color conversion matrices created with black body radiation curves, and combined ones (spectral lines + black body curves).


Going back to the original question: the RGB filter sets with a gap at the sodium wavelength would have caused an interesting problem for imaging comet Neowise in 2020. If you remember, Neowise had orange in its tail caused by sodium. A "gapped" RGB filter set would actually record nothing for this part of Neowise's tail. In that sense the gamut of a sensor makes sense, i.e. the gamut is the range of colours that can be recorded. Some colours, such as the sodium wavelength in the above example, cannot be recorded at all.

For RGB filter sets (or OSC cameras) with overlapping transmission bands, all colours can be recorded, so the filter set is full gamut in that sense. However, the problem is that many different colours will be recorded with the same RGB ratios - a good example is imaging a rainbow with sharp-cutoff RGB filters. The result will be three distinct bands of red, green and blue, as @vlaiv mentions earlier. The problem here is the inability to distinguish between distinct colours, i.e. many different colours give exactly the same RGB values. This is the concept of "metameric failure", i.e. the inability of the RGB filters to distinguish between colours that the human eye sees as different.

Mark


I've been doing some more work on this and have come to realize that we really need a different method of assessing the effectiveness of a sensor / sensor + filters combination.

Gamut / reproduction range is really not the metric we are after - at least I think so.

I think that we need a way to assess how precise / correct the reproduced colors are.

Following advice from @Martin Meredith, I checked out the general solver in SciPy and added a feature to derive the raw to XYZ transform matrix where distance is calculated not in XYZ space but as deltaE*(76) (the simple version from 1976, for now).
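
For reference, deltaE*(76) is just the Euclidean distance in CIE Lab space - a minimal sketch (the D65 reference white here is my assumption):

```python
import numpy as np

WHITE = np.array([0.9505, 1.0, 1.089])  # assumed D65 reference white

def xyz_to_lab(xyz):
    """Standard XYZ -> CIE Lab conversion."""
    t = xyz / WHITE
    d = 6.0 / 29.0
    f = np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e76(xyz1, xyz2):
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2), axis=-1)
```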

The resulting matrices are quite different - here are spectral loci transformed with each:

[image: transformed spectral loci]

Blue is the regular XYZ plot of single-line spectra in the 360-830nm range. The red graph is raw to XYZ for the ASI178 derived with the deltaE approach, while the yellow graph is the same camera derived with the regular norm - distance in XYZ.

For matrix derivation, a mixed data set is used in both cases: single-line spectra in the 360-830nm range, every 5nm, plus black body curves from 2000K to 50000K, every ~510K - that gives an equal number of each (94), for a total of 188 samples for the matrix solve. Building such a set is straightforward (see the sketch below).
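
A sketch, reusing planck() from the earlier post:

```python
# The mixed 188-sample set: 94 single-line spectra plus 94 black body curves.
lines = np.eye(94)
bodies = np.array([planck(T) for T in np.linspace(2000, 50000, 94)])
mixed = np.vstack([lines, bodies])
```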

I think the next step should be trying to assess how accurate the produced colors are - that is more important. I think I'll start again with random spectra, calculating XYZ, raw-to-XYZ and deltaE for each spectrum, and then "binning" the deltaE values over the xy diagram - like the average deltaE in each 0.01x0.01 square (to have 10,000 bins), as sketched below.
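
The binning itself could look like this (a sketch - xy and de stand for assumed arrays of chromaticity coordinates and deltaE values, one per spectrum):

```python
# Average deltaE per 0.01 x 0.01 square of the xy diagram.
bins = 100
ix = np.clip((xy[:, 0] * bins).astype(int), 0, bins - 1)
iy = np.clip((xy[:, 1] * bins).astype(int), 0, bins - 1)

sums = np.zeros((bins, bins))
counts = np.zeros((bins, bins))
np.add.at(sums, (iy, ix), de)   # unbuffered accumulation per bin
np.add.at(counts, (iy, ix), 1)
mean_de = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
```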


Small update - this is bad :D

[image: deltaE over the xy diagram]

This is an xy plot of deltaE from 10,000 random spectra. The problem is that random spectra are not quite random in terms of color - they take up a very small region of the xy diagram.

I guess the next step is a way of generating a spectrum from an x,y chromaticity color. That is not a well-defined problem (many spectra end up having the same x,y coordinates), but I think I'll think of something clever.


Well, that was an easy fix :D

It turns out that I just needed to think a bit about why the above was happening. XYZ space was designed to be an "equi-energy" color space - meaning a spectrum that is "flat", or has equal energy across wavelengths, should land on 1,1,1 in XYZ (equal X, Y and Z values), or in the xy chromaticity diagram at coordinates 1/3, 1/3.

Random sampling with a uniform distribution tends to produce roughly equal-energy spectra - the finer the wavelength scale, the more equal-energy the spectrum is.

We can deal with this in two ways: go coarser (but that also means smoother), or use a different distribution. So I thought a bit about what sort of distribution we want. I reasoned like this: single spectral lines form the locus, and 1/3, 1/3 sort of forms the center of that locus - and we need spectra that are spread from the center to the edges.

In the center we have the uniform distribution and on the edges we have single lines, so I said - why not raise the 0-1 uniform distribution to some power? That will tend to create more single "spikes" and less uniformity, as smaller numbers will be preferred over large ones - but that is "scaled" according to the largest sample (sort of). In any case, here is 0-1 raised to the power of 16:

[image: deltaE over the xy diagram, power-16 spectra]

Much better looking. The color here represents deltaE, but I need to properly "calibrate" it. It does not represent the density of samples! It would be good to have an image representing the density of samples as well, just to make sure it is anywhere near uniform over the xy diagram.
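
In code, this amounts to a single change from the first script (sketch):

```python
# "Spiky" random spectra: uniform samples raised to a power (16 here) so
# most bins end up near zero and a few dominate.
spectra_spiky = rng.uniform(0.0, 1.0, size=(100_000, 94)) ** 16
```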

 


2 hours ago, sharkmelley said:

I like your idea of plotting deltaE on the xy chromaticity diagram. 

Mark

[image: deltaE and sample-density plots]

I now need to dial in the power value to get a good uniform distribution. I added a second graph to show the number of samples rather than the average deltaE for each square. This is with power 96 and 100,000 samples. It is starting to look good; at this power the samples bunch up slightly at the blue and red parts of the spectrum for some reason, but I'll dial that in when I get the time.

I was thinking of doing 3 things to assess how accurate the transform is:

- spectrum image - expected vs produced

- Planckian locus image - expected vs produced

- above deltaE diagram

I guess those three should give us a good idea of how well a camera performs with a given matrix in terms of color accuracy. I still need to see what sort of differences in the matrix there will be with this power-spectrum approach vs the other methods, as I think this will be the best for matrix generation.

In the end, I'll have to see how far real sensors are from these published QE curves, using a bit of spectrometry and calibration with a DSLR as reference.


8 months later...

Does anybody have experience with Antlia LRGB PRO filters in a doublet refractor with some color aberration? Mine is a TS 102ED f/7, and I would like to know if these filters would reduce the color aberration in blue a bit. The Astronomik L3 luminance filter has a higher blue cutoff, but also a higher price (the whole LRGB set, I mean).

 

Regards 

