Hi, welcome, everyone. I'm Michael Royer with Pacific Northwest National Laboratory, and I'd like to welcome you to today's webinar, Understanding and Applying TM-30-15, the IES Method for Evaluating Light Source Color Rendition, brought to you by the U.S. Department of Energy Solid-State Lighting Program and the Illuminating Engineering Society.

So today, I will be joined by Kevin Houser. But first, I'm Michael Royer, a lighting engineer here at Pacific Northwest National Laboratory. I've been here for just over four years, focusing on technology development issues for the DOE SSL program. I'm also a member of the IES Color Committee, and was the chair of the IES Color Metrics Task Group that was responsible for developing TM-30.

So my co-presenter Kevin Houser will be on in a bit. He's professor of architectural engineering at Penn State, and editor in chief of LEUKOS. In the past, he was a founding faculty member of the Architectural Engineering Program at the University of Nebraska, and manager of lighting education at Philips Lighting. He was formerly on the boards of directors of the IES, IALD, and the Nuckolls Fund for Lighting Education.

He is a current board member of Lux Pacifica, and a member of the Editorial Advisory Board of Architectural Lighting. He has published 40 papers in refereed journals, and more than 70 other publications in conference proceedings, trade magazines, and other outlets. He's one of four editors of the 10th Edition of the IES Lighting Handbook and he is a fellow of the IES.

So without further ado, let's begin. Hopefully you're all viewing my slides today, starting with the title slide, Understanding and Applying TM-30. So what we're really talking about today is color rendition. If we look at the CIE's definition, it's the effect of an illuminant on the color appearance of objects by conscious or subconscious comparison with their color appearance under a reference illuminant.

If we want to consider that more basically, it's essentially how lighting interacts with the objects in the space to produce the resulting color appearance. So really, then, understanding color rendition is knowing how to predict it, communicate it, and realize it.

Where TM-30 comes into that equation is that it enables us to better predict it and communicate it, so that manufacturers can do a better job of realizing their products, and specifiers can do a better job of realizing their designs. So just a brief outline of today's topics. We have two blocks, as I mentioned, with questions in between. First we'll cover the logistics of how we developed TM-30, an overview of the method itself, and some comparisons with the existing method, the CIE CRI.

Following a brief period for questions, we'll look at a live demonstration of an Excel tool that we developed in conjunction with TM-30. And then we'll go to the final section, discussing adoption considerations for various segments of the lighting industry, with a final session for questions at the end.

So beginning now with part one, TM-30: How It Came To Be. I came up with this little equation that summarizes all the parts that went into making TM-30 what it is. It begins with the limitations of an existing metric, CRI, and an acknowledgement of the need for an alternative. Then there's a lot of research and scientific advancement over the past 50 years, and especially over the past 10 or 15 years.

And finally, one of the most important ingredients that separates TM-30 from a lot of the other proposals that you might be familiar with, is a consensus process that this group worked through to develop TM-30 within the IES. So we'll step through each of those components a little bit more now.

So if we look at the limitations of the existing metric, this slide is showing a comparison of CRI and TM-30. On the left, you can see the main components of CRI. It's based on the CIE 1964 U*V*W* color space, whereas in TM-30 we've upgraded that to modern color science. A big amount of the work was moving from eight color samples to 99 color samples, which have much more desirable properties that give us a much better prediction of color rendition.

Those first two topics will largely be covered in a separate webinar next week. What we're mostly going to focus on today is TM-30 itself. And the limitation I'm going to discuss is that CRI is really a fidelity metric only. So it only tells us a little bit about color rendering: the accurate rendition of colors as they appear under a familiar reference illuminant.

But TM-30 provides a whole lot more. It complements that fidelity metric with a gamut measure, graphical representations, and a lot more detailed information that really helps all users of the metric achieve their goals more readily. Also at the bottom there are some other minor updates that we really won't be discussing too much today.

So let's take a good look again at that fidelity-metric-only limitation, and why that's a limitation for CRI. This is kind of better as a live demonstration, but we're doing this as a webinar here, so I'll show you some images on the screen. Consider this the original, or the perfect, or our target, for what we're trying to light in this space.

If I'm trying to choose different light sources, say I move to one here that has a so-called CRI of 80. Again, this is theoretical; I've just modified these images digitally. This CRI 80 source is leading to a desaturation in the space. However, I could've also chosen another lamp with a CRI of 80 that was saturating the colors in the space. So these two theoretical sources have the same rating according to CRI, but they're producing a very different visual effect. And there's no way to know this easily using the CRI system, which only considers color fidelity, or the magnitude of the difference from the original image.

So if we look at this in a slightly different way, just sort of a graphical representation here. And we consider perfect fidelity, our target in this case, at the center of these axes. Well, any deviation from that, I could be increasing the saturation, decreasing the saturation, or shifting the hue in one direction or another.

And like I showed on the previous slide, two sources with CRI of 80 might be moving in opposite directions, but they both have the same score. And actually, there's an entire circle here of sources with a theoretical CRI of 80 in this case, that all have constant fidelity. But there's no way using fidelity alone to figure out my exact position on this plot. So really, what it comes down to is that one metric is not enough to convey this information.

So if one metric is not enough, how many are needed? If we look at different attributes of color rendition, we can consider color fidelity as one of the main ones, along with color discrimination and color preference. Conveniently, both color discrimination and color preference tend to be related to the degree of saturation, which can be quantified with gamut.

Now, we're going to talk a lot about gamut today, and there's sometimes confusion. A lot of times you'll think about gamut in the display context, where gamut refers to the range of colors that can be created based on the primaries. But that's dealing with a light source. In this case, we're dealing with objects, and so gamut is referring to the overall degree of saturation. It's not that a light source can't make certain objects appear; it doesn't make anything disappear. If one did, I wish I knew that light source. That'd be kind of crazy.

So moving on now to the second part of this equation, the Acknowledgement of Need for an Alternative. For this section, I'm going to present a little timeline here. The first three entries on this timeline are the history of CRI. It was formally adopted in 1965, but the research actually dated back to 1937. If we think a little bit about that time period, essentially fluorescent lamps were the main competitor to incandescents for interior lighting, and computing power was much less than it is today.

So it was a very different situation in which this was developed. It's not like the people who were developing it were at fault and made mistakes, but there's been a lot of advancement over the past 50 years. In 1974, there was a major revision of CRI. Even in those first nine years, there was some recognition of the limitations of the initial method. And since 1974, CRI has remained approximately the same.

There was another document released in 1995, the last revision of CRI, but that included only some typographical changes. Nothing major there. And you can see that even before 1995, in 1991, the CIE formed Technical Committee 1-33 on color rendering, looking at forming a new metric. That group closed in 1999 with no agreement reached.

And I'll read a quote here from a later document, not from TC1-33 itself, but it says, "This committee was not successful in its purposes, mainly due to the disagreement between those who advocated including the advances of science, and those who recommended that industry did not want to change."

I think this is kind of interesting, because in a way, we've been in this situation for 24 years, and are still a little bit in this situation, where there are many people out there who would say it's too much work to change from CRI. Well, I hope that today, as you learn about TM-30, you realize that the advances in science really do provide a highly substantial advantage over the previous methods, and it's really worth whatever effort it takes to make that upgrade.

So the next CIE technical committee, Color Rendering of White LED Light Sources, was not intending to develop a new metric, but was examining the performance of CRI. And it recommended that a new metric be developed. A quote from that group: "The committee recommends the development of a new color rendering index. This index shall not replace the current CIE color rendering index immediately. The usage of the new index or indices should provide information supplementary to the current CIE CRI, and replacement of CRI will be considered after successful integration of the new index."

I think this is important, because it somewhat summarizes the state that we're in right now. So we actually have, now, new approved indices that supplement CRI in a way. And someday, eventually, hopefully, they'll reach the point of industry consensus and can actually replace CRI.

So the next group, TC1-69, Color Rendition by White Light Sources, had the goal of developing a single-number replacement for CIE CRI, following on from TC1-62. However, no agreement was reached, and that group was then split into two groups, TC1-90 and TC1-91, which focus respectively on a color fidelity index and other new methods, mostly revolving around color preference.

Now, those two groups have longer timelines, and in 2013, the IES decided to form the IES Color Metrics Task Group, which ultimately led to the development of TM-30. This was seen as a way to make some immediate progress and some improvements. And ultimately, hopefully, this work will feed into the CIE groups, which has already somewhat taken place. But again, the timelines and processes there often take a bit longer.

So the third part of this equation, Research and Scientific Advancement. Just one slide here summarizing some of the key advances that went into TM-30. Now, TM-30 isn't some revolutionary new idea. We didn't discard all the research that's been going on, especially in the past 10 years, with lots of new proposals.

But it really synthesized and compiled all this information into one comprehensive, workable system. The major components are the two-metric concept, going back even to just after the adoption of CRI, with proposals for things other than color fidelity, such as the flattery index or the color preference index.

Moving on to things like CRI plus the gamut area index, proposed by the LRC, and NIST employing both a fidelity metric and a gamut metric combined in CQS. All of this work points to the fact that one metric is simply not enough. Another thing is the graphic display of hue and saturation changes, going back almost 30 years now to color rendering vectors, which were also employed in the CQS system.

Then there's the use of a modern color space, CAM02-UCS, which was put forth in a proposal by the University of Leeds. And finally, the wavelength uniformity of the samples, presented by a collaboration of different universities in the CRI 2012 proposal.

So the last thing, and again I'll mention, one of the most important things, was the consensus process that was used to develop TM-30. The Color Metrics Task Group was composed of seven voting members, as well as one nonvoting member, and included representation from the manufacturing, research, government, and specification communities.

And that was done to really include input from all of these, because they all have a stake in color metrics and color rendering. And it was important that this was balanced between the different groups, so that no one group felt that they were having something imposed on them by the others.

So it was developed by a small group of people, but then subsequently it went to the IES Color Committee for review and balloting, then the IES Technical Review Council, and finally the IES Board of Directors. Each of those stages required at least a two-thirds majority approval, and any non-editorial revisions required a recirculation ballot. Although I can say we had no non-editorial revisions during that period.

We did have a couple of disapproval votes that we did our best to attempt to resolve, although over 90% of the votes, which I think is pretty good in today's day and age, were approvals. As I mentioned in the history timeline, there's sort of an argument about whether we need to update or not, and I can say at least some of the disapproval that we saw here was a feeling that we just don't need to do anything.

So another slide here about exactly what TM-30 is, what measures are contained within it, and what it is not. We have these four different types of documents or ideas in the lighting industry, and in other industries as well. First, we have metrics and measures; parts of TM-30 are metrics and measures. CRI (Ra) and R9 are metrics, as are CCT and Duv.

In contrast with that, we have criteria. So, the CRI must be greater than 80, or the CCT must be between 2,700 K and 5,000 K. We also have design guidance, for example IES DG-1, which is actually in the process of being updated. A quick plug for that from the IES Color Committee; that should be out fairly shortly.

And finally, we have standards. So that's American National Standards Institute documents, ANSI, or ISO documents. And IES and CIE documents can also be part of those standards. So TM-30, as it currently stands, is a method that includes several related measures. It is not a required standard, so it's not part of ANSI yet, not part of ISO. And it does not include any design guidance or criteria. Those are all things that are going to have to be developed over time as the industry as a whole becomes more familiar with the document and begins to use it.

So just to look at where we are in that process, then. We've completed development and issuance; that happened officially in August. And now we're beginning the use and evaluation stage, where I'd encourage all of you to really learn about TM-30, become familiar with it, begin to use it, and evaluate it. This isn't something that's set in stone, so we can revise it to a point where, eventually, hopefully, we reach industry consensus before it becomes written into a standard. And eventually, someday, comes obsolescence, which is where I would argue CRI is now.

So with that, I'm going to turn it over to Kevin, and he's going to step you through what the TM-30 method is, exactly what the indices mean, and how to understand them.

Thanks Mike, and thank you all for being part of today's webinar. I'm going to pick up with an overview of the TM-30-15 method. The IES method has many different components, but they're unified with a single underlying, self-consistent calculation engine. If you open the TM-30 document, it may at first seem intimidating, because the underlying color science requires a healthy dose of math, including matrices, integrals, logarithmic transforms, and the like.

However, if you prefer, you can treat the computations as a magic box. Where the input is a light source spectral power distribution, and the output is a series of indices and graphics. All of which are relatively easy to interpret and understand. As Mike mentioned earlier, if you're interested in the math, you can tune into next week's webinar. Today I'm going to focus on interpreting the outputs.

The three most important outputs are the fidelity index, Rf, gamut index, Rg, and one of the color graphics. The system also includes secondary indices, some of which are listed here. I'll focus on those top level indices and get to them in a little more detail now.

The fidelity index listed here on the left, Rf, has a scale from 0 to 100, with higher scores meaning the source will render colors more similarly to a reference. Note that higher numbers are not necessarily better, and the highest possible fidelity score may not always be the most appropriate source for a given application. IES-Rf was designed to be a more accurate version of the CIE General Color Rendering Index, Ra.

When the fidelity index is greater than 60 – I'm showing my mouse pointer – the gamut index will be in a range of about 60 to 140. Reference illuminants have a gamut index of 100, so this value indicates whether a test source has a gamut larger or smaller than the reference. Finally, the color vector graphic provides a visual description of hue and saturation changes. It is a non-numerical complement to both Rf and Rg.

This particular color vector graphic is for a high pressure sodium lamp, and I'll show you some more details about that lamp momentarily. Let's look at some of these components in a bit more detail. Shown here is a two-dimensional plane from the J' a' b' color space. The black circles represent the 99 color evaluation samples when illuminated by a reference source.

The red diamonds represent the same 99 color evaluation samples when illuminated by a test source. If the diamonds perfectly overlapped the black circles, then the fidelity index would be 100. In the case shown, because there's not perfect overlap, the fidelity index will be less than 100. You can refer to TM-30 for the mathematical details, or simply use the Excel spreadsheet that is included with TM-30 to run the calculations for you.

I'll demonstrate that spreadsheet a little later in this presentation. On this image, the a' b' plane is divided into 16 wedges or bins. Each bin contains numerous color evaluation samples. To determine the gamut index, the a' and b' values are averaged for the color evaluation samples within each bin, and the result is shown at the right.

The area within the black polygon represents the gamut for the reference illuminant, and the area within the red polygon represents the gamut for the test illuminant. The ratio of these two areas is multiplied by 100 to determine the gamut index Rg. And as indicated on this slide, an Rg value greater than 100 indicates an average increase in saturation, whereas a value less than 100 indicates an average decrease in saturation.
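To make that ratio concrete, here is a minimal sketch of the Rg calculation as just described, assuming you already have the 16 bin-averaged (a', b') coordinates for the test and reference sources. The full sample-set and binning math is defined in TM-30 itself; the function names here are only illustrative.

```python
def polygon_area(points):
    """Area of a closed polygon via the shoelace formula.
    points: list of (x, y) vertices in order around the polygon."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def gamut_index(test_bin_means, ref_bin_means):
    """Rg: area of the test gamut polygon divided by the area of the
    reference gamut polygon, multiplied by 100."""
    return 100.0 * polygon_area(test_bin_means) / polygon_area(ref_bin_means)
```

With matching inputs, this reproduces the idea above: Rg above 100 means the test polygon is larger (more saturation on average), and Rg below 100 means it is smaller.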

Note the use of the word average. As the plot on the right shows, some colors may actually increase in saturation while others decrease, which is why it's also prudent to refer to one of the graphics, which we'll come to in just a couple of slides.

First, let's revisit the example that Mike presented earlier. Here we have the original image on the left, desaturated in the middle, and red-enhanced on the right. If we only use CRI, we'll be unable to capture what is occurring in the desaturated and red-enhanced images. The same will be true if we only use the IES fidelity index Rf.

But when Rg is also used, we now have useful supplementary information. For the middle image, we see an Rg value of 90, which is less than 100 and indicates that, on average, colors will be muted with this light source. For the right image, we see an Rg value of 110, which means that, on average, there'll be an increase in saturation.

Note, however, that because Rg and Rf values are based on averages, they do not indicate which colors will be distorted. Which leads us to the intrinsic limitation of average values. The SPD on the left is for a typical blue-pumped LED, with a CIE CRI of 82. And the SPD on the right is for a compact fluorescent lamp, also with a CIE CRI of 82.

Examining the eight test sample colors – 1, 2, 3, 4, 5, 6, 7, 8, which I'm indicating with my mouse pointer – on the left versus the eight on the right shows that the two sources are actually rendering those eight samples differently, even though they both average out to 82. The point of this is that average values can hide important information. And an equally important point is that Rf and Rg from the IES method are not immune.

This is why it's also important to use either the color vector graphic, shown on the left, or the color distortion graphic, shown in the middle. Let me explain, and try to follow my mouse pointer on this. On these graphics, the gamut of the reference illuminant is normalized to a circle. On the color vector graphic, that circle is a black circle, on the color distortion vector graphic, that circle is a white circle.

I'll focus on the color distortion graphic here. The black area within the white circle indicates areas of desaturation, and where the colors extend outside of the circle, that indicates areas of greater saturation. So for example, this particular light source, which is a blue-pump plus phosphor LED, is desaturating some of the reds, providing a little bit of additional saturation to the greens and yellows, or at least the green-yellows, providing some desaturation to some of the greens and green-blues, and some additional saturation to the blues and blue-purples.

So this is non-numerical information that's not captured in the Rf value of 81 or the Rg value of 101. But if we look at all of this as a collection of information, it provides more than the sum of its parts. On this slide and the two that come after it, we'll look at how Rf, Rg, and the color vector graphic relate to each other. All the sources we're going to be looking at have a CCT of 3,500 K and are on the blackbody locus.

On this slide, we have one point plotted for a source with Rf and Rg both equal to 100. And this is a fixed point: if Rf is equal to 100, then Rg also has to be equal to 100, and vice versa. This represents a reference illuminant, or another source that can achieve the same rendition of the 99 color evaluation samples. In this case, the color vector graphic is simply a circle where the test and reference illuminants overlap. Here's the spectral power distribution for this particular source, which renders things very similarly to a reference illuminant.

Let's make this a little more interesting. Let me show you the SPDs. These three SPDs, spectral power distributions, have the same values for CCT, Rf, and Rg. Probably just by looking at the SPDs, you can infer that they would render objects differently, even though they all have a value of 65 for Rf and a value of 115 for Rg.

Here now, we're looking at the color vector graphic for those three spectral power distributions. The one on the upper left, we can see that this very significantly increases the saturation of red object colors. Whereas the one on the lower left very significantly increases the saturation of the greens and green-yellows, while slightly desaturating the reds. And this one in the upper right is a different one entirely.

So these three spectral power distributions, again showing the spectral power distributions there, even though they have the same scores for Rf and Rg, we really need to also refer to the color vector graphic to fully understand how color distortion is going to occur for these three sources.

This slide is similar to the previous, but now showing three sources with reduced gamut. Note the six SPDs on this and the previous slide all have the same fidelity. And this should, I hope, illustrate the limitation of only considering fidelity without also considering gamut and gamut shape.

I should also note that these are not commercially available sources. They were selected to illustrate the range of what is possible. And that's really part of the hope that many of us have for TM-30. These tools are a new way of communicating the color rendering performance of light sources. In addition to their use by specifiers to evaluate sources, the same tools can be used by a source manufacturer to engineer new sources with purposely designed spectra.

In the past, there's been, I think, less of an incentive to do so, because there was no clear way to communicate product performance. The new measures and graphics in TM-30 have the potential to encourage new and innovative products, and perhaps new and innovative marketing of those products.

The next five slides are snapshots of the performance of typical sources that most of you will be familiar with and recognize. The first is a halogen MR16. The source number up here, source number 80, is from the Excel spreadsheet that I will show you in a minute. The input is the light source spectral power distribution. So again, looking at input and output, the red line here represents the light source spectral power distribution. The black line following my mouse pointer represents the reference source. And the images on the left represent output from the spreadsheet or from the TM-30 method.

This image on the upper left is a bar chart that has 99 lines, each one representing one of the color evaluation samples. The numbers are not shown, but the scale runs from 0 at the bottom to 100 at the top. In this case, because this is a halogen MR16 that has a fidelity index close to 100, all of these values are also close to 100, so there's not much variation.

Let's look at a source that has a little more variation. This is showing summary information for a neodymium incandescent lamp. This is a source that is commonly preferred, based on numerous past studies. And here we show how the IES system is more capable of predicting this preference than CRI, if CRI is used alone. We see that the Rf score is actually quite good, a nine-point increase from CIE Ra. And the gamut index is greater than 100, indicating an average increase in saturation.

And the color vector graphic shows that reds and greens will undergo increased saturation without too much hue shifting, which, based on past psychophysical research, would indicate that this source would be preferred. This is showing a high pressure sodium lamp; I'll let you study the details of this particular image on your own later. This next slide is showing a tri-phosphor linear fluorescent lamp. You can see that the IES fidelity index is six points lower, 80, versus the CIE color rendering index of 86. This is because tri-phosphor fluorescent lamps have, in the past, been optimized around the eight test sample colors that are part of the CIE method. Now that we have 99 color evaluation samples, selective optimization like that becomes much more difficult.

This is summary information for a ceramic metal halide lamp, and again, I'll leave that there for you to study on your own. I want to come back to the example that Mike presented before, and pick up with the color vector icon. Coming back to this, if we add the color vector graphic on top of it, we have additional information to truly understand what's happening with the color rendition performance of these light sources.

So looking at the numbers, we see that the fidelity index for the red-enhanced source is 78, and the gamut index is 110. So this is indicating that we have an increase in saturation; however, we don't know where that increase in saturation is occurring. And likewise, for the desaturated source, we see that it has a gamut of 90. So we know there's desaturation, but we don't know where that's occurring.

So if we look at the color vector icons, we can see that the desaturation is actually occurring primarily in the red for this source, whereas the enhancement is occurring primarily in the red, and to a lesser degree in the green, for this particular source. There are also some small distortions occurring in the original source, because that one does have a fidelity index less than 100; the fidelity index is 93.

But the gamut is still 100, which means all of the desaturation is counterbalanced by an increase of saturation somewhere else. So again, my point here is that we can use these three components, the fidelity index, the gamut index, and the color vector or distortion graphic, together in order to have a good understanding of what's happening with light source color rendition for a particular spectral power distribution.

Now, we hope that people will consider the fidelity index, the gamut index, and the color distortion graphics in tandem, as I just talked about. However, some people might be primarily interested in something that's an improved version of the CIE color rendering index, and that's how the IES fidelity index has been conceived. And so here, I want to plot some common light sources, and show how those relate to one another.

If the source plots right on the 45 degree line, shown in red, then the value for both IES Rf and CIE Ra will be the same. If a source plots above the line, then it receives a higher score for IES Rf than it does for CIE Ra. The two points shown right here in red where my mouse pointer is, those are for a neodymium incandescent lamp, which scores higher on the IES Rf method than it does for the CIE Ra method.

I just added narrow band and broadband fluorescent lamps to the plot. In darker green, the narrow band fluorescent lamps shown always receive a lower score for IES Rf than they do for CIE Ra. This is because their spectral peaks are optimized for the eight test sample colors in the CIE system. The 99 color evaluation samples in the IES method represent a tougher test.

In other words, it's necessary to have good rendering across the entire visible spectrum to achieve a high IES Rf value, rather than being able to focus on key wavelength regions, as is possible with the CIE Ra method. Here are some HID lamps, added in purple. On this slide, hybrid and color-mixed LEDs have been added. Here we can see that some score higher with the IES method, and some score lower.

And finally, this scatter is for phosphor-converted LEDs, most of which score lower with the IES method. Again, the IES method is a tougher test because it requires good rendering across all 99 color evaluation samples. Also added here are lines showing the spread.

So if we take the CIE Ra value of 80, so that line that my mouse is following right now, we can see that the IES Rf can be as low as about 72, or as high as about 87, for the particular sources that are plotted here. These do not represent absolute limits. So we have approximately a plus or minus eight point spread, which means that IES Rf is not always resulting in a lower score. It may sometimes result in a lower score, and may sometimes result in a higher score.

Here, I think we're going to pause, and we're going to take some questions before moving on.

OK so I'm getting questions rapidly now. We'll do our best to answer some of them now. Some of them will be really well answered in the presentation, the webinar next week. A couple sort of logistical ones. "I purchased TM-30 from the IES store. How can I get the Excel calculators?" Well I can announce that, as of this morning, those are available at the link you received when you purchased the TM.

Another one I'm just going to touch on briefly: "How were the 99 colors selected?" This is a very involved question, actually, and it's probably a large focus of the webinar next week, so I invite you to tune in for that. Really quickly, they're down-selected from a huge library of 105,000 real objects. We made sure that in selecting those 99, they covered that entire space evenly, and also didn't favor any wavelengths.

So as Kevin was showing with CRI, certain wavelengths are essentially privileged in that calculation. So when you tune the peaks of a fluorescent lamp, for example, you're trying to optimize the score, but you might not actually realize the effect visually. That's much more difficult to do with those 99 color samples.

I'll throw this one to Kevin, another question. "What do you mean by saturation?"

Another word for saturation is chroma. So essentially, what we're doing is increasing the vividness of the color appearance of objects. For example, take something like a red apple. That red apple would appear a richer, more vibrant, more vivid red under a source that enhances saturation, or would appear a duller, more muted, more gray red under a source that decreases saturation.

All right, thank you. I think we had probably the most questions on this topic, actually. Maybe it was something we should have actually included upfront. But there are many questions about what exactly the reference is. Some of them: what is the perfect fidelity baseline based on, are there multiple reference sources depending on the CCT, and what are the reference light sources used at each color temperature? Kevin, do you want to explain that?

Sure. So we've primarily used the same system that was used in the CIE method, with a small difference. Below a color temperature of 4,500 K, we use Planckian radiation as the reference. Above a color temperature of 5,000 K, we use a model of daylight as the reference. And between 4,500 and 5,000 K, the reference is a blend of Planckian radiation and daylight.

The reason that there's that blend between 4,500 and 5,000 K is so there's not a discontinuity, in other words a jump, between what happens at 4,999 K and 5,000 K, which is the case with the CIE method. So we've tried to avoid that. We think that is maybe important for some color-tunable products, where the color tuning will pass through 5,000 K. By having this consistent transition all the way from low color temperatures up to high color temperatures, we avoid that discontinuity.
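For anyone who wants to see that selection rule in code form, here is a minimal sketch of how one might pick the reference composition as a function of CCT. It assumes a simple linear blend across the 4,500 K to 5,000 K transition, which is how the description above reads; the exact Planckian and daylight SPD models are defined in TM-30, and the function name is only illustrative.

```python
def planckian_fraction(cct_kelvin):
    """Fraction of Planckian radiation in the blended reference illuminant
    for a given CCT (illustrative sketch of the rule described above)."""
    if cct_kelvin <= 4500.0:
        return 1.0  # pure Planckian (blackbody) radiator
    if cct_kelvin >= 5000.0:
        return 0.0  # pure daylight model
    # Linear blend between 4,500 K and 5,000 K avoids a jump at 5,000 K.
    return (5000.0 - cct_kelvin) / 500.0

# Example: a 4,750 K source would use a 50/50 blend of the Planckian and daylight SPDs.
print(planckian_fraction(4750.0))  # 0.5
```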

Again, one point that was made, but I think is worth reiterating here, is that the reference is just that. It's a reference. It doesn't necessarily represent an ideal light source, and there are many instances where it might make sense to lower the fidelity and change the gamut, probably increase the gamut. And that might lead to a light source that's more preferred for a particular application.

Just to give one example of that: if we were lighting fruit and vegetables in a grocery store, rather than having a fidelity index of 100 and a gamut index of 100, it might be more preferred to have a fidelity index that goes down to a number like 80, and a gamut index that goes up to, say, a number like 110 or 115, so that those fruits and vegetables actually appear more vivid, more appealing, more flattered than they would under a reference illuminant.

Great, thank you Kevin. Hopefully these are satisfying answers. It's a little hard to do these questions and answers in this format, but we'll do our best, and if you have any additional questions, we can do our best to answer them at a later date as well. Another question that's coming in: "What would take the place of R values?" I assume that means things like R9 or R1 through R8.

And another related question: "Are the 16 bins for Rg numbered for discussion relative to one another?" I'll go ahead and answer this one, I guess. In his initial slide showing the core calculation engine and the outputs, Kevin showed at the bottom these more detailed values. So yes, there are values to replace the R values. For any one of those bins, the 16 bins that we're using to calculate the gamut, we can also calculate a fidelity value or a gamut value – a change-in-chroma value – for that specific bin. So what we have, essentially, would be analogous to R1 through R16.

So for example, and this is numbered this way intentionally, hue bin one is actually the red bin, sort of the slightly red-orange. Hue bin 16 is slightly red-purple, and the bins go counterclockwise around the circle. So you can specify, for example, if you're particularly concerned about reds, that you want to know the R1-type value for the fidelity of reds, or the chroma increase or decrease value for that hue bin one.
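To illustrate the structure of those per-bin values, here is a rough sketch of how the 99 samples might be grouped into 16 hue-angle bins and averaged to give local fidelity values analogous to R1 through R16. The exact TM-30 formulas are in the document itself; the function names and the per-sample scores passed in are only placeholders.

```python
import math

def hue_bin(a_prime, b_prime):
    """Assign a sample (under the reference illuminant) to one of 16 hue-angle
    bins, numbered 1 to 16 counterclockwise, 22.5 degrees per bin."""
    angle = math.degrees(math.atan2(b_prime, a_prime)) % 360.0
    return int(angle // 22.5) + 1

def local_fidelity(sample_scores, sample_bins):
    """Average the per-sample fidelity scores within each hue bin
    (an R1..R16 analogue). sample_scores: 99 sample-level scores;
    sample_bins: the matching bin number for each sample."""
    totals, counts = {}, {}
    for score, b in zip(sample_scores, sample_bins):
        totals[b] = totals.get(b, 0.0) + score
        counts[b] = counts.get(b, 0) + 1
    return {b: totals[b] / counts[b] for b in sorted(totals)}
```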

I'm trying to sort through questions live as I'm talking here.

Mike, as you're sorting through those questions – actually, when we're done with the questions, I'll bring up the Excel spreadsheet tool. That's the next part of our presentation. And I will very briefly show some of that hue bin. For the people that get the TM and get the spreadsheet software, they'll be able to play with this in a lot more detail on their own time.

OK, so the last question we'll do in this batch, again, we'll come back and do a second batch of questions at the very end. And I'll keep sorting through these as they're coming in and see if there's anything from this set that we need to answer. "Although I do understand this is a useful method for professionals, how do you think we can convey these metrics to consumers?" Kevin?

Well, it's a really good question. The color distortion graphic we hope will be something that can be easily interpreted by consumers. But truthfully, I think in order to go from the professional audience to the consumer audience, there's probably another bit of work that needs to be done. Maybe focus groups and maybe the development of different ways of communicating this information.

I mean, truthfully, the information here, we hope that it will be useful to specifiers. We also hope it'll be useful to the people that are designing light source spectra and providing us with light sources, so that they are able to create some novel products and promote those novel products, primarily to the professional community. And I think we still have work to do to determine how this is going to be communicated to, say, a general consumer that's buying their light sources in a grocery store or a home center.

OK, so there's still some great questions that we're going to get to, but I'd like to now continue with the presentation so we have time to get through it all. And Kevin's going to continue with a demonstration of the Excel tool and more discussion of how all this works together.

OK. So on my screen I just opened up Microsoft Excel, and as Mike indicated, if you purchase TM-30, you'll get a link to this particular spreadsheet. And this is part of TM-30. Just going through the tabs on the bottom, I just clicked on Version Notes, which provides some basic information. There's an Instruction tab, which I would encourage you to read through in detail.

We have a tab with descriptions, and then there are three tabs in green: a Main tab, an IES Graphical tab, and an IES Results tab. When you're using it, I think most users will primarily be using the Main tab and the IES Graphical tab, and perhaps some people will use the IES Results tab. But primarily the Main and IES Graphical tabs.

When you open this, I should say, macros need to be enabled. My macros are already enabled, so that window did not pop up on my screen when I opened this. What we have here is a drop-down menu, and the spreadsheet includes a library of a little over 300 different spectral power distributions. It includes CIE standard illuminants, various fluorescent, various HID, various halogen and incandescent, a whole bunch of hybrid blue-pump LEDs, RGB and RGBA LEDs, phosphor LEDs with blue and violet pumps, theoretical illuminants, and so on.

So it's not an exhaustive library, but it's a fairly large library of different light sources. It also has the capability that, if you want to input your own spectral power distributions, you can easily do that, and then calculate all these particular numbers and indices and graphics for whatever spectral power distribution is of interest to you.

I'm just going to pick one light source here. I'm going to pick an RGBA light source; I could pick anything. It's calculating right now, you can see it's updating, doing all that in the background. It's going a little slow because I have so many things open on my computer right now. And here we see that update.

So in this image in the center, the black line is the reference illuminant, and the red line is the red, green, blue, and amber LED components that create this particular spectrum. We see the high-level numbers: there's the fidelity index of 86 and the gamut index of 109, in this blue box where the mouse pointer is at the top of the screen. The CCT is 2960 K, and we have Duv; this is a negative number, which means it's slightly below the blackbody locus. We have chromaticity coordinates, and we also compute the CIE Ra value, all included on this front sheet.

The color vector graphic and the color distortion graphic are computed automatically for you. The scatterplot over here to the right plots – this may be hard to see on your screen, I'm not sure – but there's a little red dot within this matrix, where all these other gray points are the 318 in the library. And the red point is the one spectral power distribution that we just plotted now.

You can mouse over and there are notes. So for example, if I mouse over Duv, it will indicate to refer to ANSI C78.377, and so on for the various documents that these are relevant to. Let me go to the IES Graphical tab. The IES Graphical tab contains additional summary information. For example, we have an R9 value that's listed, and a luminous efficacy of radiation value. We also have the skin value – the fidelity for skin – which in this particular example is 90, where the reference is what it would be for the reference illuminant.

This is additional colorimetric information. Now, since it says graphical, this tab is primarily graphical results. If we scroll down the page, we see a chromaticity plot on a 1931 chromaticity diagram. Scrolling down further, we can see the color evaluation samples and how they plot for both the reference source and the test source, if we want to look at this with greater granularity.

If I scroll down further, we can see that each of the 99 color evaluation samples are plotted. We also provide an iconographic representation of how color distortion will occur for these 99 color evaluation samples. And I really don't know how well this might show up on your particular computer screen.

But for example, this particular light source causes some distortion in the reds and the greens. So if we look at color evaluation sample 53, we can see that under the reference, it would look like where my mouse is moving now, and under the test, it would look like where my mouse is moving now. You might be able to see some of those differences in the red as well. If we picked a highly distorted light source, we could see these differences in a much more pronounced way.

Scrolling down even further, these are the 16 hue bins. These basically go around the hue circle, starting with bin one, then two, and all the way through 16. At the top, we have the number of color evaluation samples that are in each hue bin, and we have a score for each hue bin. So, as Mike was alluding to, these are like the R values, like R9. For example, there is the value for R1.

Now, this is an average of the 11 color evaluation samples that occur in bin number one, and we have, in this case, an index of 85 out of 100 for fidelity. And you can go around the hue bins like that.

If we want to drill down further, we can also look at specifically what is happening with chroma or saturation. Up above, we only know that 85 means it's different from the reference, but it doesn't tell us whether that's an increase in saturation or a decrease in saturation. That's what this tells us down here.

So on average, we're getting a 6% increase in saturation in hue bin number one, or a 5% increase in saturation in hue bin seven, for this particular light source. So in general, this particular light source is creating, on average, an increase in saturation, except in hue bin four, where we have a decrease in saturation. And then finally, if you want to really look at this in a very granular way, you can look at the plot for all 99 color evaluation samples, and you can see what's happening with each of the 99.
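As a rough sketch of what those per-bin percentages represent, here is one way to compute a chroma shift for a single hue bin: the percent change in average chroma (distance from the neutral axis in the a'b' plane) between the test and reference renderings of the samples in that bin. The exact TM-30 normalization may differ in detail, and the names here are illustrative only.

```python
import math

def chroma(a_prime, b_prime):
    """Chroma of a sample: its distance from the neutral axis in the a'b' plane."""
    return math.hypot(a_prime, b_prime)

def percent_chroma_shift(test_samples, ref_samples):
    """Percent change in average chroma for one hue bin.
    test_samples, ref_samples: lists of (a', b') pairs for the same samples
    under the test and reference illuminants. A result of +6.0 reads as a 6%
    average increase in saturation for that bin; negative means desaturation."""
    test_mean = sum(chroma(a, b) for a, b in test_samples) / len(test_samples)
    ref_mean = sum(chroma(a, b) for a, b in ref_samples) / len(ref_samples)
    return 100.0 * (test_mean - ref_mean) / ref_mean
```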

I think I'll stop with that as a brief overview of the spreadsheet, and turn things back to Mike.

One second here while I get fired up. OK, so this last section, before we go back to questions again, is about TM-30-15 adoption considerations. And actually, in sorting through some of the questions that came in earlier, there are a lot of questions related to this too. So I'll go ahead and address those now, and if there's anything further, we can address it at the end.

So this slide shows four different boxes here, really summarizing, to me, the adoption stakeholders. There are specifiers, manufacturers, researchers, and codes and programs. I think the key thing here is that no one of these groups can necessarily take the lead in adopting or pushing TM-30 to an industry consensus. It really takes push and pull between specifiers and manufacturers, with input to that process from the researchers and the codes and programs.

So for example, specifiers need to evaluate sources. They need to rethink a little bit color rendering, going beyond just color fidelity. But to do that they need data from manufacturers. We need manufacturers to engineer new sources that aren't just trying to optimize fidelity, that can be marketed and really differentiated from other products.

Then researchers, as we mentioned early on, TM-30 is a method and measures right now. It doesn't include design criteria. So developing that type of criteria based on human factors research is going to be important. And implementing that criteria in the codes and programs that already do implement criteria is a final step.

So we're going to step through each of these groups now in a little bit more detail. First, manufacturers. This idea of being able to go beyond fidelity isn't exactly new. Kevin already showed the example of a neodymium incandescent lamp, something that's been around for, I don't know, a couple of decades now, I think.

But really, with using CRI alone, there was no way to effectively quantify that. It got a much lower CRI score, so anyone who wasn't actually familiar with it visually would probably assume it was worse. So it was probably a little bit difficult to market it in that way. Now we have the tools available to us to help manufacturers differentiate their products.

So I keep referring back to that argument about whether it's worth the effort to convert to the new science. And I think manufacturers have a lot of burden in that conversion, because they have catalogs of products and things that are already designed. But I think manufacturers also have the most to gain, actually, from this new method. Specifiers have a lot to gain too, but manufacturers now have a whole new playing field with which to differentiate and market their products. I think it really is advantageous for them.

So some of the questions that manufacturers are going to have to ask themselves now relate to spectral engineering with LEDs. It's a new age where we can really customize spectral power distributions. Here, for example, are two SPDs, both the same CCT, both the same light output. They're going to have very different effects on color rendering.

Where before they might have been characterized the same, I can see on the icon here that one of them is essentially saturating the greens and desaturating the reds. It's going to lead to a higher luminous efficacy of radiation, so a higher potential efficiency for that spectral power distribution. But if we look at the top one, it might have a lower LER, but it's increasing red saturation. And if anyone is familiar with R9, we're very sensitive to reds, so increasing that saturation might actually make it the preferred source.

So these are the kinds of trade-offs that manufacturers are going to have to discuss and work through, and they can really use these new tools to make those decisions more appropriately. I think the biggest thing for manufacturers in the short term is simply providing data. Calculating the measures for any source that already exists should be very simple. Nothing has to be measured again, because the spectral power distribution was already measured to calculate CRI when the lamp was created.

So simply using the calculator that the IES is providing, or coding this into their own software, which is certainly possible and I believe some people are working on already, should be a relatively short-order solution to providing specifiers and others with the necessary data.

So moving on to specifiers. I want to say TM-30 is an approved method. Use it and provide feedback. See how it works. Compare it to your existing expectations for sources. Does it align? We've gone through this; we've been working on it for about two years. It really seems to us, the people who are intimately involved with it, that it matches our expectations better, that it does a better job of characterizing average color fidelity and color gamut, and really aligns with how we think of a source.

Now the question is also going to become: how do I choose a better light source? There were questions about this too, about color preference and such. Before, we had CRI alone. It was a fidelity index with a maximum value of 100, so it was kind of easy to look at that index and say I want the higher value, even though, as we've demonstrated today, the source with the higher fidelity value might not always be the preferred light source.

But there was a simple numerical way, if you had to check a box or meet a criterion, so it was easy to do that. Now, you could do the same thing if you only want to consider Rf from the TM-30 method. However, I would argue that if you're only using Rf from this method, you're really selling it short.

So then the question becomes: what values for Rg do I want, and what shapes do I want for the icon? And the answer really varies depending on the application. For example, if I'm in a retail store, my preferred light source might be something that increases saturation to make things pop. I might not want to go too far, however, because if I'm really increasing saturation, when the customer gets the product home and it looks completely different, that might lead to more returns or dissatisfaction with the product.

If I have an office situation, maybe saturation isn't much of a concern there. Maybe I just need a reasonable fidelity value so that things generally look normal. Or I might be in something like a paint store or a textile factory, where color matching is critical. In that case, I would probably want to optimize fidelity and go for as high a value as possible, so that there's constancy in the colors as I move to daylight or an incandescent reference source.

I probably wouldn't want to increase saturation in that case, since that is still distorting the colors. Even though I might like it better, it will probably be different when I get it home. The same types of considerations apply in all these applications, grocery stores and so on. It really becomes a question of: what has my experience in the past been? What new sources and tools will we have in the future to affect this? Can research step in and provide some guidance here?

But I think the key takeaway is that trying to specify a single number for preference – I personally think, and I know we talked about this on the committee a lot – is really a misguided approach, because preference is really an application-dependent criterion.

So again, continuing with what specifiers might go through. Another example to work through: we have an original baseline image here with a little bit of a graphic shown. Previously I had this CRI 80 lamp that was causing a positive hue shift. Now I can see I still have a lower fidelity score, my gamut's 100, so on average I'm not really increasing or decreasing saturation. But if I look at the icon, I can see a shift in hue.

I could do the same thing in an opposite direction for hue shift. I can increase the saturation, and I can see that with Rf and Rg and the icon. Or I can decrease the saturation. So again specifiers will have to work from experience. The color rendering of any lamp you have, when visually evaluated, doesn't change just because we have a new metric. So something that you really liked before, well maybe find out what those numbers are, use the calculator. Maybe that becomes your target or your specification for your design.

Moving to the research segment of the industry, something I'm directly involved in. This is actually an experimental setup where we just completed the initial experiment a few weeks ago, at PNNL here in Portland, Oregon. It's really preliminary data; honestly, we just got the data, and it just made it into this webinar. So this is very preliminary, but I'll share it with you, and you can look for more on it coming in the future.

We had 28 different spectral power distributions, all made from the same light source, all the same CCT, all the same illuminance. Three different examples of what you might see are shown there, exploring the full range of the two-dimensional Rf/Rg space. Number one there on the right is essentially perfect fidelity. Number two highly increases saturation, necessarily at a lower fidelity score. And number three decreases saturation; again, there's deviation from the reference, so when you do that you're going to lower the Rf score.

Also, in addition to that, within each of the points in the Rf/Rg space, we tried to create color icons that distorted the colors in different ways. We tried to maximize the red saturation at a given Rf/Rg level, and we tried to minimize the red saturation at that Rf/Rg level. So it's not the overall maximum increase or decrease in red saturation, but the maximum and minimum given those criteria for Rf and Rg.

So you can see these two very different sources have the same Rf and Rg, and perform very differently in terms of how they're affecting colors. Now this is the juicy part, where we get to the results. We have three different sorts of heat maps here, and they correspond to these points in the Rf/Rg space. This is the combined result from the minus-red and plus-red, the two SPDs at each point, and then separated out are the minus-red and the plus-red.

So, a few key takeaways here. Again, this goes back to that question of preference, and we asked many more questions than this; I'm just isolating this one for this presentation of the preliminary results. On a scale from one to eight, people were asked, do you like the way this makes the colors look, or do you not like the way this makes the colors look? You can see the green here are the most preferred sources in this specific application. It was sort of an application-agnostic condition, but it is specific to that situation. If we had asked them in a museum or a residence, the results might have been a little different.

So that's something we will continue to explore as we conduct more research. But you can see the most green here is focused around increased saturation, even at lower fidelity. In no case was perfect fidelity, or the closest match to the blackbody radiation reference in this case, because it was at 3,500 K, the most liked source.

We can also see a big difference between the minus-red and the plus-red plots. Any time the red saturation was increased, it was more favored, compared to the decreased red saturation, except when you get to these extreme levels of saturation, where perhaps these were too saturated. So going a step further here, we can look at these two areas right here that are circled, which are these two SPDs right here, and I can go forward to this pair of charts that shows commercially available light sources.

And if there's anything that shows why it's important to have these improved color rendition measures, I think it is this slide. If we look at where most of our sources are, they're actually in this red region that is not very well liked – most of the commercially available sources are in this red-desaturating area. So this gives us targets: maybe we should be focusing on developing more sources in the regions where red saturation is increased.

So, the last segment of market stakeholders: energy efficiency and incentive programs. There were again questions about this coming in during the first part of the presentation – how are Title 24, ENERGY STAR, and DLC going to adopt these new measures, if they adopt them at all? Let me lay out a few options for you, with some pros and cons of each. It's quite possible to keep using CRI. It is still a recognized CIE metric, and it is still referenced in the ANSI C78.377 standard. There's no disruption there, but it essentially means continuing to use a metric that we know is outdated and might not be very effective.

Option two: replace CRI with Rf only, without specifying limits for Rg. This still has the limitation of only specifying fidelity, although at least it's a more accurate measure of fidelity. It's relatively easy to implement, but what happens with Rg? In a sense, leaving Rg open could be a positive to start, because we don't really know yet what the limits for Rg should be. Rg relates to preference, so what is the goal of these energy efficiency programs? I would say it's to weed out the really bad products and prevent energy efficiency from completely overtaking color quality. I don't think the purpose, however, is to limit design flexibility.

So, not specifying limits for Rg might be an appropriate approach at this point. The third option is to replace the criterion for CRI with criteria for both Rf and Rg. Here again, I would caution that limiting Rg too much could preclude some very viable sources. And the fourth option is to include nothing on color rendition at all. In my opinion, this is not a very appropriate option, although it is on the table, because it would likely reduce color quality, given the inherent tradeoffs between energy efficiency and color rendition.

[INAUDIBLE] some of these changes that might happen and what their effects might be. Going back to the plots that Kevin showed earlier comparing CIE Ra and IES Rf – and there were a couple of questions on this too, so I'll try to address them now rather than return to it later – daylight models: daylight changes throughout the day, and we have mathematical models that give us representations of daylight at different color temperatures as it changes throughout the day.

Narrowband fluorescent is most of the fluorescent you'll see in recently designed offices, at least – spiky distributions using triphosphors. Broadband fluorescent: if you have old T12 lamps, they might be broadband fluorescent. Those use a different set of phosphors that were broad-emitting instead of very spiky. They're less energy efficient, but you can see that, at least according to this chart, their scores don't drop as much, because there was never a strong reason to optimize them – you're not moving energy spikes around. Hybrid LED is typically a blue-pump LED plus either some type of red emitter or some other modification to the spectrum. Mixed LED would be RGB, RGBA – mixes of individual colored primaries – and phosphor LED is your standard blue-pump or violet-pump LED.

So anyway, back to where I was after answering that question. Our previous criterion was a CIE Ra of 80, and all the sources in gray didn't meet that criterion. If we simply change to a criterion that Rf must be at least 80, we end up with all the sources shown in gray here not meeting that criterion. All the sources in red previously met the criterion but no longer would, the sources in gray don't meet either, and the sources shown in green now meet the criterion but wouldn't have previously.

In setting new criteria, there's always going to be some change in which sources qualify. But it becomes a question of: should every source that previously met the criterion still meet it? Or do we trust that the new method is a more accurate representation, and maybe those sources shouldn't have met the criterion in the first place?

We can look at this again with a different threshold. If we go from a CRI Ra of 80 to an Rf of 75, we change which sources fall into each of those categories. So this is a discussion that's going to have to play out over time. I don't see immediate implementation of this in energy standards, codes, and efficiency programs, but I don't think it will necessarily take that long either. It's more a discussion that needs to happen than new work that needs to be done.
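To make that bookkeeping concrete, here is a purely illustrative sketch of how a program requirement might classify products when moving from a CRI-based criterion to an Rf-based one. The thresholds and the Ra/Rf pairs below are made up for illustration and are not taken from the slide.

```python
# Illustrative only: classifying sources as a specification moves from
# "CRI Ra >= 80" to "Rf >= 75". All numbers here are hypothetical.
def classify(ra, rf, ra_min=80, rf_min=75):
    met_old, met_new = ra >= ra_min, rf >= rf_min
    if met_old and met_new:
        return "meets both the old and new criteria"
    if met_old:
        return "previously qualified, would no longer qualify"
    if met_new:
        return "newly qualified under the Rf criterion"
    return "meets neither criterion"

# Hypothetical (Ra, Rf) pairs, not measured data.
for ra, rf in [(82, 78), (81, 72), (77, 76), (70, 68)]:
    print(f"Ra={ra}, Rf={rf}: {classify(ra, rf)}")
```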

So this is our final slide. There aren't any general conclusions here, because there is a part-two webinar coming next week, which is the technical discussion. There were some questions about the reference sources – we're going to cover those in detail – along with the color sample selection process, why color space uniformity and wavelength uniformity matter, all the calculations and the math behind this, and the binning processes. Those will be covered in the technical discussion of TM-30 next week. There's also a link here to the IES website, where you can purchase TM-30 and get access to the calculator tools.

There's an upcoming editorial in LEUKOS describing, in words, some of those last slides I just presented – what happens next and what the adoption considerations are. There's an open-access journal article, "Development of the IES Method for Evaluating Color Rendition of Light Sources," in Optics Express, which has a lot more technical detail about the method and its development. And there's another LEUKOS article explaining why some of these technical advances really make a big difference in how we characterize color rendition.

As for upcoming live presentations, you'll see me, Kevin, or some of the other committee members at IALD, PLDC, IES, a DOE workshop, and the IES color research symposium, so we're going to be busy talking about this. Those are a good chance to provide feedback and have a discussion with us. It's sometimes hard, with written questions, to understand exactly what you need to know, but we'll do our best here and open this up for questions again. OK, give me just one second, and I'm going to go back to my--

Mike, while you're catching your breath and looking at the questions, why don't I answer a couple questions?

Sure.

OK, so one question here: is the TM-30 Excel calculator different from the CQS Excel calculator? The answer is yes. The IES TM-30 method is not the same as the NIST CQS system. However, Yoshi Ohno of NIST was a part of the color metrics task group that developed the TM-30 method, and certainly there are many components of the IES method that were inspired by the NIST CQS system. So they are different, but they do share some common goals and some common components, at least conceptually, if not mathematically.

OK, so, there are a number of questions about color preference. I think I covered that, but I want to reiterate that, at least in my opinion, a single number for color preference is not an appropriate way to specify it across all applications; what the best source is varies from application to application. Kevin, if you want to keep going a little bit: how does this method differ from CQS? There are a couple of questions related to that.

Sure. CQS has an index called Q sub-a, which is promoted as a quality index and is essentially a modified fidelity index. The IES method does not include a modified fidelity index. It includes a pure fidelity index, a gamut index, and the graphical representations, as well as subindices, like the skin fidelity index and so on.

There was a lot of active discussion around that topic of whether there should be some type of quality index, but in the end, the consensus of the committee was to keep a pure fidelity index, to basically continue the concept of color rendering index – in other words, to provide an updated and improved version of the CIE color rendering index, as essentially a direct replacement – and then supplement that with another index that would be related to gamut, which itself has relationships to preference and color discrimination.

One other difference – there are actually a lot of differences – is in the color evaluation samples. I think one of the primary technical advances of the IES method is the 99 color evaluation samples, which are uniform both in color space and in wavelength space. That will be discussed in much more detail in next week's webinar. The CQS system has 15 color evaluation samples, and based on the numerical simulations that we ran as part of TM-30 development, 15 color evaluation samples are not sufficient to adequately sample color space and to provide uniformity in wavelength space.

OK, a couple of questions here – and Kevin, I think these are coming in faster than I can even read them. So I'll pick out some questions that are interesting to me, and I'll let you pick out some that are interesting to you, and we can go back and forth that way.

Sounds good.

I'm seeing a couple of questions here on the skin values – at least three of them, I believe – and we didn't explicitly discuss this; we're a little time limited. For the skin fidelity index, we purposely included two physically measured skin reflectance functions in the 99-sample CES set, because over the years skin has become an important consideration in color rendering. The library of skin tones we started from included many, many skin tones – several thousand, I think – and if you actually plot them, they all plot very similarly in the a-prime, b-prime plane, in terms of their chromaticity.

So we see a lot of variation in our skin tones, but their reflectance functions are actually not all that different, which I think is really cool. But that brings us to the point: the two samples that were selected were the two that provided the best average representation of that entire library of skin values. I believe it's roughly a lighter one and a darker one, but again, they are the two that provided the highest correlation when you calculate a skin index from just those two and compare it with an index calculated from the entire library of skin reflectance functions. Kevin, got something?
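As a rough illustration of that selection idea – choosing the pair of samples whose two-sample index tracks the full-library index most closely across many test sources – here is a minimal sketch. The color-shift data is random stand-in data, and none of this is the actual TM-30 development code.

```python
# Hedged sketch: pick the pair of skin samples whose two-sample average
# color shift correlates best, across test sources, with the average over
# the whole library. All data here is random placeholder data.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_samples = 50, 100
delta_e = rng.random((n_sources, n_samples))   # placeholder color shifts per source/sample

library_index = delta_e.mean(axis=1)           # full-library skin index for each source

best_pair, best_r = None, -np.inf
for i, j in itertools.combinations(range(n_samples), 2):
    pair_index = delta_e[:, [i, j]].mean(axis=1)
    r = np.corrcoef(pair_index, library_index)[0, 1]
    if r > best_r:
        best_pair, best_r = (i, j), r

print(best_pair, round(best_r, 3))
```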

Sure. Actually, a lot – there are a lot of really great questions in here. What will we call the SPD file that we receive from manufacturers? That's a good question, and it remains to be seen how manufacturers will provide the spectral information. As we indicated, there's a library of about 300 spectral power distributions included in the spreadsheet. We hope that if you call your manufacturer or distributing agency, they will be able to provide spectral power distribution data for you. Certainly the manufacturers have that data; whether or not they're willing to share it remains to be seen. We don't know. The files could be provided simply in Excel format.

The spreadsheet will let you put in any spectrum – basically, cut and paste. You just need to indicate the start wavelength, the end wavelength, the wavelength interval, and the spectral data, and you can paste that right into the Excel calculator, which will then calculate the values for you. The tool will also read a .spdx file, which is based on IES TM-27-14, the IES standard format for the electronic transfer of spectral data. That format has not been very heavily used so far, but perhaps, now that there is software that reads it, it will become a format that manufacturers use more generally to communicate their spectral data. I'll give you one, Mike.
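As an aside, here is a minimal sketch of the kind of data handling being described – reconstructing the wavelength grid from a start wavelength, end wavelength, and interval, and resampling to a uniform grid before pasting values into a calculator. This is not the IES tool; the function name and the example values are purely illustrative.

```python
# Minimal sketch (not the IES calculator): rebuild the wavelength grid for a
# measured SPD and resample it to a uniform 5 nm grid by linear interpolation.
import numpy as np

def resample_spd(values, start_nm, end_nm, step_nm, target_step_nm=5):
    """Return (wavelengths, values) on a uniform target-step grid."""
    source_wl = np.arange(start_nm, end_nm + step_nm, step_nm)
    if len(source_wl) != len(values):
        raise ValueError("wavelength grid does not match the number of samples")
    target_wl = np.arange(start_nm, end_nm + target_step_nm, target_step_nm)
    return target_wl, np.interp(target_wl, source_wl, values)

# Example: a coarse 10 nm measurement (hypothetical values) resampled to 5 nm.
wl, spd = resample_spd([0.10, 0.42, 0.95, 0.70, 0.21], 500, 540, 10)
print(list(zip(wl, np.round(spd, 3))))
```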

OK, I'm trying to pick these out and read them; it's difficult. Is it possible to define Rf and Rg before starting a design? That's certainly possible. Just as you would with CRI, you can use these metrics to create specifications for a design. I'd also encourage looking at the color vector graphic, because, as another question pointed out – and I think we also pointed out – those average values can be somewhat misleading, since you don't know exactly what's happening to individual hues.

So, to me, they're a good first pass in determining: do I want increased saturation, decreased saturation, or neutral, and how close do I need to be to perfect fidelity? But looking at some of these higher-order indices is pretty important if color is really critical to your application. If it's just an office, maybe it's not as critical to look at those. That was a relatively quick answer, so I'll take another one here while Kevin finds the one he wants to answer. How quickly do you see commercially available spectroradiometers offering Rf, Rg, and the vector diagrams? Any commitments?

Also, there are a lot of questions about the availability of tools. I know there are people working on this now; we've been asked for code and other information for months, actually. People are working on programming these calculations into commercially available spectroradiometers, handheld meters, and other programs. We're providing one Excel tool, but we're not computer programmers, and I can promise you that a programmer will come along and write a much better program to do this. We're providing the tool really as a reference: we verified the original calculations, it matches the written standard exactly, and others can use it in the future as they develop their own code. I do think those tools will come along in relatively short order – a matter of months. I would expect to see some of them on the market by the end of the year. Kevin?

Following up on that, another question is: is there an optimizer tool in the Excel spreadsheet? Can the tool design a spectrum, or a power balance between given spectra, to optimize Rg with constraints on Rf? The short answer is no. The Excel spreadsheet being provided by the IES is a calculator, not an optimizer. That said, I already know of two optimizers that have been developed and are being used. For example, some of the slides in this presentation that showed Rf and Rg with the color vector graphics were based on an 11-channel optimization done within Excel, using Excel's Solver tool. So optimizers will have to be written by others.
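To give a sense of the general pattern being described – not the IES tool and not either of the optimizers just mentioned – here is a minimal sketch of blending channel SPDs with a solver to push a gamut-type score up while holding a fidelity-type score above a floor. The channel data and both scoring functions are placeholders; a real version would call an actual TM-30 implementation.

```python
# Hedged sketch of channel-weight optimization: maximize a gamut-like score
# subject to a fidelity-like floor. rf_placeholder/rg_placeholder are toy
# stand-ins, NOT real TM-30 calculations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
channels = rng.random((11, 81))   # 11 hypothetical channel SPDs, 380-780 nm at 5 nm

def rf_placeholder(w):            # toy "fidelity": penalizes spiky mixtures
    spd = w @ channels
    return 100.0 - 40.0 * np.std(spd) / (np.mean(spd) + 1e-9)

def rg_placeholder(w):            # toy "gamut": rewards spiky mixtures
    spd = w @ channels
    return 100.0 + 30.0 * np.std(spd) / (np.mean(spd) + 1e-9)

result = minimize(
    lambda w: -rg_placeholder(w),                                     # maximize gamut score
    x0=np.full(len(channels), 1.0 / len(channels)),
    bounds=[(0.0, 1.0)] * len(channels),
    constraints=[
        {"type": "ineq", "fun": lambda w: rf_placeholder(w) - 85.0},  # fidelity floor
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},             # fixed total power
    ],
)
print(np.round(result.x, 3), rf_placeholder(result.x), rg_placeholder(result.x))
```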

OK, I've got a couple of questions I think I can answer. There are a lot of questions about the experiment, and again, those were really preliminary results put in at the last minute; you'll see a lot more coming out on that in the next few months and in some of my other presentations. What was the profile of the participants in the study? We had 28 participants, ranging in age from 19 to 65, with close to a 50-50 split between male and female and an average age around 40. You can't target these things exactly, but that's a pretty close representation of the working population of this country.

So it was quite a nice sample, although it was a pilot study, so that's still a relatively small number of people. Other questions here mention expanding this to outdoor lighting or different applications; that's all work I expect will be going on. Kevin, do you have one? I've got another, if you're still looking.

Yeah, go ahead if you got another one.

What about large, negative Duv shifts with TM-30? How does Duv affect TM-30 scores? This is something that came up in the development process. There's a chromatic adaptation transform built in that essentially – I'm going to say – normalizes the chromaticity of the test and reference sources when they aren't identical to one another. And with modern color science, this is much, much improved over the chromatic adaptation transform that was employed in CRI.

So it does a pretty good job of accounting for those differences in chromaticity when it calculates the numbers. However, if you really want to get into the nitty-gritty: if you have a very large negative Duv or a very large positive Duv, your theoretical maximum fidelity score might be one, two, or three points lower than 100. In all practical senses, that's not really an important consideration, in my opinion. I don't think trying to maximize fidelity scores to 97, 98, 99 is where the industry needs to focus its attention. That relates to the difference with LRC Class A, which was mentioned in this question.

Class A is a combination of metrics and design criteria – it specifies criteria for both chromaticity and color rendering. It's perfectly possible to do the same thing with the TM-30 measures, which I think provide both an improved fidelity measure and an improved gamut measure that work together – as opposed to CRI and GAI, which have different reference illuminants and can't really be used together in the same way – and to create criteria for Rf and Rg, in combination with chromaticity, to create something akin to the Class A specification.

OK, let me answer a couple here. What is the rationale for making this scale nonlinear, locked to positive values? And a related question: is an arithmetic average or a root-mean-square average used in the calculation of R sub-f? I think both of these fall into the category of decisions that were less obvious and required judgment calls, as opposed to the ones that were clear cut. The reason for making the scale nonlinear – in other words, using a logarithmic transform so that values cannot go below zero – was simply that there tended to be misunderstanding, or so we believed, about negative values of CRI in the CIE system. The logarithmic transform does not affect values above about 30 on the fidelity index.

If you have a value below 30, the transform might actually have an effect on the fidelity index, but truthfully, if your fidelity index is below 30, you already know the source has very poor color fidelity, and the difference between a fidelity index of 26 and 24 is probably not material. So the short answer is, we locked it to positive values using that logarithmic transform simply so the scale runs from zero to 100 and doesn't have those confusing negative numbers. Regarding arithmetic versus root-mean-square averaging: because we have 99 color evaluation samples, we use the arithmetic average rather than the root mean square.
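For readers who want to see the shape of that transform, here is a hedged sketch: an arithmetic mean of per-sample color differences, followed by a logarithmic "soft floor." The general form follows the description above, but the scaling constant below is only a placeholder; the actual value is defined in the published TM-30 document.

```python
# Hedged sketch of the scaling described above: arithmetic mean of the
# per-sample color shifts, then a logarithmic transform that keeps the
# result positive while leaving values above ~30 essentially unchanged.
# CF is a placeholder; the real scaling factor is defined in TM-30.
import numpy as np

CF = 7.5  # placeholder scaling factor, not the published value

def fidelity_index(delta_e_samples):
    rf_prime = 100.0 - CF * np.mean(delta_e_samples)     # arithmetic average over the 99 CES
    return 10.0 * np.log(np.exp(rf_prime / 10.0) + 1.0)  # "soft floor" at zero

# Small shifts stay near 100; very large shifts approach 0 instead of going negative.
print(round(fidelity_index([1.0] * 99), 1), round(fidelity_index([20.0] * 99), 1))
```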

OK, I have a good question here: how much of the benefit would be obtained by only adding a gamut metric to CRI? If you look at some of the slides where Kevin was comparing Rf and Ra, at a CRI value of exactly 80 the Rf scores range from approximately 71 or 72 up to 87 – I think it's about a 16-point spread in the data. If you look just at the correlation between Rf and Ra, they're pretty well correlated in terms of an r-squared value, but if you look at the spread of the data, you can see that sources are being mischaracterized by CRI by 5 or 10 points – something very noticeable – and that really affects the way sources are being optimized and designed; we're not necessarily designing them as well as they could be designed.

So yes, just augmenting CRI with a gamut index would provide value – we'd get rid of the limitation of only having a fidelity value. But there would still be no system in place that works cohesively, with the same reference and the same calculation engine, pairing those two metrics. With TM-30 we get the added benefit of an improved fidelity metric, an improved gamut metric, and a whole system that works together as one unit. Kevin, a couple more, I think, we might have time for.

OK, there are a number of questions about access to the spreadsheet, professional development credits, and those types of things. Let me just say that the IES will be issuing professional development credit – I think they're called lighting education units – for participation in this webinar. As far as accessing the spreadsheet, if you buy TM-30, you'll have access to it; there's no mechanism for getting the spreadsheet without purchasing TM-30. I think TM-30 is $35 for IES members and $50 for non-members.

In addition to purchasing it through the IES web store, in which case you'll be mailed a paper copy, you can also go to Techstreet and search for it there. It's available for immediate PDF download if you want to purchase it that way, and you also have the option of purchasing both the immediate PDF download and a mailed paper copy, if you prefer. Somebody else asked if it is possible to get the color evaluation samples in electronic format. They're listed in the IES spreadsheet in electronic format, so that's probably the best way to get them.

OK, we're now a few minutes over time. I want to thank you all for listening. I see this really as an initial step in the process; a lot of you are hearing about this for the first time, and I encourage a lot of discussion over the next couple of months. I really hope everyone gives this a shot and sees how well it can work. And I hope we eventually get to the point where the CIE is considering it – and they already are; this has been proposed to the CIE – so we can really make this an industry consensus and move an issue that's been simmering for at least 25 years a step forward.

This won't necessarily be the only step forward it ever takes. I imagine that in 10, 20, or 30 years there will be new measures, better computing power, and more research, and this will take another step forward. I thank all my committee members for their work; I think we did a nice job of pulling all the advances together, putting them into a cohesive document, and putting it out there as something standardized and formalized that the industry can now use, decide whether it provides a needed benefit, and go from there.

So with that, I'll say thank you again. Tune in next week for another webinar with more details on the technical, calculation end of things, and stop and see one of us at one of the upcoming live presentations. Thank you.

Thanks, everybody.