By Barry Santini
The glorious tradition of spectacle making began with the pride and skills of the original guild of crystal workers, whose members maintained a healthy respect for the highest standards of craftsmanship. But today, 700 years later, the arrival of online prescription eyewear direct to the consumer has produced a fault line dividing eyecare professionals into two opposing camps: Those who maintain that precise standards compliance represents the highest calling of the spectacle making artisan facing off against a more pragmatic group who sees eyewear made close enough to spec as, well... close enough. The fundamental divide between these two camps centers on the question of whether traditional pass-fail metrics—the tolerances used in adjudicating eyewear to a standard—are ensuring that consumers receive high quality eyewear well matched to their needs. With the traditional gatekeeping role of ECPs greatly diminished or even absent in direct-to-consumer prescription eyewear, the debate begins to center around whether any eyewear standards are still relevant in an online age.
Today, there’s no denying the tremendous variety, low prices and convenience of online shopping. As online eyewear purchases accelerate, the regulatory agencies of the state and federal governments are facing two new questions:
- Is the continued use of manufacturing standards unfairly impeding the development of a new channel of eyewear?
- Who is ultimately responsible for ensuring prescription eyewear is safe, will work as intended and meets the buyer’s expectations: the doctor, the optician or the eyewear maker?
To answer these important questions, let’s begin by reviewing the history and development of eyeglass manufacturing standards. Next, we’ll uncover the complex and more comprehensive backstory behind what makes a pair of glasses truly “first quality.” And along the way, it will be essential to keep an open mind on why slavish devotion to perfect standards compliance may be costing the parties involved—labs, dispensers and consumers—far more time, money and aggravation than is necessary.

PRIDE IN PRECISION
In the early 1900s, when the art of scientific lens design began to gain momentum, the desire for precision in all things related to eyeglasses also began to take hold. From frame measurements based on millimeters to trial lenses based on the concept of vertex diopters calibrated to a 1.530 index, to using the Sodium D (Nd) line as a reference wavelength standard, opticians were excited about and willing to embrace finer techniques and methods designed to deliver the best and most accurate specs. In the exam room, doctors began to note that patients couldn’t reliably distinguish lens choices smaller than 0.12D. This became known as the “just noticeable difference” in ophthalmic optics, and the eighth diopter unit went on to become the standard and smallest unit required for precision in ophthalmic optics during the next century. In 1956, the first ANSI Z80.1 committee was born, and its responsibility was to create the first uniform and codified standard for use in manufacturing and testing eyeglasses.

CAN, COULD OR SHOULD
The people who created the first ANSI standard were technically focused and accomplished theoreticians in the world of physical and ophthalmic optics. As such, they were primarily concerned with outlining a standard that delivered the finest “first quality” lenses that could be made, with less regard for what could be reasonably manufactured using the prevailing art—what can be made—or consideration whether such precise lenses would deliver a true and beneficial value to the end user, i.e., what should be made.
Looking back over 50 years—the first ANSI lens standard was released in 1964—we see the initial lens power tolerance was set at 0.06D... a fantastical and fanatically-strict manufacturing standard from today’s perspective. With the accepted unit of visual sensitivity set at 0.12D, setting a tolerance to half this value was and still is considered integral to best practices in manufacturing. Further, these power and surface standards were entirely practical because the majority of lenses were made from high quality crown glass material. The perspective missing from these early standards was whether such accuracy and precision was actually beneficial to the visual needs of the eyeglass wearer.

EVOLVING STANDARDS
As the first Z80 standard revision was being prepared in the early ’70s, lenses made from PPG’s CR-39 monomer were beginning to penetrate the market. These plastic lenses were a major advance in ophthalmic lens materials, being both lightweight and having an Abbe value very close to glass. But the ultimate surface precision possible for CR-39 was not as fine. The 1972 standard recognized this and created a dual material standard, doubling the power tolerance allowed for plastic lenses from 0.06D to 0.12D. This is early evidence that standards makers were beginning to make allowances for what can be made. Without this recognition, the development of the CR-39 lens market may have been far different than what we have seen today.
For the third revision of the ANSI standard, the dual standard was eliminated. The larger 0.12D power tolerance was now applied to all materials because there was no clear evidence from plastic lens wearers indicating increased dissatisfaction with their acuity compared to glass lens wearers. And so, starting with this 1979 standard, integrating a less theoretically-based perspective about what should be made began. It is interesting to note that just a few years earlier, the U.S. Army began to study whether soldiers would notice differences in the utility of their vision if the prescribed Rx was refracted in units of 0.50D rather than 0.25D. Their report concluded that using 0.50D prescription increments did not produce “notable complaint or discomfort” from the troops. Over time, other studies would reveal that the human eye could tolerate much greater prescription deviations than 0.12D without marked complaint. As the prescribing model moved to a 0.25D refracting unit and manufacturing adopted a 0.12D tolerance, we see how real world experience was increasingly important in the evolution of ANSI standards.
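The tolerance arithmetic discussed above is simple enough to sketch in code. The snippet below is illustrative only: the flat 0.12D figure reflects the discussion here, while the actual ANSI Z80.1 tables scale tolerances with the power of the strongest meridian.

```python
def within_power_tolerance(ordered_d, measured_d, tolerance_d=0.12):
    """Return True if a measured power is within tolerance of the order.

    Illustrative only: real ANSI Z80.1 tolerances vary with the power of
    the strongest meridian; the flat 0.12D default matches the tolerance
    discussed in the text.
    """
    return abs(measured_d - ordered_d) <= tolerance_d

# A -2.00D order measured at -2.10D passes a 0.12D tolerance...
print(within_power_tolerance(-2.00, -2.10))                      # True
# ...but would have failed the original 1964 glass tolerance of 0.06D.
print(within_power_tolerance(-2.00, -2.10, tolerance_d=0.06))    # False
```

The same 0.10D deviation passes or fails depending entirely on which era's tolerance is applied, which is the crux of the can/could/should debate.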
DON’T ACT LIKE A LAB
The accuracy of eyewear is benchmarked by checking compliance with current ANSI standards. And when the eyewear in question is found to be in compliance with these standards, it is generally branded as correct, accurate and properly made. But using words like accurate and correct to evaluate prescription eyewear is, in my humble opinion, wrong. There, I said it. Wrong is a strong word. But now that I have your attention, let’s step back and look at the whole ANSI standard compliance thing from a different perspective.
Your lab has little choice in following ANSI, when you really stop to think about it. All they essentially have in front of them is the values and measurements from your work order, the lens design selected and the frame involved. They must apply and follow ANSI standards because they have no other information to use in judging its overall quality. But you, my dear dispenser, do.
As the dispenser in charge of caring for the client, you also know the following about the patient:
- Their old Rx
- Their old lens’ design
- Their lens design’s position
- Their old base curve
- Their old lens material
- Their Rx progression: How the Rx has been changing over time.
- Their Rx delta: How much change is there between what they are wearing and what they will be receiving.
- Their expectations and tolerances: What you have learned they adapt well to, as well as what they don’t.
The lab knows none of this. You, the optician, have far more essential information available than the lab does with which to judge whether or not any deviations from compliance with standards will impact the efficacy of the new eyewear.
When another optical judges an outside pair of glasses using ANSI, they’re acting like a lab. When an outside doc judges your eyewear using ANSI, they’re acting like a lab. Not that there is anything wrong with labs. But they have much less information to work with when it comes to judging whether or not a pair of spectacles is best for a patient.
So please, don’t act like a lab. The resources you have to draw on to evaluate or discuss what goes into defining the quality of eyewear are greater than theirs.
ECPs are familiar with the varying needs and expectations of individual clients: One wants the sharpest vision, while another will accept a slight blur at near for the convenience, cost and acceptable utility of using over-the-counter “cheaters.” Who is to say what the ideal Rx or acceptable balance is for any individual? In this regard, patient compliance has long been a useful metric. But a pair that may seem good enough for some tasks may not be adequate for others, and issues can manifest for the wearer in different ways:
1) Sensitivity to cleanliness—A pair of glasses that seems to function adequately may have its utility impacted by how clean it is, i.e., the contrast of the resulting image. A poor or deteriorated surface or coating or poor care habits can manifest itself as a complaint with the ability to keep the lenses clean.
2) Sensitivity to fit—When glasses don’t fit right, they can slide down, shift position or end up crooked due to a failing hinge or improper initial fit. This situation can transform an otherwise satisfied patient into a fitting monster—one who arrives at your office at least once a week for yet another “tune-up.”
3) Sensitivity to position—Progressive and aspheric lenses are very revealing of even small inaccuracies in prescription and fit. A patient who needs to tilt their head up, down or side-to-side may indicate an Rx over-plussed at distance, under-plussed at near or lacking in proper correction for astigmatism. This last situation creates “hot spots”—where the correction’s astigmatism error and the residual astigmatism error of the progressive surface either complement or negatively interact with each other.
The three most important visual characteristics that define eyewear quality in the consumer’s eyes are whether their glasses deliver the expected balance of acuity, comfort and utility. Sometimes it’s a zero-sum game: Increasing the distance Rx for a myope for better night driving can rob near utility. Result? Unexpected discomfort. Increasing the cylinder power or tweaking the axis of an astigmatic may improve overall acuity, but some individuals will experience an accompanying side effect of perceptual distortion. Result? Unexpected discomfort. The art of the eyecare professional is discovering where the optimal balance is for each and every individual. Sometimes it can take more than one time at bat to find out where the proper balance lies.
THE LENSOMETER: FRIEND OR FOE?
Everyone worth their salt who claims to be an optician should know how to properly focus and use a lensometer in order to evaluate sphere, cylinder and add, as well as prism and its first cousin, optical center position. But today there are two different types of lensometers available: One based on a tried and true principle first patented by the American Optical Corporation in 1918 and the other a product of the late 20th century electronics and laser revolution—the digital lens meter. More on that later. First, let’s take a look at why the lensometer was invented in the first place.
THE PRINCIPAL OF PLANES
Without getting too technical, as lens design evolved through the use of scientific ray tracing, theoreticians defined reference points in order to make lens design calculations easier to do in multi-element systems. The definition of a lens’ principal planes was one of these. In flat or biconvex ophthalmic lenses, the principal plane was used as a reference from which to measure the lens’ focal length and therefore its dioptric power. The problem with this approach became clear when lenses were thick or used a more meniscus or curved form. With lenses of these shapes, the principal planes end up located outside of and away from either lens surface. In addition, plus lenses have their planes displaced off the convex side, while minus lenses have theirs displaced from the concave side. All this added up to the effective dioptric power of a lens varying with its shape, sign, power and thickness... not a great recipe for precision in vision with eyeglasses. And so Bausch & Lomb created the concept of vertex power, defining it as the effective dioptric power of a lens measured from its back vertex, or rear surface. Vertex power eliminated the ambiguity that base curve and thickness introduced into specifying power. And by defining vertex power, the development of the lensometer with its “vertex stop” was possible.
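The difference between nominal and vertex power can be made concrete with the standard thick-lens relation. The sketch below uses illustrative surface powers and thickness (assumed values, not figures from the article) to show how two lenses with the same nominal power (F1 + F2) deliver different back vertex powers as their form changes:

```python
def back_vertex_power(f1, f2, t_m, n):
    """Back vertex power in diopters.

    f1, f2 : front and back surface powers (D)
    t_m    : center thickness in meters
    n      : refractive index of the lens material
    Standard relation: BVP = F1 / (1 - (t/n) * F1) + F2
    """
    return f1 / (1 - (t_m / n) * f1) + f2

# Two lenses with the same +5.00D nominal power (F1 + F2), in a
# shallow and a steep (meniscus) form, both 3 mm thick crown glass:
shallow = back_vertex_power(f1=6.00, f2=-1.00, t_m=0.003, n=1.523)
steep = back_vertex_power(f1=10.00, f2=-5.00, t_m=0.003, n=1.523)
print(round(shallow, 3), round(steep, 3))  # ~5.072 vs ~5.201
```

Because the lensometer's vertex stop rests against the back surface, it reads exactly this back vertex power, regardless of the lens form or thickness that produced it.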
THE POLITICS OF THE REFERENCE WAVELENGTH
Part of determining the dioptric strength of a lens is specifying which wavelength will be used as a reference standard for measuring its light bending power. Remembering the visual spectrum, with red light at 780 nm refracting less than blue light at 410 nm, the question is which wavelength to use. In the 19th century, this subject was discussed, and it was decided to settle upon the Sodium D line, Nd, approximately 587 nm. This yellow wavelength was chosen for two reasons: It exists near the photopic sensitivity of the eye, and it was an easily available and repeatable source—an important consideration when formulating a standard intended to be used around the world.
In the 20th century, the International Organization for Standardization (ISO) adopted a definition of the diopter using the wavelength of the mercury e line, Hg, 546 nm. The ISO members maintained that this green wavelength was more optimal because it more precisely matched the measured photopic sensitivity of the eye. The problems inherent in having two dioptric standards using different reference wavelengths should be obvious. At the time, the United States was under severe pressure to adopt the ISO standard, but it successfully argued that the extreme costs and inconvenience of retooling to the ISO standard would create an unnecessary hardship without clear and relevant industry or consumer benefits.
But recent research by Dr. Larry Thibos of Indiana University has shown that the human eye effectively focuses the retinal image around a monochromatic wavelength of 572 nm. With a historical perspective, we can see that the U.S. standard of 587 nm is actually closer to the 572 nm the eye effectively uses.
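The practical stakes of the reference-wavelength debate can be estimated from the thin-lens relation F ∝ (n - 1): for fixed surface curvatures, labeled power scales with (n - 1) at the chosen reference wavelength. The sketch below uses assumed crown-glass indices (roughly 1.523 at the d line and 1.525 at the e line; these are illustrative values, not figures from the article):

```python
def power_at_other_wavelength(f_ref, n_ref, n_other):
    """Rescale thin-lens power for a different reference index.

    Thin-lens relation: F = (n - 1) * (1/r1 - 1/r2), so for fixed
    curvatures the power scales with (n - 1).
    """
    return f_ref * (n_other - 1) / (n_ref - 1)

# Illustrative crown-glass indices (assumed): n_d ~ 1.523, n_e ~ 1.525
f_d = 10.00  # a strong lens, power referenced to the d line (D)
f_e = power_at_other_wavelength(f_d, n_ref=1.523, n_other=1.525)
print(round(f_e - f_d, 3))  # shift of a few hundredths of a diopter
```

Even for a strong 10.00D lens, the shift between reference wavelengths amounts to only a few hundredths of a diopter, which helps explain the U.S. argument that the cost of retooling outweighed any practical benefit.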
CALIBRATION AND FINGERPRINT
As any ECP worth their salt will turn to their favorite lens meter when it’s time to tolerance a pair of glasses for ANSI compliance, let’s take a quick look at the two types of instruments in broad use in labs and offices today. With the standard analogue lab vertometer, it has always been important that the operator understand the proper way of adjusting the eyepiece to his or her eye in order to obtain the most accurate power readings. What is often overlooked is whether the power drum of the instrument is properly calibrated. Any operator can address both potential sources of error using the following method: Set the power drum to 0.00D, rack out the eyepiece to help eliminate accommodative errors, and turn it slowly inward until the mires just come into focus. This method corrects for operator accommodation, eye error and instrument power drum error all at once.
Now let’s turn to the new generation of automated/digital lens meters, where a number of advantages are present:
- Precision can be refined to 0.01D.
- Progressive powers can be more precisely verified.
- The Abbe value of the lens material can be specified, increasing the accuracy over manual models.
- The readings found can be easily printed out for reference.
Remember that precision and accuracy are two different things, and that the expected accuracy of analogue lensmeters is 0.06D while that of automated models is 0.03D—assuming they are optimally calibrated. The only real downside to automated models is the electronics, which are both a primary asset and a latent liability. Testing has shown that each electronic instrument can exhibit a unique electronic fingerprint, which is a soft way of saying that higher powered lenses may read differently between different machines. Nothing to really worry about, but operators should be aware of the issue when measuring higher powered lenses.
LIMITS OF LENSOMETRY
When lenses were simple optical affairs, with spherical surfaces and easy-to-find optical centers, the 6 mm aperture of a typical vertometer was all the area needed to check sphere, cylinder and axis power to acceptable standards of precision and accuracy.
But in today’s world of free-form lenses, this is no longer sufficient. Although manufacturers routinely supply “compensated values” for users of either automated or analogue instruments to use when checking their lenses, there is still no effective way for ECPs to actually verify that expected global surface correction has been delivered for that lens’ design, material and Rx. Free-form lenses are effectively asking ECPs to “take their word” that the lenses in front of them will deliver the promised performance.
COMPENSATED RXS: WHEN THE MATH IS RIGHT BUT THE RX IS NOT
Before any ECP sits down at their favorite lens meter to decide if the job in front of them is out of tolerance, they should remember that the Rx they’re verifying with may not actually be the best Rx for that client’s acuity, comfort and utility. The question facing today’s ECP is “How do you really know the real efficacy of any given Rx?”
The answer is: You don’t. Especially in the case of free-form progressive lenses made using the full suite of position of wear (POW) measurements, if the Rx was prioritized for either acuity, comfort or utility, and lens designers weighted the design for another,
and the lab’s processing resulted in delivered powers favoring yet another, it’s almost impossible for the ECP to say what’s acceptable without far more wearer information (see Sidebar “Don’t Act like a Lab” above) than is contained in a normal work order for spectacles. Therefore, it is always advisable to check every pair with an eye toward the larger perspective of what is relevant to that wearer’s needs and expectations.
The dilemma of deciding when to remake a pair of glasses for failing one or more standards and when to pass a pair because “close enough is close enough” is the constant challenge of the eyecare professional. Situations such as the well-meaning influence of family or friends, less than ideal frame choices, large Rx changes without accompanying complaints, long delivery times due to multiple spoilages or the lack of a complete wearer history may justify passing an out-of-spec pair when the prospects seem high that it would have to be remade anyway. Some ECPs feel that no buyer spending their hard-earned cash should be treated like a guinea pig, but eyewear satisfaction is a multifaceted and complicated recipe, and sometimes the informed judgment of an experienced eyecare professional trumps what a uniform standard would require.
For example, a new Rx walks in. The current prescription format, in the absence of prescribed prism, does not clearly communicate whether minor phorias or other binocular vision abnormalities have been assessed or noted. In addition, every examiner brings a certain amount of bias to the Rxs they write, such as the orthogonalizing of astigmatism axes or reduction of astigmatism power in order to reduce perceptual side effects. Why fail a 1.00D cylinder for being off 5 degrees if you’re not sure that the axis is an accurate target value to begin with? The final decision to pass a pair of glasses or return them to the lab is finely nuanced, and no fault can be charged for trying to make the best decision possible as long as it is fully informed.
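The 5-degree question can be put in numbers using the standard obliquely-crossed-cylinders result: a cylinder of power C mounted Δθ away from its intended axis leaves a residual cylinder of approximately 2·C·sin(Δθ). A quick sketch:

```python
import math

def residual_cylinder(cyl_d, axis_error_deg):
    """Approximate residual (uncorrected) cylinder left by an axis error.

    Standard obliquely-crossed-cylinders result: a cylinder of power C
    rotated d degrees off its intended axis leaves roughly 2*C*sin(d)
    of residual cylinder at an oblique axis.
    """
    return 2 * cyl_d * math.sin(math.radians(axis_error_deg))

# A 1.00D cylinder mounted 5 degrees off axis:
print(round(residual_cylinder(1.00, 5), 3))  # ~0.17D residual cylinder
```

A 1.00D cylinder set 5 degrees off leaves roughly 0.17D of residual cylinder, on the order of the 0.12D just-noticeable difference, which is why axis tolerances tighten as cylinder power grows.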
THE ECP AS STANDARD BEARER
In creating any standard, four goals should be met:
- It must encompass all the products in question.
- It must include the variables specified.
- It must be repeatable.
- It must be contemporary and relevant to the industry.
However, there are often multiple parties with multiple interests that must be addressed. For example, for the lab, reducing the number of control parameters to as few as possible in a standard for manufacturing eyeglasses helps labs keep costs down and reduces delays due to spoilage. For the ECP, having standards that ensure that the eyeglasses dispensed are as close as possible to the intent of the Rx prescribed means ECPs should have happy patients. For the wearer, having an eyeglass standard that ensures their glasses are safe, effective, work as consumers expect and don’t cost more than they have to translates to a satisfactory purchase experience. For each party, it should be clear that having a standard is essential and important. And with the lab, dispenser and consumer all having skin in the game, creating a standard that satisfies all should be easy... but it’s not. Why? Because each party’s interests are often not in complete alignment. This is the best explanation of why strict devotion to standards compliance may not always be in any one party’s best interest.
Perhaps the best way for ECPs and labs to ensure the consumer receives the best quality eyewear begins by stepping back from the lensometer long enough to refocus your perspective. It is, after all, your perspective that really matters most. Today, success in optical requires a bit less myopia about standards compliance and a lot more farsightedness about seeing the big picture of what really matters to consumers: Getting a smart balance of vision and value for every dollar spent on eyewear. That is arguably the most desirable standard of all.
THE FUTURE OF FRAME STANDARDS
Today, frame standards exist to ensure the following:
1) Safety—to ensure that consumers receive frames made from materials that are safe to place on their skin, free of known allergens, carcinogens and poisons, and not flammable.
2) Efficacy—to ensure the wearer's lenses will be retained in daily use and under normal conditions, i.e., they won't easily fall out while worn or when left on the dash of a hot car.
3) Specifications—to ensure that any party fabricating lenses, whether they be prescription or plano, have a common language and vocabulary that defines the dimensional characteristics of frames in a way that eliminates confusion.
4) Logistics—to ensure any entity ordering frames, particularly in today's global manufacturing environment, receives a product that matches both their specifications and expectations.
World frame standards, which today are overseen by a joint committee of ANSI and ISO members, include not only frame parameters and vocabulary but also the testing methodology used to verify compliance. The overseeing authorities include organizations within the ophthalmic community and international customs authorities, who test compliance using the procedures outlined in the standards or send suspect products to contract labs certified to do compliance testing.
Frame standards also have to be ready to react to the industry's rush to rediscover traditional natural frame materials, such as wood, shell and bone, and also be ready to handle new materials like recycled vinyl and sintered nylon and titanium found in the exploding world of 3D printed/additive manufacturing technology.
3D HEAD SCANNING AND THE FUTURE OF FRAME STANDARDS
One of the most exciting developments in frame manufacturing today is the arrival of 3D frame printing. Although some companies are exploring this "additive manufacturing" technology as an alternative to traditional frame manufacturing using milling, some future-focused companies are combining the new technology of high-resolution head scanning with both 3D printing and automated milling to produce the world's first frames that are custom made to the wearer's individual facial, nasal, mastoid and head contours. And by marrying this state of the art frame fit with the blossoming market of consumer-designed frame styles, materials and surface patterns, the era of the wearer customized frame is fast becoming a reality. Say goodbye to the endless and fatiguing search for the “perfect” frame, a process consumers have likened to searching for a needle in a haystack, along with the disappointment of discovering the desired style was not available in their chosen color or would not fit them optimally.
Next up, they'll marry virtual try-on (VTO) technology to a high resolution, 3D head scan—yielding the complete suite of measurements, including monocular PDs, pupil heights and the frame's exact position of wear. Now the stage is set for completely predictive frame fit—an advancement that will impact the optical field like no other technology ever has.
And of course, there will be a new frame standard issued to ensure this technology delivers on its promise.
THE NEXT GENERATION OF ADVANCED REFRACTIVE TECHNIQUES
The basics of classic refraction techniques have not essentially changed in over a hundred years. But objective and subjective refractive techniques are about to undergo a historic revolution. Newer analytic technologies and metrics, such as Point Spread Function (PSF), Root Mean Square (RMS) analysis, Higher Order Aberration (HOA) analysis and Corneal Waveform Technology (CWT), are being combined within a new state-of-the-art examination recipe that promises to deliver superior refractive results and help better predict harmful eye conditions. It may also help eyecare professionals understand why some eyeglass Rxs prove perfectly satisfactory while others, unexpectedly, do not. Let's take a quick look at each of these new technologies:
- Point Spread Function (PSF): This is a metric common to the testing of high-precision optics. It describes the difference between the amount of light in the Airy disc of an optical system whose performance is limited only by diffraction (a "diffraction-limited" system) and the actual amount of light in the diffraction disc of a point source imaged by the optical system in question. For ophthalmic purposes, this is the retinal image of the human eye. It refers to the bleed or "spread" of light energy from the Airy disc's theoretical maximum of approximately 84 percent into one or more of the outer diffraction rings. Less energy in the disc means less resolution, less contrast or both.
- Root Mean Square (RMS): This is an averaging metric that can measure either variations in the surface curvature or the refractive power of an individual element. It is somewhat analogous to describing the overall refractive power of a sphero-cylindrical surface in spherical equivalent terms.
- Higher Order Aberrations (HOAs): These are the more complex descriptions of how an optical system can suffer image degradation beyond simple prism, sphere and cylinder power errors. The higher the order, the more complex the mathematical terms necessary to describe the resultant image defocus.
- Corneal Waveform Technology (CWT): This is an advanced method of corneal pachymetry. It uses ultrasound to precisely measure corneal thickness, creating an overall corneal profile that can help predict whether someone is predisposed to, or in the beginning stages of, glaucoma or keratoconus.
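Of the four metrics above, RMS is the easiest to make concrete. When a wavefront is expanded in orthonormal Zernike polynomials, its variance is simply the sum of the squared coefficients, so the RMS wavefront error is their root-sum-square. The coefficient values below are made up for illustration:

```python
import math

def rms_wavefront_error(zernike_coeffs_um):
    """RMS wavefront error from normalized Zernike coefficients.

    For an orthonormal Zernike expansion, the wavefront variance is the
    sum of the squared coefficients, so RMS = sqrt(sum of c_i^2).
    """
    return math.sqrt(sum(c * c for c in zernike_coeffs_um))

# Illustrative higher-order coefficients in micrometers (assumed values),
# e.g., coma, trefoil and spherical aberration terms:
hoa = [0.12, -0.05, 0.03]
print(round(rms_wavefront_error(hoa), 3))
```

Collapsing many aberration terms into one number is what makes RMS "somewhat analogous" to a spherical equivalent: convenient for comparison, but blind to which aberrations dominate.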
DISCERNING THE VISUAL DIFFERENCE
As the citizens of Edwin Abbott's late 19th century novel Flatland learned, just because you can't understand or measure a phenomenon doesn't mean it doesn't exist or matter. In the wheelhouse of today's ECP, measuring the HOA profile of a patient is not yet a commonplace occurrence. Yet research has revealed that individuals with lower higher-order aberration profiles—regardless of refractive error—are far more sensitive to prescription changes and lens designs that may impact their HOA profile in a negative manner. Today we can see how eyes with low HOAs are far more sensitive at discerning visual differences than those with greater HOAs. This might help explain why so many emmetropes and near emmetropes have difficulty adapting to the distance and peripheral areas of their progressive lenses, including state-of-the-art free-form designs.
Contributing editor Barry Santini is a New York State licensed optician based in Seaford, N.Y. He thanks Dick Whitney of Zeiss, Nick Mileti of Luxottica and Scott Balestreri of Bad Ass Optical Lab for their help with this article.