Friday, May 27, 2016

Augmented and Virtual Reality at Display Week: Game On!


In recent years, virtual reality has moved from science fiction movies, to academic research labs, to product development in the industry, and finally into the hands of consumers in the real world. A number of marquee devices have been launched in the market along with some compelling immersive applications. At the same time, some cool augmented reality devices and developer kits have been released as well. The pace of progress in both virtual and augmented reality technologies has been rapid.

So, in line with this fast-emerging trend in the ecosystem, SID decided to create a special track on Augmented and Virtual Reality for Display Week 2016. The rich lineup included a short course, a seminar, a number of invited and contributed presentations in the symposium, and demonstrations on the exhibit floor.

It is just what the display industry needed to put it on the verge of a massive rejuvenation!

Displays are the face of some of the most used electronic devices in our daily lives – such as the smartphone, tablet, laptop, monitor, and TV, among numerous other examples. As such, the health of the display industry rises and falls with the growth and saturation of these devices. Take the exciting phase of innovation in LCD TV technology as an example. The screen size went from 24 in. to 32 in. to 40 in. to 55 in. to 80 in. and above! The resolution went from 720p to full HD to QHD and beyond, whereas the frame rates went from 60 to 120 frames per second. And there were many more advances – contrast, brightness, color, etc. However, it gets to a point where further advances in display technology provide only small incremental benefits to the consumer. This often leads to a reduced demand for new features and a slowdown in the development.

Let’s now turn to virtual reality. It’s a completely different story at the moment. The displays on the best, state-of-the-art, VR devices today fall way short of the specifications required for truly immersive and responsive experiences, despite the dizzying pace of development. The pixel density needs to increase significantly and latencies must be reduced drastically, along with many other improvements such as increased field of view, reduced pixel persistence, higher frame-rates, etc. Besides the display, the systems also require integration of accurate sensing and tracking technologies. Augmented reality devices impose additional requirements.

So this is exciting for the researchers and engineers in the industry. Back to solving some difficult challenges, with the potential for big returns. Judging by the excellent quality of the papers, presentations, and exhibitions at Display Week, it’s obvious the display ecosystem is all geared up. Game on! – Achin Bhowmik

Game Changer: CLEARink Shows Video-Rate Reflective Display


There are good very-low-power monochrome reflective displays with slow redraw times and, with the introduction of E Ink's color display, there is now a good low-power color reflective display with very slow redraw times.

What we have not had is a reflective video-rate display, and for good reasons.  The only reflective technology that has proved to have both broad application and business feasibility has been electrophoretic (think E Ink), and electrophoretic displays operate by moving charged particles slowly through a significant fluid layer.  The redraw time cannot be fast.  (Well, it can be faster, but then the charged particles collide violently and tear each other apart, with unfortunate results.)

CLEARink has turned the conventional electrophoretic model on its head.  Very very briefly, the CLEARink display has a thin optical plate with lenslets on the inner surface.  In the white state, incoming light experiences total internal reflection (TIR) and returns to the viewer.  Reflectivity is an impressive 60%.

How does the display form a black pixel?  Lurking behind the optical plate in an "ink" are black particles that are moved toward or away from the plate.  When the particles touch the plate (that's a bit sloppy, but close enough for a blog), the TIR is defeated and light at that point is absorbed.

Clever, you say, but it's still electrophoresis, with a particle being moved through a fluid.  How can that produce video rate?  Because there's something I haven't mentioned yet.  The particle only has to move through 0.5 micron to be "touching" or "not touching" the plate, and that very small distance can be traversed rapidly.

All of this has been public for at least several weeks, but at Display Week, the company showed technology demonstrations in its suite.  To demonstrate the monochrome video-rate display, CLEARink engineers had purchased a Kobo eReader and simply replaced the E Ink imaging film with their own.  With the application of a video signal, the display showed very clean, 30 fps video with subjectively good contrast and that bright 60% reflectivity.  CEO Frank Christiaens took the opportunity to note that the technology is compatible with pretty much any backplane and requires no precision alignment.

Although my colleague Bob Raikes and I were extremely impressed with this demo, Christiaens didn't want us to neglect the fact that color via matrix color filter is part of the company's mid-term roadmap.  The demos were effective.  Using an MCF with an otherwise monochrome EPD has not been a satisfying approach in the past because too much of the reflected light was absorbed.  The difference here is that CLEARink starts out with 60% reflectivity rather than 40%.

So, said Christiaens, CLEARink will soon be providing something that has never before been available:  a reflective color video-rate display.

Walking back to the Moscone Center after our meeting, Raikes and I agreed that the term "game-changing" is used far too often, but that it legitimately applies here.  This fast EPD can enable new applications that cannot be realized by existing display technologies. -- Ken Werner



Wearable Displays Sport Classic Designs

There was a time when watches seemed to go out of fashion. Everyone knew the time by looking at their mobile phone screen. In the last couple of years, “connected watches” have become a wearable part of the mobile ecosystem, as their design has approached that of classic wristwatches. The intuitive round-faced, hand-dial watch interface has won out once again.


JDI Memory-in-Pixel reflective connected watch display. (Photos by Jyrki Kimmel.)

How has this development come about? Weren’t we satisfied with the function of the square-screen Android devices that appeared on the market about 5 years ago? Apparently not.

The wearables offering on the exhibition show floor featured many round-faced watch-sized displays. The Withings activity monitor, for instance, was featured in the E Ink booth. It sported a reflective e-paper display, in a round design.



Withings Go activity monitor with 1.1-in. circular, segmented e-paper screen.

Assuming that customer demand drives the adoption of consumer devices, once the technology to realize these is available, we can infer from the exhibits shown that there is a demand to minimize the bezel and dead space in a watch form factor display. Companies are striving to provide a bezelless design similar to what has become possible in mobile phone displays. This is much more difficult using a round shape. AUO showed in two symposium presentations how this can be done using a plastic substrate display. Instead of placing the driver chip on the face of the display, in a ledge, or using a TAB lead, they bend the flexible substrate itself to place the driver at the backside of the display. This way a bezel of 2.2 mm can be achieved, with clever gate driver placement and bringing the power lines into the active area from the opposite side of the display face.

Another direction in the development of wearables is to introduce a band form factor display that wraps around the user’s wrist. Canatu, the Finnish touch panel maker, had an E Ink based display device from Wove on its stand. 


Wove wrist device with Canatu integrated touch system.

The touch panel was assembled in an “on-screen” touch fashion to make a complete, integral structure without any separate outside encapsulation. The whole module thickness is only 0.162 mm, according to the press release.


So, it seems the technical capabilities of displays are maturing to satisfy user needs in wearable devices. With the round-faced and band-shaped form factors making it possible to wear a watch again, the “Internet of Designs” can begin. –Jyrki Kimmel for Information Display

Orbbec Shows New 3D Camera Technology in I-Zone

Orbbec Technology found its way through the rigorous committee selection process and into the I-Zone this year at Display Week. The Shenzhen, China-based company has 3D camera technology that Business Development Manager Agnes Zheng claims offers higher accuracy, lower power, and easier connectivity to more operating systems than the flagship Microsoft Kinect II. Zheng has a master's in mechanical engineering specializing in optical measurement. She was part of the group that spun out of a university research project with its IR laser sensor technology, which she claims can measure objects at 1 meter with accuracy levels of 1-3 mm in both size and distance to the object.

"The laser sensor we use has a narrow bandwidth laser light that does not get absorbed by dark surfaces. We designed it in-house and have it specially made for this product," Zheng said. They also added an improved bandwidth filter and improved algorithms, all contributing to the higher accuracy performance. 

The group supports not just Windows (the Kinect II is Windows-exclusive), but also Android and Unix platform development. It also will sell an OEM module for individual product design projects and integration into multiple devices, including LCD- and OLED-based TVs. Power is another advantage over the popular MS Kinect II, as the Orbbec 3D camera runs off a standard USB2 connection with a 1.8 W maximum draw, far lower than the MS 5.0 W requirement.

The retail version of the product is $150 and requires no power adapter, putting the Orbbec at parity with the Kinect II once the price of the external power adapter is added to the $99.

Microsoft moved to a time-of-flight (ToF) model in the Kinect II, while Orbbec uses a unique dot pattern the company designed using the structured-light approach. Zheng told us Orbbec has global patents on this technology.


Meanwhile, back in the I-Zone, users had a blast using motion detection to control the Sony HD flatscreen, playing real-time games and showing off just how accurate 3D gesture recognition can be. -- Steve Sechrist

Thursday, May 26, 2016

Nano Composite Materials Empower New Waveguide Breakthroughs

Display Week's Best in Show small exhibit winner DigiLens is looking to do no less than change everything about how AR/VR interfaces with humans. 


DigiLens had a killer demo that was one of the most popular at the show -- you got to check out their technology from the back of a BMW motorcycle. 

The group, founded by CTO Dr. Jonathan Waldern, says its holographic optics, based on the new composite material, solve the latency issues around eye tracking with the company's switchable Bragg grating approach (as opposed to a surface relief grating). It delivers a groundbreaking 40-degree field-of-view spec (versus 25 to 30 degrees using conventional methods), with upside potential for 50 degrees by the end of the year and up to 90 degrees in the future.


A specially equipped motorcycle helmet served as the interface. 

Waldern showed us his version of the future, in which a person's gaze is constantly being tracked by a non-intrusive AR or VR system, feeding that data to the system at very low latency. "This is early days; think Steve Jobs in the Xerox PARC lab, seeing the mouse interface for the first time."

The nano materials breakthrough "...allows development of holographic systems with an 8X higher index," Waldern said. The coming world of AR/VR (augmented and virtual reality) will gain an immeasurable boost from a low-latency visual system that both delivers the image and knows exactly where the user is looking. The display not only shows content but discerns user intent, augmenting and responding in a natural, hands-free mode. And just as the mouse empowered a whole new graphical user interface decades ago, reliable gaze and eye tracking technology has the potential to change everything yet again. -- Steve Sechrist

Internet of Wristbands


The Internet of Things is here, as experienced at the SAP Center in San Jose during Display Week at the pregame light show. Spectators were handed wristbands with LEDs, synchronized via a radio interface. The wristbands also had a motion sensor, so they would light up in Sharks teal when shaken. The audience participation helped the Sharks to a 5-2 win over the St. Louis Blues! –Jyrki Kimmel for Information Display



Spectators at a San Jose Sharks game create a light show with synchronized, color-changing wristbands.









Sensor Integration Drives Mobile Display Evolution

Whereas large, high-definition displays usually get people's attention, there is a diminutive class of displays that virtually everyone takes for granted. Mobile phones have interactive displays that a vast majority of SID show attendees use daily, without giving them a second thought. The quality of mobile displays today rivals that of TVs and in many parameters exceeds them. Some key trends in mobile displays were highlighted in the SID Symposium keynotes as well as in the introductory talk for the Market Focus Conference on Touch. 

Hiroyuki Oshima of Japan Display Inc. (JDI) gave the conference keynote on mobile displays, highlighting JDI’s strategy to concentrate on core technologies. One of these is an in-cell touch-based user interface. Other core technologies from JDI include LTPS and IPS, which support the touch functionality that will take on new capabilities. JDI sees the future growth for display business in new applications as the mobile phone market saturates.


A mobile display from JDI (photos by Jyrki Kimmel). 

Calvin Hsieh from IHS gave the lead presentation in the Market Conference on Touch. In the IHS forecast, in-cell touch for AMLCD and on-cell touch for AMOLED play a large role, shown in projected growth for these technologies. For touch in general as well, new applications drive the growth of the business.

What new applications are there for mobile displays and for touch technologies? A lot of these rely on sensors that are being integrated into the module itself. These sensors give the mobile display capability for multimodal user interaction, from fingerprint and proximity sensing to hover touch. These interaction modalities can then be leveraged over a wide range of application areas, even in automotive use.

Another trend is the proliferation of organic form factors in small and mobile displays. Sharp comes into this area from another direction, taking the form language from its automotive curve-edged displays and transforming mobile-sized displays from rectangular to round and oval-shaped objects. 


Sharp has been specializing in "non-traditional" display shapes.

These new form factors, combined with curved display integration, led by Samsung, open a way for totally new device classes, beyond the mobile phone and rectangular, passive information screens in cars.


Samsung is also experimenting with some interesting form factors. 

From the presentations in the conferences and modules shown in the exhibition booths, it seems that the predicted curved and flexible displays are still as far in the future as roadmaps depicted a few years ago. Prototypes of “rollable” screens are becoming ubiquitous but real products are still beyond the horizon. Until we get there, there will be many advancements in “classic” mobile display technologies that in turn can propagate to other application areas, making developments in mobile displays the vanguard of evolution in display technology. Sensor and system integration as well as touch user interface evolution will play a major role as constituent technologies in this development. –Jyrki Kimmel for Information Display


E Ink Shows a Color Electrophoretic Display that Pops


The E Ink Carta reflective electrophoretic display (EPD) is a near-perfect device for reading black text on a white background.  But there are applications, such as many kinds of signage, that demand vibrant color.  Until now, the only way to get "full" color from an EPD -- at least the only way that E Ink has shown us -- has been to place a matrix color filter in front of the monochrome display.

E Ink's full-color electrophoretic display with four colors of particle and no matrix color filter.  (Photo:  Ken Werner)

The problem with this approach for a reflective display is that the 40% of light reflected from a good EPD is brought down to 10-15% by the filter. This results in a limited gamut of rather dark, muddy colors.  E Ink showed the way forward a few years ago with a black, white, and red display, which managed to control particles of three different colors using differences in mobility and a cleverly designed controlling waveform.
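The arithmetic behind that loss is straightforward. As a rough sketch (the function name and the 90% filter-transmission figure are illustrative assumptions, not E Ink numbers): in the white state, each RGB subpixel of the filter returns only about one-third of the incident spectrum, and the filter dyes absorb part of that as well.

```python
# Rough model of why a matrix color filter (MCF) dims a reflective display:
# each RGB subpixel passes only ~1/3 of the visible spectrum, and the dye
# absorbs some of the remainder. filter_efficiency is an illustrative guess.

def white_reflectance_with_mcf(base_reflectance, filter_efficiency=0.9):
    """White-state reflectance of a reflective display seen through an RGB filter."""
    return base_reflectance * (1.0 / 3.0) * filter_efficiency

print(white_reflectance_with_mcf(0.40))  # ~0.12, i.e. in the 10-15% range cited
```

The same arithmetic explains why a brighter base reflectance makes filtered color so much more viable behind an MCF.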

At Display Week 2016, E Ink introduced an impressive expansion of this approach, in which particles of four different colors are included within each microcapsule, given different mobilities through different sizing, and driven with a pulsed controlling waveform that permits the creation of thousands of colors, as explained by E Ink's Giovanni Mancini.

How the E Ink display makes 8 essential colors.  (Graphic: E Ink; Photo:  Ken Werner)

The resulting display showed impressively bright and saturated colors and drew crowds.  When a new image was written, the display would flash several times.  It took about 10 seconds for a new image to build to its final colors.  One possible application Mancini mentioned is a color E Ink sign powered by photocells.

This is a significant development that will definitely expand the range of applications EPD can address. – Ken Werner


Wednesday, May 25, 2016

“Deep” Visual Understanding from “Deep” Learning

Earlier this year, in March, something very significant happened in the history of artificial intelligence. A computer program, AlphaGo, developed by Google DeepMind, defeated the South Korean professional Go player, Lee Sedol, on the home turf of the human player. The collective human ego went into shock. It was unthinkable that a machine could beat the best of human minds in this extremely complex strategy game! The world had hardly recovered from the defeat handed to Garry Kasparov, the formidable world chess champion, by a mere computer, the IBM Deep Blue, in 1997.

Well… we should have seen this coming. The processing power of computers has gone through astronomical advances, thanks to the relentless pursuit of Moore’s law, named after Gordon Moore, a co-founder of Intel. Basically, the transistor count in a processor chip has been doubling every two years over the past four decades! In parallel, algorithms for machine learning and artificial intelligence went through revolutionary leaps, with the invention and enhancement of the convolutional neural network approach. The combination of advances in these two fields is now enabling previously unthinkable computer vision and machine intelligence capabilities.
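The compounding behind that doubling is easy to underestimate. As a quick sketch (the function name is mine, purely for illustration), doubling every two years for four decades works out to roughly a million-fold increase:

```python
# Growth factor under Moore's law: transistor count doubles every two years.

def moores_law_factor(years, doubling_period_years=2):
    """Multiplicative growth after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

print(moores_law_factor(40))  # 2**20 = 1,048,576: about a million-fold in 40 years
```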

Appropriately, SID invited Professor Jitendra Malik of UC Berkeley, a pioneer in the field of computer vision, to present the Luncheon Keynote this year. He started his presentation by showing the picture below. 



Can a computer program be developed that understands the “semantics” of this scene? Facts such as: 1) the lady on the left is walking away with 3 bags, 2) the woman on the right is playing the accordion sitting on the bench by a bag, while 3) the guy in the middle is looking at the woman. Pushing it further, Professor Malik asked if it would then be possible to predict if the guy has the intention to put some money in the tip bag of the woman!

Most would probably say these are impossible tasks for a computer to accomplish. However, Professor Malik walked the audience through the advances in computer vision over the past few decades, and demonstrated how the advent of multi-layer neural network-based algorithms has resulted in unprecedented accuracies in semantic visual understanding that would make such tasks possible.


On to “deep” understanding of the visual world from a collection of pixels on an image, with the help of “deep” learning algorithms running on powerful modern computers! The future is intelligent, and the future is already here… --Achin Bhowmik

The Convergence of Human and Display Evolution


Here’s another look at Monday’s short course, “Augmented and Virtual Reality: Towards Life-like Immersive and Interactive Experiences,” given by Intel’s Achin Bhowmik (who also blogs on these pages). Session attendees were treated to a most unexpected discussion that began with the Cambrian Explosion, which Bhowmik explained led directly to the evolution of the human visual system and forms the basis of key issues those of us in the display industry need to consider today.



It was interesting to observe the overflow crowd of electrical and computer engineers suddenly confronted with the cold hard fact that biology, based on the distribution of photoreceptors in the human eye (yes, rods and cones), is driving key display requirements. Bhowmik explained that the human fovea consists only of cones (color receptors), while rods dominate the periphery, with far more (orders of magnitude more) rods than cones in that space.

Resolution and field of view (FOV) were also discussed, with the assertion that in HMD applications we should be talking about pixels per degree (PPD) rather than pixels per inch (PPI). Bhowmik said the human eye has an angular resolution of 1/60 of a degree, with each eye's horizontal FOV at 160 degrees and a vertical FOV of 175 degrees.
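Those figures imply a useful back-of-the-envelope target (a sketch; the helper functions are mine, using only the numbers quoted above):

```python
# How many pixels an HMD would need to match human visual acuity.
# Figures from the talk: acuity of 1/60 degree; ~160-degree horizontal FOV per eye.

def pixels_per_degree(angular_resolution_deg):
    """Pixel density (per degree) at which one pixel subtends the eye's resolution."""
    return 1.0 / angular_resolution_deg

def pixels_across_fov(fov_deg, ppd):
    """Total pixel count needed across a field of view at a given density."""
    return fov_deg * ppd

ppd = pixels_per_degree(1.0 / 60.0)   # ~60 pixels per degree
print(pixels_across_fov(160, ppd))    # ~9600 pixels per eye, horizontally
```

At roughly 9,600 acuity-matched horizontal pixels per eye, a full-FOV HMD is far beyond today's panels, which is exactly why PPD, not PPI, is the meaningful spec.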

What all this portends is that the direction of display development is finally moving beyond speeds and feeds. For significant development to continue, serious consideration needs to be given to how the eye sees images and particularly color. Maybe it’s time to take a refresher course in Bio 101. -- Steve Sechrist  ​




IGZO: More than Displays


Oxide semiconductors, alongside low-temperature polysilicon, have been the high-mobility alternative to amorphous silicon backplanes for a few years now. Flexible display manufacturers in particular seem to have embraced oxide semiconductors for future generations and new form factors. It’s interesting to note that oxide semiconductors have important uses other than displays, in large part due to their extremely low leakage currents.

I talked with Johan Bergquist (see photo below) of Semiconductor Energy Laboratory (SEL) about these uses and how they relate to SEL’s C-axis aligned crystalline indium gallium zinc oxide (CAAC IGZO) technology. Within SID, SEL is best known for its many advances in active backplane technologies. Most recently, SEL has demonstrated high-definition CAAC IGZO displays, both on glass and flexible substrates. The president of SEL, Dr. Shunpei Yamazaki, is recognized in many societies, including SID, for this work. Beyond displays, SEL has shown the capability of CAAC IGZO in representative circuits, such as system-on-glass processing units and multi-level memory circuits. In fact, large-scale integration (LSI) applications are some of the most promising uses for IGZO. 

“IGZO doesn’t have the short gate effect that silicon has,” said Bergquist. “This means that reducing design rules does not reduce the mobility, and SEL has in fact demonstrated 30 nm technology in CAAC IGZO.” CAAC IGZO is aligned in the transverse direction, while it is nanocrystalline in the lateral dimensions. Because the mobility is still below that of crystalline silicon, oxide semiconductors are best suited not for CPUs and other circuits requiring fast switching, but for circuits that can exploit the next-to-nonexistent leakage current, such as nonvolatile memories. These can be used, for instance, as register backup memories and frame buffers. Because the technology is analog in nature, multilevel memories are possible. “SEL’s goal is to make an 8-bit memory, but we are not there quite yet. 4-bit and 5-bit memories have been realized, though,” says Bergquist. “The trick is to use hybrid circuits where silicon and IGZO are used together.”


At the moment, the reliability of oxide semiconductors is not at the same level as that of silicon-based circuits, so memory chips based on oxide logic are not available. There is a lot of promise in IGZO circuits, especially integrating these with oxide semiconductor backplane displays, and future efforts with chip maker partners of SEL will likely see to it that product reliability will reach silicon levels. We may have to wait for a few years before this happens but until then, we can enjoy the benefits of IGZO in fantastic mobile-sized displays, and beyond these, moderate-sized high-pixel-density displays, both on glass and flexible substrates. -- Jyrki Kimmel for Information Display

Tuesday, May 24, 2016

Design Kudos to Merck/EMD

I have no idea what EMD Performance Materials (aka Merck KGaA, Darmstadt, Germany) is showing in its booth this year but I surely love how it looks. The company's redesigned graphics are a mix of art nouveau/sixties/seventies/something in a bright, bright palette. (See below.) The booth really catches the eye and draws you in. Tomorrow, I'm coming to see what's IN the booth, Merck, I promise.
-- Jenny Donelan


A Frog Gets Real

“How do you define real?” asked Achin Bhowmik, kicking off his Monday Seminar, “Augmented and Virtual Reality: Towards Life-Like Immersive and Interactive Experiences.” (He was quoting the character Morpheus from the iconic science fiction movie The Matrix – a pretty irresistible reference when you’re talking about AR and VR.) Bhowmik, who is with the Perceptual Computing Group at Intel, offered a 90-minute whirlwind tour through the past, present, and future of immersive computing, with some pretty entertaining examples of blurred lines between “real” and “virtual,” including a video clip of an actual frog trying to zap virtual bugs crawling on a smartphone screen. (The video is instructive as well as cute, particularly at the end. Check it out.)

Bhowmik also described a series of early VR implementations, the most intriguing of which is the Sensorama, developed by Morton Heilig in 1955. This all-mechanical, arcade-style device featured stereoscopic 3D imagery, a tilting seat, and – amazingly -- wind and aroma.  

The discussion then moved to human factors, including accommodation/convergence, depth perception, and various physical cues that can cause users to feel discomfort in immersive/interactive situations if not properly addressed by the systems. Word of the day: proprioception -- the perception of the position and movement of one's own limbs.


It was a lively, compelling, fact-filled presentation that left you feeling like you understood the potential of virtual reality in a new way. Not surprisingly, Bhowmik ended on a positive note, describing a possible brave new world of AR and VR, as well as a concept he describes as “mixed reality.”  Once again he quoted Morpheus: “Unfortunately, no one can be told what the Matrix is; you have to see it for yourself.”  Bhowmik urged listeners to do just that and experience some VR/AR applications for themselves. “This is just the beginning here,” he said. “If you haven’t tried VR yet, I urge you to try it.”

Monday, May 23, 2016

Welcome to Display Week 2016!

Check back here often for posts about the new products and technology being revealed at Display Week in San Francisco. ID Magazine has four roving reporters and two editors covering the show for you!

The editors
Information Display magazine