Data traffic on mobile networks will reach 367 exabytes — that’s 367×10^18 bytes or 367 billion gigabytes — in 2020, up from 44 exabytes in 2015, according to Cisco Systems’ recently released Visual Networking Index Global Mobile Data Traffic Forecast, 2015-2020.
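A quick sanity check on those figures: the implied compound annual growth rate follows directly from the two traffic numbers in the forecast. This is a throwaway sketch using only the values quoted above; the ~53%-per-year figure is derived here, not quoted from Cisco.

```python
# Back-of-the-envelope check of the forecast numbers quoted in the article.
EB_2015 = 44    # global mobile data traffic in 2015, exabytes
EB_2020 = 367   # forecast for 2020, exabytes
YEARS = 5

# Compound annual growth rate implied by the two endpoints.
cagr = (EB_2020 / EB_2015) ** (1 / YEARS) - 1
print(f"Implied CAGR: {cagr:.1%}")           # roughly 53% per year
print(f"2020 traffic in bytes: {EB_2020 * 10**18:.3e}")  # ~3.67e+20 bytes
```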
A lot of that is video, which accounted for 55% of all mobile data traffic in 2015, and will account for 75% in 2020, predicts Cisco. But significant contributions to growth are expected from automotive infotainment and mobile networks to support the Internet of Things, including connected cars, said a Cisco Global Technology Policy VP in a company blog on February 3.
Auto manufacturers recognize that in-car electronic technology is now a stronger driver of sales than traditional measures of automotive performance like torque, power, straight-line acceleration, and cornering ability.
Automotive connectivity is here now, of course, in such common systems as GPS navigation and radar detectors that use GPS to recognize fixed sources of X-band radar emissions, such as automatic door openers, and not count them as “threats.”
But there is much more to come, with the integration of in-car systems, vehicle-to-vehicle communication, and vehicle-to-mobile-network communication culminating in autonomous vehicles. Although we are beginning to talk (a lot) about autonomous vehicles, non-specialists may not have given much thought to the various levels of vehicle automation. Fortunately, our friends at the U.S. National Highway Traffic Safety Administration (NHTSA) have.
The NHTSA defines five levels of vehicle automation.
Level 0: No automation — the driver is in complete and sole control of the vehicle at all times.
Level 1: Function-specific automation — one or more specific control functions are automated, such as electronic stability control or pre-charged brakes.
Level 2: Combined function automation — at least two primary control functions work in unison, such as adaptive cruise control combined with lane centering.
Level 3: Limited self-driving automation — the driver can cede full control under certain conditions, but must be available to take over with a reasonable transition time.
Level 4: Full self-driving automation — the vehicle performs all safety-critical driving functions for the entire trip.
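The taxonomy can be captured as a simple data structure. This is a minimal sketch (the enum names paraphrase the NHTSA definitions; `driver_must_monitor` is my own illustrative helper, not anything the agency specifies):

```python
from enum import IntEnum

class NhtsaLevel(IntEnum):
    """NHTSA's five levels of vehicle automation (2013 policy statement)."""
    NO_AUTOMATION = 0         # driver in complete control at all times
    FUNCTION_SPECIFIC = 1     # one function automated, e.g. stability control
    COMBINED_FUNCTION = 2     # two+ functions in unison, e.g. ACC + lane centering
    LIMITED_SELF_DRIVING = 3  # car drives itself; driver must be ready to take over
    FULL_SELF_DRIVING = 4     # car handles the entire trip unaided

def driver_must_monitor(level: NhtsaLevel) -> bool:
    # Levels 0-2 still require continuous driver attention; Level 3 only
    # requires availability, which is precisely the awkward middle ground.
    return level <= NhtsaLevel.COMBINED_FUNCTION
```

The helper makes the Level 3 hand-back problem concrete: it is the only level where the answer flips from “watch the road constantly” to “be ready, eventually.”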
At TU Automotive’s one-day “Consumer Telematics Show,” held in Las Vegas during CES 2016, the members of a panel entitled “Making Autonomous Driving a Reality” opined that Level 2 is an impressive offering that makes commutes much less tiring. Level 4 is much harder to do, but Level 3 is tricky because it requires the driver to switch from complete uninvolvement to taking full control, perhaps rather quickly (although the NHTSA definition specifies that a reasonable amount of time should be provided for the driver to do this). People tend not to be good at this sort of transition, particularly when they don’t have the ongoing situational awareness a driver — at least an attentive driver — would traditionally have. One panel member suggested that this Level 3 scenario may be unworkable and that we will have to leap from Level 2 to Level 4.
A final comment from a panel member was “Don’t forget the social component of the human-car interaction.” As the car does more, the driver is likely to anthropomorphize the car. Man-machine communication would be optimized if the car could respond in kind. “Hello, Chevy….” “Hello, Ken. My but you look sporty today.” What kind of holographic avatar should Chevy have to optimize this communication? For that matter, what kind of avatar should I have? I can’t have Chevy seeing me the way I really am.
Posted by Ken Werner, March 29, 2016 2:25 PM
About Ken Werner
Kenneth I. Werner is the founder and Principal of Nutmeg Consultants, which specializes in the display industry, display technology, display manufacturing, and display applications. He serves as Marketing Consultant for Tannas Electronic Displays (Orange, California) and Senior Analyst for Insight Media. He is a founding co-editor of and regular contributor to Display Daily, and is a regular contributor to HDTVexpert.com and HDTV Magazine. He was the Editor of Information Display Magazine from 1987 to 2005.