In the coming years, NCAPs and government bodies around the world will be deploying specifications and ratings for driver and cabin monitoring by automakers. Suppliers are developing the software and hardware systems to monitor what goes on inside vehicles.
In the first of our two-part special report, Auto Futures discovers how suppliers are learning how, when and why drivers become distracted, and which safety problems can be avoided.
How Do Cameras See What Drivers Are Doing?
Infrared cameras are used in Eyesight Technologies’ Driver Sense driver monitoring system.
Driver Sense tracks a range of the driver's facial features, then feeds that data into a logic layer to determine whether the driver is distracted, drowsy, looking at the road or has closed eyes, says Liat Rostock, vice president of marketing at Eyesight Technologies, a company that works with tier-one supplier Grupo Antolin and automakers.
The system can tell if someone is looking at a phone or smoking a cigarette, which can be very dangerous if the vehicle is carrying flammable or explosive material. The infrared camera can even see through sunglasses. To detect drowsiness, the algorithm relies on blink rate and eye openness.
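The blink-rate and eye-openness logic Rostock describes can be sketched roughly as follows. The PERCLOS-style metric and all thresholds here are illustrative assumptions, not Eyesight's actual algorithm:

```python
def drowsiness_score(eye_openness, blinks_per_minute,
                     closed_threshold=0.2, blink_rate_threshold=25):
    """Toy drowsiness estimate from eye-openness samples (0.0 closed .. 1.0 open)
    and blink rate. Thresholds are illustrative, not a supplier's real values."""
    # PERCLOS-style metric: fraction of samples where the eye is mostly closed
    perclos = sum(1 for o in eye_openness if o < closed_threshold) / len(eye_openness)
    # An elevated blink rate serves as a second, independent drowsiness cue
    fast_blinking = blinks_per_minute > blink_rate_threshold
    # Flag drowsiness if eyes are closed over 15% of the time, or closure is
    # borderline while blinking is unusually fast
    return perclos > 0.15 or (perclos > 0.08 and fast_blinking)

# Alert driver: eyes open in nearly every sample, normal blink rate
print(drowsiness_score([0.9, 0.8, 0.85, 0.9, 0.7], 12))   # False
# Drowsy driver: long eye closures dominate the sample window
print(drowsiness_score([0.1, 0.05, 0.6, 0.1, 0.15], 30))  # True
```

Production systems fuse many more cues (head pose, gaze stability, yawning) over longer time windows, but the basic idea of thresholding eye-closure statistics is the same.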
Rostock says Eyesight offers highly accurate detection and an ecosystem of camera and chipset partners. Eyesight originally worked on computer vision for consumer electronics, TVs and mobile phones, with gesture recognition and viewer analytics, but is now focused entirely on automotive.
An added benefit of facial recognition is that Driver Sense can identify pre-enrolled drivers to personalise the vehicle.
Looking ahead, Eyesight is working on occupancy monitoring, such as detecting a child in a seat, whether a person is wearing a seatbelt, or whether a bag has been left behind. It is also researching DUI detection.
Beyond Wheel Wobbles with Eye-Tracking Like NASA
The earliest distraction systems in vehicles detected wobbling wheels through lane assessment, says Henrik Lind, chief research officer at Smart Eye. Smart Eye has been working on eye-tracking research since 1999, first for NASA and the military, before moving into automotive.
Smart Eye provides data and hardware for automakers. The next level of monitoring, beyond the driver, is cabin sensing.
For 2022-24, the Euro NCAP requirements look at the face for distraction and drowsiness, plus detection of a child in a child seat, says Lind, who notes: “We are not only a driver monitoring company, we are an interior sensing company.”
Smart Eye can provide OEMs with both data and hardware, but stays hardware-flexible across multiple platforms and different levels of computing power.
“Camera location is important for drowsiness. It’s better to have the camera lower to see the eye opening to estimate drowsiness,” says Lind. Skin colour doesn’t matter: because the system is based on near-infrared, skin reflection is the same regardless of skin colour.
Looking ahead, Smart Eye is working on RGB-IR technology that captures both RGB and IR images, adds Lind.
Why Are More Types of Driver and Cabin Detection Needed?
You’ll be hearing more about child presence detection because it is on the Euro NCAP protocol updates list as well as the ‘HOT CARS’ Act, says Audrey Beaumarchais, senior account manager at Aptiv.
The Helping Overcome Trauma for Children Alone in Rear Seats (HOT CARS) Act requires the U.S. Department of Transportation (DOT) to set standards for vehicles to be equipped with an alert system that detects the presence of an occupant (e.g. a child or domestic animal) in a rear designated seating position after the vehicle engine is turned off.
Tracking what drivers and occupants are doing in the car can get expensive. Suppliers are offering ways to reduce costs.
Tier-one supplier Visteon integrated the driver monitoring camera into the instrument cluster, using the same processor that drives the cluster to reduce costs. “It is cost-saving because you don’t have a separate camera housing and cable from the camera,” says Babu Kirangi, senior product manager at Visteon.
“No solution is 100% perfect. However, it’s less expensive to do a single-sensor solution,” says Ilya Sloushch, CEO and co-founder of Caaresys. Its technology monitors the driver and passengers through biometrics, detecting the location, health condition and state of each occupant.
Caaresys uses a small RF radar that can be placed anywhere in the car. The system detects seat occupancy and monitors passengers’ respiration and heart rate; it essentially senses vital signs through radar, says Sloushch.
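Radar-based vital-sign sensing of this kind generally works by measuring the millimetre-scale chest motion encoded in the reflected signal and picking out its dominant frequency. A minimal sketch on synthetic data, not Caaresys’ actual algorithm:

```python
import numpy as np

def dominant_rate_bpm(displacement, sample_rate_hz):
    """Estimate a periodic rate (breaths or beats per minute) from a
    displacement signal by locating its strongest spectral peak."""
    samples = displacement - np.mean(displacement)    # remove DC offset
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    peak_hz = freqs[np.argmax(spectrum[1:]) + 1]      # skip the zero-frequency bin
    return peak_hz * 60.0

# Synthetic chest motion: 0.25 Hz breathing (15 breaths/min), sampled at 20 Hz
t = np.arange(0, 60, 1 / 20)
chest = 0.005 * np.sin(2 * np.pi * 0.25 * t)  # ~5 mm breathing displacement
print(round(dominant_rate_bpm(chest, 20)))  # 15
```

A real system must first recover the displacement from the radar phase, separate breathing from the much smaller heartbeat motion, and reject body movement and vehicle vibration, which is where the supplier-specific algorithms come in.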
Caaresys will deliver its product to a Japanese OEM next year. Because it monitors vital signs, it could detect a child in a seat for airbag features. It will be compatible with Asian and European NCAPs, says Sloushch, who notes that it works across different platforms and will sit in a very low price range.
“The good thing about radar is you don’t even know it is there,” says Sloushch. Some truckers put gum on their cameras so they cannot be monitored; radar can be hidden. Caaresys offers an integrated algorithm that operates in both static and driving modes.
A single-camera can be used to monitor many things.
“Carmakers can spend too much on all kinds of sensors inside the cabin: the pressure sensor in the seat, seat-belt tension sensors, dedicated driver monitoring cameras, ambient light sensors and anti-intruder sensors. We want to cut the cost by more than half by using just one low-cost sensor,” says Gil Dotan, founder and CMO of Guardian Optical Technologies.
Collecting Data From Thousands of Events
Dotan says the Guardian camera provides high-resolution video and depth maps to detect motion down to a 100,000th of an inch.
“If there is a beating heart inside the cabin, we know about it. Guardian can detect presence even without a line of sight. We can estimate the weight of the person, so that you can deploy the airbag according to the weight and posture of the person. Guardian can provide data for airbag optimisation and analysis of the grip on the steering wheel for lane-keeping assist and level 3 and 4 autonomous driving.
“What we are seeing from the RFIs is that the OEMs are asking for one system covering OMS (Occupant Monitoring System), DMS (Driver Monitoring System) and child presence detection. They want it all from a centre-mounted position in the overhead console above the rear-view mirror. For us that’s great; we’ve been developing this system for the past two years,” muses Dotan.
“It’s one thing to run a driver monitoring algorithm on a supercomputer. It’s another to get an algorithm to run on a one- or two-watt chip, for around $10, in a system-on-chip (SoC).” Seeing Machines partnered with Xilinx and programmed the chip with Seeing Machines’ IP; the hardware and software can be updated over-the-air (OTA).
“Seeing Machines licenses software and IP in chip format, and also software for other compute platforms such as Qualcomm,” says Nick DiFiore, SVP and GM Automotive at Seeing Machines.
He notes that Seeing Machines is uniquely positioned with a fleet division whose backend data centre collects hundreds of thousands of events. Its systems can also detect if the driver is confused by too much information on the instrument cluster.
Seeing Machines’ FOVIO DMS uses a near-infrared camera that measures and analyses head pose, eyelid movements and eye gaze, even through sunglasses.
This data is processed to interpret driver attention state, focus, drowsiness, and impairment levels in real-time.
(To be continued…)