VAYAVISION, an Israeli startup working on perception systems for autonomous vehicles, has announced an $8 million seed funding round led by Viola Ventures, Mizmaa Ventures, and OurCrowd, with strategic investment from Mitsubishi UFJ Capital and LG Corp. FreightWaves caught up with Ronny Cohen, CEO and co-founder of VAYAVISION, to discuss perception-sensing technology and the company’s value proposition in the vertical.
“If you look at the first-generation autonomous driving systems, they are based on something called Object Fusion architecture. It means that you would be doing separate processing for each sensor on the vehicle – be it the cameras, LiDAR, or the RADAR,” said Cohen. “None of these sensors is good enough on its own, and even if we try to fuse the detections, we have a problem, as the individual detections are not reliable enough.”
The systems of today work by fusing the data outputs of all the sensors and then trying to construct decisions from the result. This approach is inherently riddled with inaccurate or misleading data points, which leads to unreliable descriptions of the environment around the vehicle. “There are too many misdetections and false alarms in sensors. What we suggest is raw data detection – give us the raw unprocessed data from all the sensors, and we will make an optimal decision based on all the data,” said Cohen.
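The difference Cohen describes can be illustrated with a toy sketch. The sensor readings and thresholds below are hypothetical, purely for illustration: object-level fusion has each sensor make its own detection decision before the results are combined, while raw-data fusion pools the evidence first and decides once.

```python
import numpy as np

# Hypothetical per-cell obstacle evidence from two sensors, each in [0, 1].
camera = np.array([0.40, 0.90, 0.10])
lidar = np.array([0.45, 0.20, 0.05])

THRESH = 0.5  # hypothetical per-sensor detection threshold

# Object-level fusion: each sensor thresholds on its own, then results are OR-ed.
# Weak-but-consistent evidence in cell 0 is discarded by both sensors.
object_fusion = (camera > THRESH) | (lidar > THRESH)

# Raw-data ("low-level") fusion: pool the evidence first, decide once.
# The same weak-but-consistent evidence now clears a single combined threshold.
raw_fusion = (camera + lidar) / 2 > 0.4
```

In this toy example, cell 0 is missed by object-level fusion because neither sensor alone is confident enough, while raw-data fusion detects it from the combined evidence.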
This technique, called “low-level fusion,” makes decisions more reliable. VAYAVISION uses upsampling, a proprietary process of fusing data to improve the resolution of 3D sensors such as LiDAR and RADAR. “You get raw data fusion with a unique technology that makes an ultra-high-resolution fusion image, and this makes our detection more reliable and safe,” said Cohen.
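As an illustration of the general idea behind depth upsampling (not VAYAVISION’s proprietary method), the sketch below fills a dense high-resolution grid from a handful of sparse LiDAR returns using naive nearest-sample interpolation; the function name and approach are illustrative assumptions.

```python
import numpy as np

def upsample_sparse_depth(sparse_depth, target_shape):
    """Fill a dense grid by copying each target pixel's nearest LiDAR sample.

    A naive stand-in for real upsampling: production systems would also
    weight samples by camera image content, not just spatial distance.
    sparse_depth is a 2D array with 0 where no LiDAR return exists.
    """
    ys, xs = np.nonzero(sparse_depth)  # coordinates of valid returns
    vals = sparse_depth[ys, xs]
    h, w = target_shape
    # scale the sample coordinates into the higher-resolution target grid
    sy = ys * h / sparse_depth.shape[0]
    sx = xs * w / sparse_depth.shape[1]
    dense = np.empty((h, w))
    for i in range(h):  # brute-force nearest-neighbor lookup per pixel
        for j in range(w):
            dense[i, j] = vals[np.argmin((sy - i) ** 2 + (sx - j) ** 2)]
    return dense
```

Doubling the resolution of a 4×4 sparse depth map this way yields an 8×8 dense map in which every pixel carries the depth of its closest measured point.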
VAYAVISION counts major OEMs and Tier 1 companies among its clients. Cohen explained that the startup approaches auto manufacturers and persuades them to share their most challenging perception-system problem for it to solve; a partnership then follows based on its success in resolving the issue.
“Sometimes, it’s about us using our technology to improve performance or resolution of their LiDAR and RADAR sensors. In other cases, we are being tested on use cases of street vision,” said Cohen. “For example, a kid lying on the road, or a door open in a car – these are unique use cases, and we show them how our system can handle small unclassified and unidentified obstacles on the road efficiently.”
Cohen contended that the underlying technology behind autonomous driving is more complicated than it is given credit for, especially when it comes to detecting vehicle behavior on the road. Beyond that, the high cost of perception hardware has been a perennial thorn in the side of OEMs, as the practicality of economies of scale still eludes manufacturers.
“With VAYAVISION’s upsampling technology, we can reduce the cost of systems, especially on the LiDARs and RADARs. Since we are more efficient, it reduces computation and thus leads to lower power consumption – which is critical for electric vehicles,” said Cohen. “We also have less communication on the CAN bus, so there would be lower costs for transferring the data. Altogether, our solution enables doing more at a lower cost of ownership for the entire autonomous system. I think that is a very important thing – it is not just about performance, but also about affordability and scalability for production.”
With funding secured, VAYAVISION looks to invest the money in productization – taking its core technology from prototype to a production-ready unit. “There are a lot of aspects to completing a fully functional product, with testing and verification, tuning to get better performance, and improving detection levels. And in that process, we have to meet the safety regulations as well,” said Cohen.