dRISK is a start-up built to map out every edge case that can be experienced by autonomous vehicles (AVs). Edge cases are the countless high-risk scenarios which are individually unlikely, but together make up the bulk of the risk.
On this week’s Mobility Moments, we’re talking to dRISK’s CEO, Chess Stetson, about developing what his company believes will be the first true driver’s test for self-driving cars.
Describe your patented technology and what challenges it solves
The complexity of the myriad edge cases faced by AVs necessitates a more flexible approach to data than any conventional database can provide. To handle this complexity, dRISK has adopted knowledge graph technology that it has been developing for several years (dRISK holds four issued patents and one pending on this technology).
Knowledge graphs are a highly flexible data structure that can encode the full complexity of real-life events necessary to capture transportation edge cases – the huge number of scenarios which are individually unlikely but together comprise all of the vehicular risk. Into this knowledge graph go video data, full-text accident reports and expert-sourced data, which are then fused into a single, uniform and queryable taxonomy of edge cases.
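To make the idea concrete, here is a minimal sketch of a knowledge graph as typed nodes and labelled edges, showing how heterogeneous sources (video, accident reports, expert input) might be fused into one queryable structure. Every identifier and attribute below is hypothetical; dRISK’s actual patented graph technology is not public.

```python
# Toy knowledge graph: typed nodes plus (subject, relation, object) triples.
# All names are illustrative, not dRISK's real schema.

nodes = {
    "scenario:jaywalk_green": {"kind": "edge_case",       "source": "video"},
    "report:fatality_0042":   {"kind": "accident_report", "source": "full_text"},
    "actor:pedestrian":       {"kind": "actor",           "source": "expert"},
    "location:intersection":  {"kind": "location",        "source": "expert"},
}

# Edges link data fused from the different sources into one taxonomy.
edges = [
    ("scenario:jaywalk_green", "involves",    "actor:pedestrian"),
    ("scenario:jaywalk_green", "occurs_at",   "location:intersection"),
    ("report:fatality_0042",   "instance_of", "scenario:jaywalk_green"),
]

def query(relation, obj):
    """Return every subject linked to `obj` by `relation`."""
    return [s for s, r, o in edges if r == relation and o == obj]

# e.g. every edge case that involves a pedestrian:
print(query("involves", "actor:pedestrian"))  # ['scenario:jaywalk_green']
```

The flexibility comes from the schema-free triples: a new data source just contributes more nodes and edges, and the same query interface covers all of them.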
The knowledge graph can be simplified and rendered as a massive, readily interpretable map of edge cases usable by regulators.
And it likewise supports our primary offering – an API providing exhaustive test sequences that are applied to AVs in simulation to test and improve both safety and performance, and to recommend the next best test to perform in real life.
What is Ground Truth?
A huge amount of our baseline edge case data come from real-life accidents and near misses – the most important ground truth against which to test an autonomous vehicle. dRISK has a very large volume of such edge cases, and this resource grows daily. Importantly, the edge case data we collect are from edge-case-specific sources (intersections, on- and off-ramps, pedestrian crossings, fatality reports), which provides a much more targeted set of training and testing data than autonomous vehicle benchmark data sets, which are usually based on the ‘center cases’ experienced during normal driving.
One oft-cited criticism of testing in simulation is the ‘reality gap’ between simulated and real-life testing, but by keeping our edge cases based closely on real-life recordings, we narrow this gap enough to allow massively parallel testing of AVs that translates into real-life performance improvement.
How can your technology be utilised to help combat Covid-19?
Part of our data collection involves recording data from London’s JamCams – traffic cameras placed around London. We realised that we could use this data resource not only to monitor transportation edge cases occurring throughout London, but also to measure how well pedestrians maintained the 2 metre separation necessary to satisfy the social distancing guidelines. We can measure distances between pedestrians and highlight bottlenecks or areas of high risk, such as pedestrian crossings or ticket machines in stations, or produce a heat map of areas where customers and staff are unable to maintain the 2m social distancing gap, allowing municipalities or the retail sector to take action.
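The core distance check can be sketched in a few lines. This assumes pedestrian positions have already been detected and projected onto a ground plane in metres – the detection and camera-calibration steps of a real pipeline are omitted, and the coordinates below are made up for illustration.

```python
# Flag pedestrian pairs closer than a 2 m social distancing threshold.
# Positions are illustrative ground-plane coordinates in metres.
import math
from itertools import combinations

positions = {
    "p1": (0.0, 0.0),
    "p2": (1.5, 0.0),   # 1.5 m from p1: too close
    "p3": (5.0, 5.0),   # well separated from both
}

def too_close(points, threshold=2.0):
    """Return pairs of IDs whose separation is under `threshold` metres."""
    return [
        (a, b)
        for a, b in combinations(sorted(points), 2)
        if math.dist(points[a], points[b]) < threshold
    ]

print(too_close(positions))  # [('p1', 'p2')]
```

Aggregating such flagged pairs over time and location is what would drive a bottleneck heat map of the kind described above.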
Importantly, we never need to share any data that can be used to identify an individual, and even when communicating already-public data we do our best to make sure no individuals are recognisable.
Explain the UK project/consortium/work with TfL
The Innovate UK and CCAV-funded D-RISK consortium is composed of the dRISK company, dRISK.ai Limited, as well as Imperial College London’s Transport Systems Laboratory, Claytex Services Limited, DG Cities and Transport for London. The consortium is developing what we believe will be the first true driver’s test for self-driving cars. This involves monitoring edge cases based on real-life data from TfL’s publicly available JamCams, but this work is quite separate from dRISK.ai’s system for measuring social distancing.
What’s next for dRISK?
Our knowledge graph of edge cases grows daily, allowing us to provide AV developers with a comprehensive resource for testing, training and validating AVs. This is our main product offering, which we will be releasing later in July. We are equally excited to be able to play our role in the fight against Covid-19.
dRISK aggregates data from a huge range of sources, including London’s JamCams, and extracts driving scenarios to test autonomous vehicles. Here, the common practice of Londoners crossing behind and in front of cars with a green signal is recorded from a camera in Oxford Circus. This scenario would already be difficult for an autonomous vehicle trained to stop before a pedestrian in the roadway. But Covid-19 may be adding more complexity in the form of edge cases. In this video, pedestrians are doing a fair job of maintaining 2m separation from each other (note the white superimposed 2m radii around pedestrians at 00:10), but this may mean trailing pedestrians spend longer in the roadway than they otherwise might. Data via TfL OpenData.
What will urban mobility look like in 2030?
Although the ‘wave of hype’ around AVs has already passed, dRISK.ai believes that, with the use of our technology, AVs will start showing human-level responsiveness and awareness in the next couple of years. By 2030 we believe AVs will be widespread, and will be comfortable, safe and able to address systemic inaccessibility.
We may also start to see shared mobility, mobility as a service and other mobility modes drive a shift away from traditional ownership, helping to ease the congestion on our roads.