The scientists tested their network by training it to judge depths in different parts of an image, similar to how a self-driving car might calculate its proximity to a pedestrian or another vehicle. The times the network was least certain were indeed the times it got the depths wrong. The system compared well to existing setups while also being able to estimate its own uncertainty.

This self-awareness feature has been dubbed "deep evidential regression," and it bases its confidence level on the quality of the data it has to work with. It improves on previous safeguards by carrying out its analysis without excessive computing demands.

The scientists behind the development say it could save lives, as a system's level of confidence can be the difference between an autonomous vehicle deciding "it's all clear to proceed through the intersection" and concluding "it's probably clear, so stop just in case." The advance could enhance safety and efficiency in AI-assisted decision-making.
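To give a sense of the idea, here is a minimal sketch in plain Python. It assumes the formulation from the deep evidential regression paper, in which the network predicts the four parameters of a Normal-Inverse-Gamma prior, (gamma, nu, alpha, beta), instead of a single value; closed-form moments of that prior then yield both the prediction and two kinds of uncertainty. The function name and the specific input numbers below are illustrative, not from the article.

```python
def evidential_summary(gamma: float, nu: float, alpha: float, beta: float):
    """Return (prediction, aleatoric, epistemic) from Normal-Inverse-Gamma
    parameters predicted by the network.

    Requires nu > 0 and alpha > 1 so that both variances are finite.
    """
    prediction = gamma                      # E[mu]: the depth estimate itself
    aleatoric = beta / (alpha - 1)          # E[sigma^2]: noise inherent in the data
    epistemic = beta / (nu * (alpha - 1))   # Var[mu]: the model's own uncertainty
    return prediction, aleatoric, epistemic

# Low "evidence" (small nu and alpha) yields large epistemic uncertainty,
# which is the signal a safety system would act on.
confident = evidential_summary(gamma=4.2, nu=50.0, alpha=10.0, beta=1.0)
uncertain = evidential_summary(gamma=4.2, nu=0.5, alpha=1.5, beta=1.0)
print(confident)
print(uncertain)
```

Note that both calls produce the same depth estimate (4.2); only the accompanying uncertainty differs, which is exactly what lets a downstream controller decide between "proceed" and "stop just in case."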