Adrian Holovaty recently wrote about the connection between Google Street View and driverless cars. Towards the end, he wondered whether the decision to record all sensor and location metadata along with the pictures came before the idea of using that data to train machine-learning algorithms, or vice versa. I’d apply Occam’s razor here: I’m fairly certain the idea of capturing the data came first.
But what’s more interesting is the new angle this suggests on what Project Glass enables: widespread lifelogging by humans (not just cars) to cloud-based storage. Add offline speech-to-text, face recognition, OCR, and the like, and you have a prosthetic memory.
A tiny step towards “self-driving” humanoids.