Not to be cliché… but we all knew this was coming. If Google’s AI doesn’t worry you, you’re not paying attention, or you just don’t understand the potential implications.
At present, Google is working on ‘Project Maven’, a Department of Defense AI effort that improves the accuracy of drone strikes. The claim that Google’s part is ‘non-offensive’ is just blatant, obvious ‘spin’. Or what I like to call ‘a lie’.
From the article…
“Maven is a well-publicised Department of Defense project and Google is working on one part of it – specifically scoped to be for non-offensive purposes and using open-source object recognition software available to any Google Cloud customer.
“The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.
“Any military use of machine learning naturally raises valid concerns. We’re actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies.”
If I may interpret: Google’s AI recognizes objects and faces in photographs…
If you’ve ever clicked on a Google ‘captcha’ that asked you to identify a bus, a car, a street sign, or the like in a photograph, you were actively helping teach the artificial-intelligence machine how to do its job.
Maybe you can imagine how that kind of object recognition might be useful to a flying bomb trying to identify its target.
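To make the point concrete, here is a toy sketch of the basic idea: every human captcha click produces a labelled example, and a pile of labelled examples is exactly what a classifier learns from. This is a deliberately simple nearest-centroid model on made-up feature vectors, not Google’s actual system or anything like it.

```python
# Toy illustration: human-labelled examples (like captcha clicks)
# become training data for a classifier. The features and labels
# below are invented for the example.

def train(labelled_examples):
    """Average the feature vectors for each label (its 'centroid')."""
    sums, counts = {}, {}
    for features, label in labelled_examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s]
            for label, s in sums.items()}

def classify(model, features):
    """Pick the label whose centroid is closest to the new example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Each captcha click contributes one (features, label) pair.
examples = [
    ([0.9, 0.1], "bus"), ([0.8, 0.2], "bus"),
    ([0.1, 0.9], "street sign"), ([0.2, 0.8], "street sign"),
]
model = train(examples)
print(classify(model, [0.85, 0.15]))  # → bus
```

The more clicks, the better the centroids, and the better the machine gets at telling a bus from a street sign, with no human in the loop at classification time.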
Here’s the link…