ARTIFICIAL INTELLIGENCE HAS A LONG WAY TO GO
Google’s algorithms perform billions of operations every day. By crawling web pages and interpreting queries, they deliver results across Search, Maps, Gmail and many other services. The company employs thousands of engineers, among the brightest minds in the world, to develop these algorithms, fine-tuning millions of lines of code to make them ever more accurate.
But given how often Google’s services are used each day (the latest research suggests it processes 3.5 billion searches daily), things are bound to go wrong from time to time.
Earlier this week, Google had an unwelcome reminder of this potential fragility. It emerged that the automatic tagging algorithm in Google’s new photo storage app had assigned the “gorilla” tag to pictures of two black friends. When the issue was pointed out, Google apologised profusely, attempted to fix the algorithm, and finally removed the tag altogether.
This uncomfortable episode points to a wider problem with automated information processing and artificial intelligence. In this case the software was not only wrong but highly offensive. While automated services will undoubtedly make our lives easier, they are not human, and problems like this will continue to occur. Culture, ethics and offensive content are not things machines judge well, at least not yet.
Google has often found itself in trouble over similar issues. Earlier this year it had to apologise to the White House when a racist search term directed Google Maps users to the seat of American government. Campaigners have long accused it of allowing inappropriate material onto its YouTube Kids app.
The company is known for releasing software as early as possible, perhaps before others might deem it ready. This is useful for experimentation, but incidents like the “gorilla” episode show its risks. For now, perhaps the best policy is to apologise and react as quickly as possible, which, to its credit, Google does. But the episode shows how far artificial intelligence still has to go.
In the coming years this will only become a larger point of debate, with Google playing a major part in software and artificial intelligence design. Should a self-driving car prioritise its passengers’ safety over pedestrians? To what extent should unmanned attack drones risk civilian casualties to destroy a terrorist base? Perhaps by then these algorithms will be intelligent enough to be trusted with such decisions. But if photo-tagging software cannot even correctly identify people, that day seems a long way off.