Artificial Intelligence: The power is in the data
June 20, 2018 | GLOBE Forum
Whether you see Artificial Intelligence (AI) as an ethical dilemma or a force for social good, tech giants continue to invest in it heavily, and it is expected to play a major role in our future. Those who control the data will be the ones who make the most industry headway, says Milind Tambe, Professor of Computer Science at the University of Southern California and Co-Director of the Center for Artificial Intelligence in Society.
What is AI?
There is no official definition. That said, AI could be defined as “the science of making computers do things that require intelligence when done by humans”.
We encounter it daily. When we start typing what we’re looking for into a search engine such as Google, it automatically completes the string and makes suggestions – that’s AI. So are Netflix recommendations and voice recognition in smartphones.
The example of AI that’s getting a lot of attention right now is the driverless car. The computer system in these vehicles uses sensor data collected from the driving scene, such as nearby cars, pedestrians, and road signs, to make driving “decisions.” But there are challenges in predicting human behaviour. Tragically, the first reported fatal crash involving a self-driving vehicle and a pedestrian happened in the United States (U.S.) earlier this year. The incident highlights the complexity of the interaction between humans and robot vehicles.
What are the trends telling us about the future of AI?
It’s very clear AI is going to be a massive force in the market. All the tech giants are adopting AI in major ways and investing in it heavily, and global spending on AI will be huge. Different countries are setting up AI strategies, and there’s going to be big competition for dominance in the AI space. Currently, intellectual leadership is in the U.S. But we’re seeing events such as the International Joint Conference on Artificial Intelligence (IJCAI) where there are more papers from China than the U.S., which 10 years ago was unthinkable.
AI has the potential to serve as a force for social good, helping us to address many complex societal problems we face today, such as those in public health, public safety and security, education, and conservation, to name just a few. With regard to public safety and security, AI can help optimize the use of limited security resources. It can help generate novel, unpredictable patrol patterns that humans find very difficult to generate. We have conducted such work with the U.S. Coast Guard, for example.
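As an illustration of the randomization idea (a sketch of ours, not the deployed Coast Guard system, whose published versions solve a game-theoretic model): once a solver has produced a “mixed strategy” over patrol routes, each day’s patrol is drawn at random from it, so an observer cannot predict which route will be covered. The route names and probabilities below are invented.

```python
import random

# Hypothetical mixed strategy over patrol routes (values sum to 1).
# In the real work these probabilities come from a game-theoretic solver;
# here they are made up for illustration.
mixed_strategy = {
    "route_A": 0.5,
    "route_B": 0.3,
    "route_C": 0.2,
}

def sample_patrol(strategy, rng=None):
    """Draw one patrol route at random according to the mixed strategy,
    so adversaries observing past patrols cannot predict the next one."""
    rng = rng or random.Random()
    routes = list(strategy)
    weights = [strategy[r] for r in routes]
    return rng.choices(routes, weights=weights, k=1)[0]

# Over many days, the empirical route frequencies approach the strategy.
rng = random.Random(42)
counts = {r: 0 for r in mixed_strategy}
for _ in range(10_000):
    counts[sample_patrol(mixed_strategy, rng)] += 1
```

The key property is that each individual day is unpredictable even though the long-run coverage frequencies are exactly the ones the solver chose.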
What are the key causes for concern?
There are three main concerns about AI. One is the potential for automation to take over much of the work currently done by humans; another is losing control over AI; and the third relates to the ethics surrounding AI’s use, such as loss of privacy and people’s data being used in nefarious ways.
These concerns are legitimate, and they need to be addressed. But at the same time, it’s important for us to continue to push development of AI in ways that will benefit humanity. We need to have a balance and make sure we emphasize the societal benefits of AI.
At GLOBE Forum you discussed the challenges AI needs to overcome to be successful – specifically, improving AI literacy so that end users and regulators understand and trust the data they are given. Can you dive deeper, and suggest how we overcome this communication challenge?
Being able to explain the decisions an AI system has made to an end user is very important. In many cases, we are working with vulnerable communities and populations, and we need to ensure they will not be harmed. Even if the answers provided by AI are correct, they may be rejected if the reasoning behind them is not transparent.
For example, at GLOBE I spoke about using AI to identify poaching hot spots in vast nature reserves. That work helps determine where rangers patrol within parks that are thousands and thousands of square kilometres in size. With only a few hundred rangers on hand, we need to be able to explain why AI has identified these hot spots, otherwise we can’t justify sending the rangers.
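A minimal sketch of what such a hot-spot model might look like (our illustration, not the actual system; the feature names and weights below are invented): each grid cell of the park gets a risk score built from a few human-readable features, so a flagged cell can be explained to rangers feature by feature.

```python
import math

# Invented, human-readable features and weights for one grid cell of a park.
FEATURES = ("dist_to_road_km", "animal_density", "past_snares_found")
WEIGHTS = {"dist_to_road_km": -0.8, "animal_density": 1.2, "past_snares_found": 1.5}
BIAS = -1.0

def risk_score(cell):
    """Logistic score in (0, 1): higher means greater predicted poaching risk."""
    z = BIAS + sum(WEIGHTS[f] * cell[f] for f in FEATURES)
    return 1 / (1 + math.exp(-z))

def explain(cell):
    """Each feature's signed contribution to the score, largest first --
    a simple, transparent account of *why* the cell was flagged."""
    contribs = {f: WEIGHTS[f] * cell[f] for f in FEATURES}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

# A hypothetical high-risk cell: dense wildlife, close to a road, snares found before.
cell = {"dist_to_road_km": 0.5, "animal_density": 2.0, "past_snares_found": 1.0}
score = risk_score(cell)
top_factor = explain(cell)[0][0]
```

Because every contribution is a named, signed quantity, a ranger can see at a glance which factor drove the flag, which is exactly the kind of transparency the interview argues for.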
Once the logic is properly explained, the end user may be able to analyze and identify errors in the process. It could be that, unwittingly, bias has crept into the results. Exposing bias becomes possible once we understand how and why we arrived at the solution. So, there are many reasons why being able to explain what we did is very valuable.
There are different ways to do this. One approach is to design a system that uses only easily explainable AI tools, avoiding methods that are difficult to understand. This builds the explanation into the system, but it can degrade performance. The other approach is to let the AI system develop a solution itself, and then explain the reasoning post hoc through other processes. This approach suits cases where performance is crucial.
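The second, post-hoc approach can be illustrated with a toy example in the spirit of surrogate-model methods such as LIME (our sketch, not the author’s system): sample an opaque model around a point of interest and fit a simple linear model that locally explains its behaviour.

```python
import random

def black_box(x):
    # Stands in for a complex, hard-to-interpret model: y = 3x + 0.5x^2.
    return 3.0 * x + 0.5 * x * x

def fit_linear_surrogate(model, x0, radius=0.1, n=200, seed=0):
    """Sample the model around x0 and fit y ~ a*x + b by least squares,
    yielding a locally faithful, human-readable explanation."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n)]
    ys = [model(x) for x in xs]
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

slope, intercept = fit_linear_surrogate(black_box, x0=2.0)
# Near x0 = 2 the model behaves like "about 5 units of output per unit
# of input" (the true local slope is 3 + x0 = 5), which a person can read
# even though the original model's internals were never inspected.
```

The trade-off the interview describes is visible here: the surrogate is only locally faithful, but it lets the high-performing opaque model stay in place.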
How can businesses best leverage AI research?
In many cases the best data sets today are already in the hands of industry. Yes, the tools and techniques are being developed by academics, but we don’t have access to the same level of data sets that industry has. Partnerships between industry and academia can be very mutually beneficial because more research can be done overall.
Championing partnerships that harness academia is crucial because, really, the power is in the data. Whoever controls the data can make huge industry headway. More complete data sets can also help to expose weaknesses in academic research.
It’s 2050…how has AI changed the world?
That’s a difficult and interesting question. We’ve attempted to predict to 2030 in our AI100 report.
The report is from an AI100 study group and is titled, “AI 2030 in an Urban North American Setting.” In the report, driverless cars were our prime example of how AI will impact the urban North American population in terms of lifestyle changes. We were thinking it would take at least 15 years before this became a reality, but in the last couple of years research has progressed so rapidly that the timeline may be much shorter.
It will also depend on where investments end up going. I certainly hope a lot will go towards AI that is focused on social good. Recently, there was a story in the LA Times where our mayor called homelessness in Los Angeles “a humanitarian crisis.” There’s a lot AI could contribute to benefit these low-resource communities, in part by addressing some of the major challenges we have in public health, housing, public safety and security, educational drop-out rates, and so on. I’m hopeful investment will go towards assisting these populations, who are already not doing as well and who may suffer further from job losses caused by AI.
That’s a more utopian view of where AI could take us – so I will end there.