With worldwide spending on smart city technologies projected to reach $135bn by 2021, AI initiatives will need to be carefully thought through by governments and local authorities to avoid potential disasters in the long term.
In his book Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, Robert Geraci explains that apocalyptic AI defines a genre of popular science books and essays written by researchers in robotics and AI. These researchers, some of them well known for their technical work, extrapolate from current research trends to claim that intelligent machines will populate the earth in the first half of the twenty-first century – and that by the century's end, machines may be the only form of intelligent life on the planet.
These authors promise that intelligent machines may well create a paradise for humanity in the short term, but that in the long term humans will have to upload their minds into machine bodies to remain a viable life form.
When you consider the development of technology, the global political unrest and the chilling warnings about global warming, perhaps this isn’t such a far-fetched scenario.
The reality, however, is that people – citizens and consumers – only care about AI if and when it affects their daily lives. If it makes shopping easier, great; if it takes away jobs, not so great. AI is rarely thought of as one single concept, because it has so many different applications – and so citizens tend to see it not as a threat to their lives but as an enabler. The best demonstration of this is smart city initiatives, in which AI plays a huge part. According to IDC's latest smart cities spending guide, worldwide spending on technologies like AI that make cities 'smart' is estimated to have reached $80bn in 2018 and is projected to grow to $135bn by 2021.
At GITEX 2018 in Dubai, Renil Paramel, senior partner at Strategy of Things, says his organisation has seen many strong use cases of AI in smart city initiatives. In one example, AI is used to count the cars in a multi-level parking lot – but more importantly, to monitor air quality and CO2 levels, which are badly affected by cars circling in search of a space.
“It’s really bad for the air, so what they’ve done is put in technology that, when a certain number of cars come in, triggers a machine that cleans out the air – this wouldn’t have been possible without AI,” he says.
Another example is the use of computer vision systems that were initially used to detect humans in hospitality queues – to make the customer experience more efficient. Now, according to Paramel, it is being used for public safety reasons.
“If someone is on the run or if an incident has taken place, computer vision can be used to provide the information to a department in a timely fashion,” he states.
From a health perspective, citizens currently rely heavily on doctors but in the future this could all be based on AI, enabled by a number of smart city initiatives.
Apple data scientist Mohammad Shokoohi-Yekta suggests that in the years to come, when we tell future generations that we had to go to doctors to monitor our health, “people will laugh”.
Andrey Belozerov, strategy and innovation advisor to the CIO of Moscow, agrees, explaining that his city has trained neural networks on 2,000 images to find cancer symptoms at an early stage, with fewer errors than a human.
A city that makes fuller use of AI, in other words, could help save more lives.
Belozerov told delegates on the same panel session that the Russian capital, home to 12 million people, also needed AI to operate more efficiently.
“For example, we have 170,000 cameras in the streets, and having 10,000 people monitor those cameras wouldn’t have been efficient. Recently, during the World Cup, we used AI to detect people who were banned from coming into the games, and we found 20 of them automatically by facial recognition,” he explains.
While all of the members of the panel session were adamant that AI for cities was an opportunity rather than an apocalypse scenario, Belozerov suggests that the way that cities use AI has to be very clear, and there also needs to be a lot more research into the long-term effects of using the technology.
“Where AI can harm society and people is a big field for researchers to investigate,” he says.
Amazon Web Services (AWS) CEO Andy Jassy admits that criminals or corporations could use the company's machine learning algorithms for bad purposes, though anyone who violates the company's terms loses access to those services. One of the big issues at the moment, he says, is the reliability of AI for different tasks.
“If you take computer vision, we give very strong guidance and recommendations on how people should use the technology. If they use it for entertainment apps it is fine to have only 80% level of confidence in [the accuracy] of these, but if you’re using it for law enforcement, we’re strongly saying don’t use it unless you have over 99% confidence.
"If you use it for law enforcement, you should just use it as one input when making decisions that human beings won’t be making,” he says.
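Jassy's guidance amounts to gating matches on a per-use-case confidence bar. A minimal, hypothetical sketch of that idea – this is not an AWS API; the function name, thresholds as code, and data shapes are all illustrative, with only the 80% and 99% figures taken from his remarks:

```python
# Illustrative only: filter computer-vision match candidates by the
# confidence bar appropriate to the use case, per the guidance quoted above.

ENTERTAINMENT_THRESHOLD = 0.80    # "fine" for entertainment apps
LAW_ENFORCEMENT_THRESHOLD = 0.99  # minimum before a match should even be surfaced

def usable_matches(matches, use_case):
    """Keep only candidates that clear the confidence bar for this use case."""
    threshold = (LAW_ENFORCEMENT_THRESHOLD if use_case == "law_enforcement"
                 else ENTERTAINMENT_THRESHOLD)
    return [m for m in matches if m["confidence"] >= threshold]

matches = [
    {"subject_id": "A", "confidence": 0.95},
    {"subject_id": "B", "confidence": 0.995},
]
print(usable_matches(matches, "entertainment"))    # both clear the 80% bar
print(usable_matches(matches, "law_enforcement"))  # only B clears 99%
```

Even then, as Jassy stresses, a surviving match should be one input to a human decision, not a verdict in itself.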
Criminal organisations may look to take advantage of sophisticated tools that the likes of AWS produce and try to make money out of government-funded smart city initiatives. Jassy says it is down to governments to control exactly how these machine learning tools should and should not be used.
Bias and security
Another issue with AI confidence is that of bias. Professor Terence Tse, who teaches on the Master in Digital Transformation Management & Leadership at ESCP Europe, suggests that men would have a higher probability of being falsely flagged as potential murderers, simply because men statistically commit murder more often than women. Similar issues could arise with race and in other criminal contexts.
“Any prejudice and bias that are contained in the data set will train machines to manifest these issues. Worse yet, bias can be introduced unconsciously or undetected,” he says.
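Tse's point can be made concrete with a deliberately crude, hypothetical sketch (the numbers are invented): a "model" that simply learns base rates from skewed historical data will reproduce that skew, scoring one group as riskier before any evidence about the individual is seen.

```python
# Hypothetical illustration of data-set bias: learned per-group "risk scores"
# mirror whatever imbalance the training records contain.
from collections import Counter

# Skewed historical records (invented numbers): offences logged per group.
records = ["men"] * 90 + ["women"] * 10

counts = Counter(records)
total = sum(counts.values())
learned_rate = {group: n / total for group, n in counts.items()}

# These rates become prior "risk scores" applied to new individuals,
# so a man is scored 9x riskier before anything is known about him.
print(learned_rate["men"])    # 0.9
print(learned_rate["women"])  # 0.1
```

The skew needs no malicious intent: it arrives silently with the data, which is exactly why Tse warns that bias can be introduced unconsciously or go undetected.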
Then there are the cyber security fears of criminals hacking into existing smart city systems. This could mean gaining access to the controls for smart parking lots (making it easier to steal cars), cutting off the water supply to a specific area, pushing energy consumption to dangerous levels, or switching city lights off at night and causing a range of accidents. The public sector globally has already suffered a large number of incidents, including data breaches and hacks, often down to a lack of investment in security and out-of-date systems. Whether this would change with new systems in the long term depends on how seriously a smart city initiative is taken by the powers that be. Unlike corporations, governments go through big changes on a regular basis, with new leaders undermining previous projects – leaving smart city initiatives potentially vulnerable in the long term.
In other words, smart city initiatives that use artificial intelligence need to be properly thought through by governments and local authorities – including long-term sustainability, and ways to ensure that initiatives are taken seriously regardless of a change in power. More importantly, security, algorithmic bias, confidence and accuracy, and rules and regulation on AI all need to be addressed in effective national or global AI strategies.
Otherwise, the apocalyptic scenario that Geraci describes in his book may well become a reality.