The artificial intelligence industry is growing at a rapid rate, with four times as many active AI startups today as there were in 2000. Over the same period, venture capital investment in AI has increased six-fold, alongside a 4.5x increase in the number of jobs requiring AI-related skills. This is great news for those of us in the AI industry, and for anyone with an interest in this corner of technology. With this growth, however, comes the question of ethics and regulation. Artificial intelligence is perhaps the most promising technological advancement since the internet, but now, and especially going forward, issues surrounding the regulation and proper use of AI will increasingly capture the attention of lawmakers and the public alike.
As with most emerging technologies, there is a tremendous amount of competition within the AI sector, and this competition isn't just between companies but between entire nations (think US vs China vs Russia). Given the capabilities of the technology, one cannot expect that every player will leverage it for the greater good, and this is where regulation comes in. Regulating artificial intelligence is incredibly complex and will likely require close coordination between governments and the private sector. Today we take a look at the latest developments surrounding regulation of the industry, and at what steps can be taken to ensure the technology is not misused. The general consensus is that we have reached the point where the question is no longer whether AI should be regulated, but how.
Take news, for example. "Fake news" in its current form is a very recent phenomenon. While propaganda has existed for centuries, recent developments suggest AI has become the latest tool for mass-producing and distributing false information in an attempt to sway public opinion on contentious issues. Then there is the discriminatory use of AI algorithms by the likes of Amazon, whose recruitment program was recently discovered to show bias against women. Instances of problematic AI usage are not hard to come by, and as a responsible member of the AI community, IMAGR encourages debate about how the industry should progress and what regulatory measures need to be implemented.
One of the complications of regulating AI is that if any one side voluntarily slows its R&D efforts, countries like China and Russia can be expected to step in and fill the void. Since the genie is already out of the bottle, as they say, our best course of action is not to be paranoid about the technology (the upside is significantly greater than the risks), but to devise measures that prevent its weaponized, intrusive, or otherwise unethical use. Fortunately, we are not the only ones with this view: prominent businesspeople such as Elon Musk and scientists including the late Stephen Hawking have warned of the need for greater regulation.
Precisely what form this regulation will take is a hotly debated topic, but a few common themes emerge. The likes of Olivia J. Erdelyi (School of Economic Disciplines, University of Siegen) and Judy Goldsmith (Department of Computer Science, University of Kentucky) have called for the creation of an "international AI regulatory agency", which would develop a unified framework for regulating AI technologies and inform the development of AI policies around the world. Elon Musk shares this view, albeit in more dramatic fashion, warning of the potential consequences of not acting soon enough. Musk has also provided funding for a non-profit called OpenAI, whose mission is to research better ways of managing the capabilities and uses of AI going forward.
All of this might sound quite scary and negative, but the reality is that with such a powerful (and ever-improving) tool, rules need to be set to ensure a positive outcome. Companies like IMAGR are actively developing bleeding-edge computer vision and machine learning technology, but thankfully our little niche of the AI market does not involve any overly contentious uses of it. We are strongly committed to enabling a more personalized shopping experience, and our use of AI is focused on computer algorithms that can recognize items with greater accuracy than even the human eye.
There are the doomsayers, the naysayers, the fear-mongers and the skeptics, but at the end of the day, if AI is used for good, it can truly revolutionize the world and eliminate many of the frustrations we experience on a day-to-day basis. Of course, as cliché as it may seem, with great power comes great responsibility, and it is important that we collectively ensure the proper use of AI technology.
Let us know in the comments below what you think of the AI revolution; we'd love to hear your thoughts. Do you share our cautious optimism, or are you wholly for or against the technology?