Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation
IEEE Computer Society Team
The need for proper analysis and use of big data has strengthened the relationship between big data and Artificial Intelligence (AI). Two recent statistics illustrate this. First, poor data quality and handling cost the US economy more than $3 trillion annually. Second, perhaps partly in response, over 90% of businesses see the need to manage data more effectively and are investing in the marriage between big data and AI.
Together, these technologies are driving smart information systems (SIS). Because SIS have a profound social impact, particularly in shaping governmental policy, concerns over privacy, data protection, and the ethical development and use of AI have arisen.
SIS typically integrate big data and AI to collect data and interact with humans. Because the data collected is considered “actionable,” SIS affect many spheres of human society and governance.
Certain other technologies produce the colossal datasets that enable SIS to accomplish their goals. Social media and the Internet of Things (IoT) are two such enabling technologies with a profound daily social impact.
What Are the Ethical Issues With Smart Information Systems?
The ethical concerns raised by SIS stem from issues that have long surrounded their enabling technologies. Information and communication technologies are at the core of the development of data protection and privacy laws, so by extension, big data and AI raise many of the same concerns. Furthermore, the development and use of AI on its own have given rise to additional concerns about possible malicious use and about AI surpassing human intelligence and control.