Generative artificial intelligence (GenAI) is too complex to regulate in its entirety; doing so would be akin to regulating the internet. Although the internet is not "generative" in its functionality, much of the information that flows through it either cannot be validated or is partially incorrect. Efforts to regulate AI have been underway for years in several parts of the world as lawmakers debate how to protect data privacy, ensure safety, and prevent bias. The effectiveness of these regulations remains in question because technology moves faster than lawmakers can anticipate or respond to. The release of ChatGPT added GenAI to the mix, intensifying the pressure to regulate even further.
The role of governments is critical in ensuring that technology evolves and grows organically yet safely. In Europe, the EU Artificial Intelligence Act is entering its final phase of adoption. Its intent is to ensure that AI systems are "safe, transparent, traceable, non-discriminatory and environmentally friendly." Rules have been drafted for providers and users under three risk levels: unacceptable, high, and limited. For GenAI specifically, the act requires providers to disclose when content was produced by a GenAI algorithm, to design models that prevent the production of illegal content, and to publish summaries of the copyrighted data used for training.
Many tech providers in the affected space believe the regulations are prohibitive and difficult to comply with, potentially impeding development efforts. China's national regulations, by contrast, are more detailed and do not take a risk-based approach. China softened an earlier draft published in April to better support technology builders; the revised rules are more stringent for public-facing products, with significant leeway built in for enterprise-facing products. The guidelines cover content controls and censorship, prevention of discrimination, protection of intellectual property (IP), curbing of misinformation, and privacy and data protection.
Within the United States, the National AI Initiative Act of 2020 called for the establishment of the National Artificial Intelligence Advisory Committee (NAIAC), which keeps the President and the National AI Initiative Office updated on associated topics. The recent Blueprint for an AI Bill of Rights focuses on the safety and efficacy of AI; protection against algorithmic discrimination; data privacy; notice and explanation when AI is being used, along with its impact; and the ability to opt out or request a human alternative.
While the development of GenAI is important, it is only beneficial if its positive impacts outweigh its negative implications. With any new government policy comes added security, but also the opportunity for regulatory capture, allowing entities with enough money and influence to shape outcomes, whether through legal avenues or not. It is critical to ensure that the interests of technology builders are met without adversely affecting competitors and consumers.
This balance is difficult to achieve without considering the nuances of GenAI. Four key areas impacted by regulatory attempts could suffer more than they benefit. Consider the following facts:
Currently, self-regulation is the only mechanism in place for many organizations, and it can continue to be the main path forward. Entities are already inclined to abide by guidelines that protect and appease consumers as part of doing good business. Governments could offer incentives or funding to organizations that do so. Within the GenAI development process, self-regulatory measures by developers can go a long way toward ensuring that the entire onus of comprehension doesn't lie with governmental regulators, which would inhibit growth.
The true effectiveness of regulating GenAI remains to be seen as the technology continues to expand and to be used in innovative ways. While some generalized parameters can provide benefits, the downside of putting too many regulations in place too quickly must also be considered. In 2022, the global GenAI market was estimated at $10.14 billion. It is projected to reach $13 billion in 2023 and $109.37 billion by 2030. Given such rapid growth over so short a period, regulators cannot afford to delay in addressing these concerns.
Shivani Shukla specializes in operations research, statistics, and AI with several years of experience in academic and industry research. She currently serves as the director of undergraduate programs in business analytics as well as an associate professor in business analytics and IS. For more information, contact sgshukla@usfca.edu.
Disclaimer: The author is completely responsible for the content of this article. The opinions expressed are their own and do not represent the position of IEEE, the Computer Society, or its leadership.