Emerging Regulations to Govern the Landscape of Artificial Intelligence
Keywords:
AI bill of rights, Algorithmic bias, Artificial Intelligence (AI), Automated decision-making, California Consumer Privacy Act (CCPA), Ethics, EU Artificial Intelligence Act (EU AI Act), Federal Trade Commission (FTC), Food and Drug Administration (FDA), General Data Protection Regulation (GDPR), International Organization for Standardization (ISO), Liability frameworks, National Institute of Standards and Technology (NIST), Regulation, Standards, Transparency
Abstract
This research delves into the rapidly changing regulatory environment surrounding Artificial Intelligence (AI), focusing on the ethical challenges posed by its widespread use and the urgent need for comprehensive governance. The study analyses major regulatory frameworks, including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), which address automated decision-making and profiling but lack specificity regarding AI technologies. It then examines the European Union's AI Act, which offers a more detailed and stringent approach to AI governance, and evaluates initiatives by U.S. agencies such as the FDA, FTC, and NIST, each of which contributes a distinct perspective on AI regulation. Furthermore, the research explores emerging standards such as ISO/IEC 42001, which sets out requirements for managing AI systems. By identifying critical gaps related to algorithmic bias, transparency, and accountability, the study highlights the inadequacies of current regulations in fully addressing the complexities of AI technologies. It emphasizes recent regulatory efforts to mitigate AI's societal impact and calls for collaborative action among policymakers, industry leaders, and civil society. Such collaboration is crucial for developing robust regulatory frameworks that protect individual rights while fostering innovation and ethical AI practices.