Rethinking the AI Game


Even with all the intelligence at their disposal, human beings have failed rather sensationally at avoiding mistakes. History has proven this many times over, and each failure has forced us to look for a defensive cover. To the world’s credit, we found a fitting answer once we brought dedicated regulatory bodies into the fold. Having a well-defined authority in every area was a game-changer: it compensated for many of our flaws and, consequently, ushered us toward a reality better than we could have imagined in our wildest dreams. That utopia, however, turned out to be rather short-lived, and the blame lies largely with technology. The moment technology’s layered nature took over the landscape, it gave people an unprecedented chance to pursue ulterior motives at the expense of others’ well-being. And there’s more: this trend soon materialized on a big enough scale to overwhelm our governing forces and send them back to square one. After a long spell in the wilderness, though, the regulatory contingent finally seems ready to make a comeback. That has grown increasingly evident over the recent past, and a newly proposed framework should only strengthen its case moving forward.

The Office of Science and Technology Policy (OSTP) has formally unveiled the highly anticipated Blueprint for an AI Bill of Rights (BoR). According to the official statement, the document is conceived to “help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public.” As for how it will reach that goal, the framework is built around five core principles: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback. Starting with Safe and Effective Systems, the blueprint holds that automated systems must be developed in consultation with diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. It further calls for such systems to undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective, mitigate unsafe outcomes (including those beyond the intended use), and adhere to domain-specific standards. Next, in a bid to curb algorithmic discrimination, the framework encourages proactive equity assessments as part of system design, along with the use of representative data and protection against proxies for demographic features, thereby ensuring accessibility for people across different groups.

Beyond that, the BoR calls on designers, developers, and deployers to seek explicit permission from users with respect to collecting, using, accessing, transferring, and deleting their data. The remaining principles speak to clearer communication on the developers’ end, as well as making it easier for people to opt out of automated systems in favor of human alternatives.

“Automated technologies are driving remarkable innovations and shaping important decisions that impact people’s rights, opportunities, and access. The Blueprint for an AI Bill of Rights is for everyone who interacts daily with these powerful technologies — and every person whose life has been altered by unaccountable algorithms,” said Deputy Director for OSTP’s Science and Society, Dr. Alondra Nelson. “The practices laid out in the Blueprint for an AI Bill of Rights aren’t just aspirational; they are achievable and urgently necessary to build technologies and a society that works for all of us.”
