October 8th, 2024
California Gov. Gavin Newsom vetoes first-in-nation AI safety bill
On September 29, California Gov. Gavin Newsom vetoed a landmark bill aimed at establishing safety measures for large artificial intelligence models. Supporters of the bill said it would have set guardrails around job loss, misinformation, invasions of privacy, and automation bias related to large-scale AI models.
Admittedly, this bill isn’t perfect. The U.S. Congress, in both the House of Representatives and the Senate, has proposed federal AI legislation, much of it bipartisan, since the late 2010s. The White House’s Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights in October 2022. Yet no bill has become law. Meanwhile, over the same period, the E.U. has enacted the GDPR, the Digital Services Act, the Digital Markets Act and the Artificial Intelligence Act. Makes you go, hmmm 🤔
Now, let’s get back to discussing Cali’s AI happenings. This veto reinforces the #AIHype in three ways:
1️⃣ Assumes software developers can’t be bad actors. That’s not true. Recall Meta AI’s Galactica from Nov 2022? According to the MIT Technology Review, “Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.”
If SB 1047 had been enacted, then developers would more readily pause, which would likely help prevent another Galactica-level debacle.
2️⃣ Assumes (AI) companies can’t be bad actors. That’s not true either. Consider the stark engagement and operational differences between Dorsey’s Twitter 1.0 and Musk’s Twitter 0.5. About a year ago, Musk’s Twitter 0.5 removed a number of political dis/misinformation features. The floodgates of dis/misinformation have opened up.
If SB 1047 had been enacted, then companies would be less likely to remove digital safety and security guardrails.
3️⃣ Assumes that AI regulation equals anti-innovation. That’s another false association. Tech innovation has persisted through the U.S.’s Section 230, the California Consumer Privacy Act and the EU’s General Data Protection Regulation. So the notion that AI regulation stifles innovation is an empty threat.
If SB 1047 had been enacted, then the tech industry would be challenged to truly innovate by centering humanity. They should welcome that challenge. It’s a problem that needs fixing, and tech people like to problem-solve, no?
Right now, the tech industry is acting pretty cowardly — complaining that they can’t build tech advances unless they deprioritize the human condition.
It seems that those of us in the U.S. will need to look to Colorado’s AI Act as a model for protecting people from algorithmic harms. And for more people-first data/AI governance suggestions, read Chapters 9-11 of Data Conscience. 😉
Read the Entire Article Here!
"...our society engrosses itself in leveraging, operationalizing, powering, and monetizing AI under misguided notions of increasing profits, enhancing a person’s or a company’s cool factor, and making everyday life easier for the average person." pg 163
Get Your Copy of Data Conscience Here!
Stay Rebel Techie,
Dr. Brandeis
Thanks for subscribing! If you like what you read or use it as a resource, please share the newsletter signup with three friends!