Rebel Tech Newsletter No 77: A Digital Slap in the Face 😣


Tuesday, February 11th


IN DATA NEWS

Google Lifts a Ban on Using Its AI for Weapons and Surveillance

In 2018, Google published principles barring its AI technology from being used for sensitive purposes, such as weapons and surveillance. Weeks into President Donald Trump’s second term, those guidelines are being overhauled.

It’s an open-palm slap in the face to all of us, especially Google’s workforce.

Google said its AI applications wouldn’t be used for military purposes and that it would actively mitigate AI uses that could cause harm. And to be honest, the ban was also a means to avoid additional federal oversight and regulation.

Disappointing but not surprising. The “AI-is-great” push has been force-fed to our society for nearly two decades now. The sheer increase in the number of AI tools flooding the zone has been astounding and overwhelming. The psychological influence has incentivized industries to go all-in on AI adoption without really knowing what these AI services are, how they operate, or the scale of their impact on business and workforce.

Yet, here we are. The floodgates are wide open. The swell of unvetted and unregulated AI services will soon surpass the existing ones. Self-proclaimed AI companies and freelancers can build those services, but we don’t have to use them. We’ve had a few good public years of critical AI review detailing the full spectrum of its impact on different communities. As a consequence, we now expect and demand more responsible AI tools. We can’t unlearn AI’s negative impacts. We keep the critical AI lens alive through insightful questions and quality implementations. Here are the top six questions to have at the ready 😉

Transparency

  • Can users understand how the AI system makes decisions and what factors influence its outputs?
  • What documentation is available regarding the AI’s design and limitations?

Accountability

  • Who is responsible for the actions and consequences of the AI system?
  • What processes are in place to monitor and review AI decisions?

Governance

  • How is the AI being tested and validated to ensure safe and reliable operation?
  • What contingency plans are in place to address potential malfunctions?



Like what you're reading? Find it informative and insightful? You can sponsor the Rebel Tech Newsletter and follow us on LinkedIn.



HAPPENINGS & APPEARANCES

  • [📆 FREE WEBINAR 📆] McGraw-Hill is hosting a book promotion webinar for Mitigating Bias in Machine Learning on February 12th! Drs. Berry and Marshall co-edited this textbook, published in October 2024. Two of the chapter contributors will be joining the 60-minute conversation. We’ll discuss ways to incorporate this textbook into your syllabus, course topics and classroom discussions. Mitigating Bias in Machine Learning showcases transformative case studies that uncover hidden biases and chart actionable approaches to reducing their harmful impact. Learn more about the webinar HERE.
  • [✅ RESPONSIBLE DATA STRATEGIES ✅] Check out the 10 simple rules for building and maintaining a responsible data science workflow. No matter the stage in your data pipeline, there’s a rule you can practically implement to help mitigate risks. Learn more HERE.
  • [✨DATA COURSE ✨] Available on LinkedIn Learning, you can get a snackable overview or refresher on data modeling by taking Practical Database Design: Implementing Responsible Data Solutions with SQL Querying. Get started HERE.

LAUGHING IS GOOD FOR THE SOUL

Stay Rebel Techie,

Dr. Brandeis

Thanks for subscribing! If you like what you read or use it as a resource, please share the newsletter signup with three friends!

DataedX Group

Removing the digital debris swirling on the interwebs. A space you can trust to bring the data readiness, AI literacy and AI adoption realities you need to be an informed and confident leader. We discuss AI in education, responsible AI and data guidance, data/AI governance and more. Commentary is often provided by our CEO, Dr. Brandeis Marshall. Subscribe to Rebel Tech Newsletter!
