The Rebel Tech Newsletter is our safe place to critique data and tech algorithms, processes, and systems. We highlight a recent data article in the news and share resources to help you dig deeper into understanding how our digital world operates. DataedX Group helps data educators, scholars and practitioners learn how to make responsible data connections. We help you source remedies and interventions based on the needs of your team or organization.
“Concerns about AI’s potential to facilitate the mass creation and distribution of mis- and disinformation ahead of the 2024 elections abound. Voters and candidates might find themselves battling an onslaught of AI-generated mis- and disinformation. While the Federal Elections Commission is considering regulating the use of AI-generated deepfakes in campaign ads, Congress has yet to take definitive action on regulating AI broadly. Other channels like phone, email, and good old-fashioned snail mail are also potential targets.”
Placing the power to generate new content, without guardrails, in everyone’s hands was a reckless move. It feels like we were all intentionally stranded in the ocean with no rescue mechanisms or prospects. A feeling of hopelessness swept over us – generative AI happened to us, and we had no control over if, how or when to engage with it. Because of this forced evolution, we’ve essentially embraced learning-by-doing theory (“a hands-on, task-oriented, process to education”) and mastered treading these AI waters. But honestly, we’re exhausted by ‘AI innovations’ and ‘AI advancements’. And a break or slowdown isn’t coming anytime soon. The fear surrounding AI, e.g., AI doom, has somewhat subsided, only to be replaced by growing irritation with AI.
The more time that passes, the more we discover new lows of malfeasance. The ease with which AI-generated content – text, audio, images and other media – can be created and distributed remains disturbing. We don’t discuss enough the juxtaposition that generating content is monetarily cheap yet environmentally costly. We’re regularly confronting our human ability to discern fact from fiction. And this article highlights yet another cause for pause: the 2024 elections across the globe represent just one instance where generative AI will likely elevate chaos and confusion. In fact, the World Economic Forum has named AI-generated misinformation one of the greatest risks we’ll see this year.
Our exhaustion with all forms of AI content is exacerbated by the tech industry’s insistence that AI will ‘save you time’ or ‘make you more productive’. But does it really save you *that much measurable* time? Do you feel like you’re more productive and can *reliably* off-load important tasks?
Pause and take your time to answer these questions. Nearly 8 out of 10 organizations have adopted or are thinking of adopting AI. Transparent workplace discussions about how you use AI systems, tools and platforms would be really helpful right now. Organizational leadership, managers and team members have been using AI for at least 6 months, so the AI honeymoon phase is over. Both the value and pain points of your workplace AI use need to be curated for your team, division/unit and company as a whole.
Since AI systems, tools and platforms ingest data from humans, I’d suggest we as humans start focusing on collectively extracting outcomes from AI. We should gather outcomes according to specific categories: accurate information, misinformation and disinformation. We need at least these categories because we learn more, and learn better, by dissecting what makes accurate information trustworthy, misinformation misleading and disinformation intentionally deceitful. We need to compare and contrast with a range of examples. Determining your team’s local markers for distinguishing among accurate information, misinformation and disinformation sets you on a path to avoid being misled by AI-generated misinformation.
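If your team wants a concrete starting point, here is a minimal sketch in Python of what that shared gathering could look like, assuming your team keeps a simple log of AI outputs it has reviewed. The three category labels mirror the buckets above; every name here (AIOutcome, log_outcome, the example markers) is illustrative, not a prescribed tool or workflow.

```python
from dataclasses import dataclass, field
from collections import Counter

# The three buckets discussed above.
CATEGORIES = {"accurate", "misinformation", "disinformation"}

@dataclass
class AIOutcome:
    """One AI-generated output your team reviewed (illustrative structure)."""
    tool: str            # e.g., the chatbot or image generator used
    summary: str         # what the output claimed or depicted
    category: str        # one of CATEGORIES, decided by your team's local markers
    markers: list = field(default_factory=list)  # the cues that drove the call

def log_outcome(log: list, outcome: AIOutcome) -> None:
    """Add a reviewed outcome to the team's shared log, validating the category."""
    if outcome.category not in CATEGORIES:
        raise ValueError(f"category must be one of {CATEGORIES}")
    log.append(outcome)

def category_counts(log: list) -> Counter:
    """Compare and contrast: how often each bucket shows up in your team's work."""
    return Counter(outcome.category for outcome in log)

# Example usage with made-up entries:
team_log: list = []
log_outcome(team_log, AIOutcome(
    tool="chatbot", summary="claimed a polling place changed locations",
    category="disinformation", markers=["no official source", "urgent call to act"]))
log_outcome(team_log, AIOutcome(
    tool="chatbot", summary="accurately summarized our published meeting notes",
    category="accurate", markers=["matches the original document"]))
print(category_counts(team_log))
```

The point isn’t the code itself; it’s that writing down the markers alongside each example is what builds your team’s shared ability to tell accurate information, misinformation and disinformation apart.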
Read the Entire Article Here!
"The law should focus on these fundamentals of tech and stop being distracted by the application of technologies." pg 228 Data Conscience
Get Your Copy of Data Conscience Here!
Stay Rebel Techie,
Dr. Brandeis
Thanks for subscribing! If you like what you read or use it as a resource, please share the newsletter signup with three friends!