🤖
I believe artificial intelligence is one of the most important (if not the most important) emerging human rights issues of the future, which is why I dedicate a substantial amount of time and energy to learning about it! I always say know thy enemy, so if you’re interested in gaining knowledge about AI design to bolster your arguments against it, I’d recommend reading this draft manuscript excerpt from the Oxford Handbook of Ethics of Artificial Intelligence. Shannon Mattern is a media studies and art history professor at the University of Pennsylvania, and in this text she offers an in-depth analysis of the ethical implications and concerns of design automation. The sources cited are also great jumping-off points for further reading and consideration!
Oct 5, 2024


Related Recs

🪞
Shannon Vallor, a virtue ethicist and philosopher, has been studying the ethics of emerging technologies for nearly 20 years. In this book she challenges the simplistic tech optimist and doomer viewpoints of the future of AI technology. She believes that these polarized media narratives act as a distraction from other pressing issues, from the powers that already control us, and from the genuine existential risks of AI. She posits that creating the illusion of AI as an all-powerful godlike force and de-emphasizing the role of human input in its proliferation/development benefits corporate interest, leaving individuals feeling disempowered and as though they are without a choice. Vallor uses the metaphor of the physical properties of mirrors to paint a picture of artificial intelligence as a reflection of human intelligence. She demystifies AI technology, explaining its realistic capabilities and its limitations, and offers a radical path of grassroots resistance that puts us back in the driver’s seat to reclaim our humanity and shape our future. I linked a one-hour podcast episode where she talks about the ideas she explores in the book. I highly recommend listening at the very least if you’re interested in hearing her perspective!
Oct 5, 2024
🤖
Apologies if this is strongly worded, but I'm pretty passionate about this. In addition to the functions public-facing AI tools have, we have to consider what the goal of AI is for corporations. It's an old cliché, but a useful one: follow the money. When some of the biggest tech companies in the world go all-in on this stuff, alarm bells should be going off. We're seeing complete buy-in from Google, Microsoft, and Adobe, and even Meta has suddenly pivoted to AI and seems to be quietly abandoning its beloved Metaverse.

For decades, the goal of all these companies has been infinite growth: taking a bigger share of the market and making a bigger profit. When these are the main motivators, the workforce that carries out the labor supporting an industry inevitably suffers. People are told to do more with less, and cuts are made wherever C-suite executives see fit, to the detriment of everyone down the hierarchy.

What makes AI unique among products is that it is an efficiency beast in so many different ways. I have personally seen it affect my job as part of a larger cost-cutting measure. Microsoft's latest IT solutions are designed to automate as much as possible rather than have actual people carry out typically client-facing tasks. Copywriters and editors inevitably won't be hired if people can instead type a prompt into ChatGPT to spit out a product description. Already, so many publications and Substacks use AI image generators to create attention-grabbing header and link images; before this, an artist could have been paid to create something that might afford them food for the week.

All this is to say that we will see a widening gap between the ultra-wealthy and the working class, and the socio-economic structure we're in actively encourages consolidation of power. There are other moral implications I could go on about, but they're kind of subjective.
In relation to art, dedicating oneself to a craft often lends itself to fostering a community for support in one's journey, and if we collectively lean on AI more instead of other people, we risk isolating ourselves further in an environment that is already designed to do that. In my opinion, we shouldn't try to co-exist with something that is made to make our physical and emotional work obsolete.
Mar 24, 2024
🤖
Boo, I hate the outsourcing of labour! I was reading an essay/talk from Stephen Fry and wanted to share this: “We have long been used to thinking of technology as being ethically neutral, lacking moral valency. The same press can print Shakespeare’s sonnets one day and Hitler’s Mein Kampf the next. The devices are not capable of making decisions, either aesthetic, ethical or political. The NRA likes to say the same thing about guns. AI however is different. Intelligence is all about decision making. That’s what separates it from automated, mechanically determined outcomes. That’s what separates a river from a canal. A canal must go where we tell it. A river is led by nothing but gravity and if that means flooding a town, tough on the town. AI’s gravity is its goals. Unsupervised machine learning allows for unsupervised machines — and for the independent agents that flow from them.”

Top Recs from @taterhole

🧸
My dad teases me about how when I was a little kid, my favorite thing to do when I was on the landline phone with somebody—be it a relative or one of my best friends—was to breathlessly describe the things that were in my bedroom so that they could have a mental picture of everything I loved and chose to surround myself with, and where I sat at that moment in time. Perfectly Imperfect reminds me of that so thanks for always listening and for sharing with me too 💌
Feb 23, 2025
🖐
I’ve been thinking about how much of social media is centered around curating our self-image. When selfies first became popular, they were dismissed as vain and vapid—a critique often rooted in misogyny—but now, the way we craft our online selves feels more like creating monuments. We try to signal our individuality, hoping to be seen and understood, but ironically, I think this widens the gap between how others perceive us and who we really are. Instead of fostering connection, it can invite projection and misinterpretation—preconceived notions, prefab labels, and stereotypes. Worse, individuality has become branded and commodified, reducing our identities to products for others to consume.

On most platforms, validation often comes from how well you can curate and present your image—selfies, aesthetic branding, and lifestyle content tend to dominate. High engagement is tied to visibility, not necessarily depth or substance. But I think spaces like PI.FYI show that there’s another way: where connection is built on shared ideas, tastes, and interests rather than surface-level content. It’s refreshing to be part of a community that values thoughts over optics.

By sharing so few images of myself, I’ve found that it gives others room to focus on my ideas and voice. When I do share an image, it feels intentional—something that contributes to the story I want to tell rather than defining it. Sharing less allows me to express who I am beyond appearance.

For women, especially, sharing less can be a radical act in a world where the default is to objectify ourselves. It resists the pressure to center appearance, focusing instead on what truly matters: our thoughts, voices, and authenticity. I’ve posted a handful of pictures of myself in 2,000+ posts because I care more about showing who I am than how I look. In trying to be seen, are we making it harder for others to truly know us? It’s a question worth considering.
Dec 27, 2024