i focus a lot on ethics and technology in my studies, so AI has been a huge discussion topic in my life lately lol. i took a course where the professor's entire thesis was that AI could bring about the next Industrial Revolution and that we must act now. I think AI is super fascinating and could potentially be the next frontier and the invention that moves humanity forward (the last big invention was the iPhone over a decade ago and we haven't really had any disruptive technologies since then). a lot of public-facing applications of AI are basically still in their infancy and are extremely error prone. at the same time, AI does seem to be a bit of a buzzword that gets thrown around by people who don't understand it super well (example: people saying 'an AI' when talking about ChatGPT lol) and it can be frustrating to discuss (as someone who studies tech, whenever I mention my major ppl ask about AI lmao). I agree with you that we should not be afraid of AI or its introduction into society and instead focus on its application. AI is human made and cannot function without a human (we should only be worried if AI starts to make its own decisions or develop its own language). whenever I think of introducing AI into society I think of that quote that's like 'computers should never make management decisions because computers cannot be held accountable.' I like philosophical ramblings about whether AI can ever be sentient, whether humans are playing God, etc, and witnessing the development of AI right before my eyes has made me interested in more scifi novels and movies lol. I really like Yale professor Luciano Floridi. He deals with the ethics of information and talks a lot about AI and its applications. He has a ton of papers and talks and goes on a lot of podcasts.
Mar 24, 2024

Related Recs

🪞
Shannon Vallor, a virtue ethicist and philosopher, has been studying the ethics of emerging technologies for nearly 20 years. In this book she challenges the simplistic tech optimist and doomer viewpoints of the future of AI technology. She believes that these polarized media narratives act as a distraction from other pressing issues, from the powers that already control us, and from the genuine existential risks of AI. She posits that creating the illusion of AI as an all-powerful godlike force and de-emphasizing the role of human input in its proliferation/development benefits corporate interest, leaving individuals feeling disempowered and as though they are without a choice. Vallor uses the metaphor of the physical properties of mirrors to paint a picture of artificial intelligence as a reflection of human intelligence. She demystifies AI technology, explaining its realistic capabilities and its limitations, and offers a radical path of grassroots resistance that puts us back in the driver's seat to reclaim our humanity and shape our future. I linked a one-hour podcast episode where she talks about the ideas she explores in the book. I highly recommend listening at the very least if you're interested in hearing her perspective!
Oct 5, 2024
🤖
Apologies if this is strongly worded, but I'm pretty passionate about this. In addition to the functions public-facing AI tools have, we have to consider what the goal of AI is for corporations. This is an old cliché, but it's a useful one: follow the money. When we see some of the biggest tech companies in the world going all-in on this stuff, alarm bells should be going off. We're seeing complete buy-in from Google, Microsoft, and Adobe, and even Meta has suddenly pivoted to AI and seems to be quietly abandoning its beloved Metaverse. For decades, the goal of all these companies has always been infinite growth, taking a bigger share of the market, and making a bigger profit. When these are the main motivators, the workforce that carries out the labor supporting an industry is what inevitably suffers. People are told to do more with less, and cuts are made where C-suite executives see fit, to the detriment of everyone down the hierarchy. Where AI differs from other tangible products is that it is an efficiency beast in so many different ways. I have personally seen it affect my job as part of a larger cost-cutting measure. Microsoft's latest IT solutions are designed to automate as much as possible instead of having actual people carry out typically client-facing tasks. Copywriters and editors inevitably won't be hired if people can instead type a prompt into ChatGPT to spit out a product description. Already, there are so many publications and Substacks that use AI image generators to create attention-grabbing header and link images; before this, an artist could have been paid to create something that might afford them food for the week. All this is to say that we will see a widening discrepancy between the ultra-wealthy and the working class, and the socio-economic structure we're in actively encourages consolidation of power. There are other moral implications I could go on about, but they're kind of subjective. In relation to art, dedicating oneself to a craft often lends itself to fostering a community for support in one's journey, and if we collectively lean on AI more instead of on other people, we risk isolating ourselves further in an environment that is already designed to do that. In my opinion, we shouldn't try to co-exist with something that is made to make our physical and emotional work obsolete.
Mar 24, 2024
🤖
Boo I hate the outsourcing of labour! Was reading an essay/talk from Stephen Fry and wanted to share: “We have long been used to thinking of technology as being ethically neutral, lacking moral valency. The same press can print Shakespeare’s sonnets one day and Hitler’s Mein Kampf the next. The devices are not capable of making decisions, either aesthetic, ethical or political. The NRA likes to say the same thing about guns. AI however is different. Intelligence is all about decision making. That’s what separates it from automated, mechanically determined outcomes. That’s what separates a river from a canal. A canal must go where we tell it. A river is led by nothing but gravity and if that means flooding a town, tough on the town. AI’s gravity is its goals. Unsupervised machine learning allows for unsupervised machines — and for the independent agents that flow from them.”

Top Recs from @veggiedumpling

😃
well-written article analyzing cronenberg's films and contrasting them with today's sexual politics and lack of eroticism. passage i enjoyed: In fact, we are not impermeable packages of preformed desires, importing our likes and dislikes around with us from one encounter to the next like papers in a briefcase. An erotic craving is inextricable from the ferment that foams up when oneself is sluiced into another. Not only is it impossible for us to know whether an encounter will be deflating or transformative but we cannot know what sort of metamorphosis will ensue if the sex is as jarring as we can only hope it will be.
Feb 19, 2024
🦪
and marvel at the progress you've made. im reading my old journal entries and i was actually a mess, but i've grown so much in a year :) always love and appreciate your old self because she's still within you
Apr 2, 2024