But really, technology is rarely the issue in and of itself; the issue is the system, motivations, logic, and power dynamics it operates within. So AI won't save us, any more than the steam engine, electricity, computers, etc. saved us. Not because it can't, but because it won't be allowed to. Sure, as a whole, we might benefit from these technologies, but ultimately one group of people is gonna benefit the most, while another group is going to become all the more exploited (a dynamic that is ultimately unsustainable). Any increase in productivity (and therefore value) that technology brings about (especially since the '70s) isn't distributed to labor, but rather used as an excuse to drive down the value of labor and increase the surplus value, or profits, of capital. Therefore, AI, which could reduce the amount of labor humans have to do (a good thing), is instead (due to the logic of capitalism) used as a way to eliminate jobs, drive down the cost of products, and discipline labor by casting people into precarity (a bad thing). So I guess AI may destroy us, but it's not AI's fault. Capitalism's inherent logic is to blame. But also, I think the abilities of AI are being blown way out of proportion and simply used as the latest bit of speculative fodder to fuel market growth.
Feb 12, 2024


Related Recs

šŸ¤–
Apologies if this is strongly worded, but I'm pretty passionate about this. In addition to the functions public-facing AI tools have, we have to consider what the goal of AI is for corporations. This is an old cliché, but it's a useful one: follow the money. When we see some of the biggest tech companies in the world going all-in on this stuff, alarm bells should be going off. We're seeing a complete buy-in from Google, Microsoft, and Adobe, and even Meta has suddenly pivoted to AI and seems to be quietly abandoning its beloved Metaverse. For decades, the goal of all these companies has been infinite growth: taking a bigger share of the market and making a bigger profit. When these are the main motivators, the workforce that carries out the labor supporting an industry is what inevitably suffers. People are told to do more with less, and cuts are made where C-suite executives see fit, to the detriment of everyone down the hierarchy. Where AI is unique among other tangible products is that it is an efficiency beast in so many different ways. I have personally seen it affect my job as part of a larger cost-cutting measure. Microsoft's latest IT solutions are designed to automate as much as possible rather than having actual people carry out typically client-facing tasks. Copywriters/editors inevitably won't be hired if people can instead type a prompt into ChatGPT to spit out a product description. Already, there are so many publications and Substacks that use AI image generators to create attention-grabbing header and link images; before this, an artist could have been paid to create something that might afford them food for the week. All this is to say that we will see a widening discrepancy between the ultra-wealthy and the working class, and the socio-economic structure we're in actively encourages consolidation of power. There are other moral implications that I could go on about, but they're kind of subjective.
In relation to art, dedicating oneself to a craft often lends itself to fostering a community for support in one's journey, and if we collectively lean on AI more instead of other people, we risk isolating ourselves further in an environment that is already designed to do that. In my opinion, we shouldn't try to co-exist with something that is made to make our physical and emotional work obsolete.
Mar 24, 2024
šŸš©
i think that large language models like chatgpt are effectively a neat trick we've taught computers to do that just so happens to be *really* helpful as a replacement for search engines; instead of indexing sources with the knowledge you're interested in finding, it just indexes the knowledge itself. i think that there are a lot of conversations around how we can make information more "accessible" (both in terms of accessing paywalled knowledge and that knowledge's presentation being intentionally obtuse and only easily parseable by other academics), but there are very few actual conversations about how llms could be implemented to easily address both kinds of accessibility. because there isn't a profit incentive to do so. llms (and before them, blockchains - but that's a separate convo) are just tools; but in the current economic landscape a tool isn't useful if it can't make money, so there's this inverse law of the instrument happening where the owning class's insistence that we only have nails in turn means we only build hammers. any new, hot technological framework has to either slash costs for businesses by replacing human labor (like automating who sees what ads when and where), or drive a massive consumer adoption craze (like buying crypto or an oculus or an iphone.) with llms, it's an arms race to build tools for businesses to reduce headcount by training base models on hyperspecific knowledge. it also excuses the ethical transgression of training these models on stolen knowledge / stolen art, because when has ethics ever stood in the way of making money? the other big piece is tech literacy; there's an incentive for founders and vcs to obscure (or just lie about) what a technology is actually capable of to increase the value of the product.
the metaverse could "supplant the physical world." crypto could "supplant our economic systems." now llms are going to "supplant human labor and intelligence." these are enticing stories for the owning class, because they give them a New Thing that will enable them to own even more. but none of this tech can actually do that shit, which is why the booms around them bust in 6-18 months like clockwork. llms are a perfect implementation of [searle's chinese room](https://plato.stanford.edu/entries/chinese-room/), but sam altman et al *insist* that artificial general intelligence is possible, and the upper crust of silicon valley are doing moral panic at each other about how "ai" is either essential to or catastrophic for human flourishing, *when all it can do is echo back the information that humans have already amassed over the course of the last ~600 years.* but most people (including the people funding the technology and the ceo types attempting to adopt it en masse) don't know how it works under the hood, so it's easy to pilot the ship in whatever direction fulfills a profit incentive, because we can't meaningfully imagine how to use something we don't effectively understand.
Mar 24, 2024
šŸ˜ƒ
AI is garbage for so many reasons and none of us should be using it. that includes making art as well as seemingly mundane corporate tasks. it's atrocious for the planet, it's horrific for the uncredited workers who labor to power it, it bulldozes all notions of "privacy" and further fast-tracks the commodification of humanity, and it's fueling the dumpster fire that is an entire generation of brains raised in the cesspool of the internet.
Jan 13, 2025

Top Recs from @ruffianbandwidth

šŸ”Ž
I don't know how well this actually answers your initial question; I think it's more of a counterpoint to some of the stuff people have already said, but here goes. In the past (prior to social media or search engines), specific styles, specialized knowledge, and niche awareness actually took effort. You had to go out into the world and find a scene, be accepted, participate in it, contribute to it, and learn from others with specific knowledge within that sub- or counter-cultural scene. It took time, effort, and experience to craft an identity. Nowadays people cycle through various identities and trends like commodities because it takes no effort (they're sold to them by social media algorithms, influencers, brand accounts, etc.). It comes to you on your phone without you ever even having to leave the house or put in the time to discover it or participate in it (you just follow specific people or subscribe). You can be a passive observer or consumer, not an active contributor. As a result, you're not invested in, tied down to, or committed to that core identity. You can cosplay depending on your mood or who you want to momentarily convey yourself as, because it's easy. Essentially, being a poser has become normalized. An identity is now something to be momentarily consumed and affected, rather than grown, built, and developed over time. Granted, it's always been different with regard to "mass" culture and popular trends (both in the past and now). Those are impossible to miss and were always monopolized by specific trend-setting institutions, but by the time anything gets to that point, the actual initial counter- or sub-culture that inspired it has already been co-opted and has started to disintegrate under the weight and attention of mass consumption.
Feb 18, 2024
šŸŖ
Oatmeal raisin cookies don't get enough love. As a kid, my palate couldn't appreciate their subtle flavor, but thankfully oatmeal raisin cookies were rehabilitated for me later in life. I now see the error of my ways, and am trying to evangelize for them, and rehabilitate them for others, by making this recipe. They're great 'cause they're not too sweet, so they feel appropriate for both dessert and breakfast. They're also like a blank canvas of oaty brown-sugar goodness that you can then imbue with whatever add-ins you want (thus turning one recipe into a plethora of variations). My personal favorites are semi-sweet chocolate chips, dried cranberries, and roasted cashews.
Feb 26, 2024