10 years ago you had to call computational statistics "machine learning" to trick VCs with no technical knowledge into funding you. now you have to call it "AI". both are complete misnomers. can computational statistics be used for evil? yes (mass surveillance, deepfakes, etc.) can it be used for good? yes (improved weather forecasting, earlier disease diagnostics, etc.) on the whole, do economic incentives all but guarantee it will be used for evil more than good? yes. does any of this have anything to do with cognition/intelligence/sentience/etc? not even a little
Feb 12, 2024


Related Recs

🚩
i think that large language models like chatgpt are effectively a neat trick we’ve taught computers to do that just so happens to be *really* helpful as a replacement for search engines; instead of indexing sources that contain the knowledge you’re interested in finding, it indexes the knowledge itself. there are a lot of conversations around how we can make information more “accessible” (both in terms of accessing paywalled knowledge and in terms of that knowledge’s presentation being intentionally obtuse and only easily parseable by other academics), but very few actual conversations about how llms could be implemented to address both kinds of accessibility. because there isn’t a profit incentive to do so.

llms (and before them, blockchains - but that’s a separate convo) are just tools; but in the current economic landscape a tool isn’t useful if it can’t make money, so there’s this inverse law of the instrument happening where the owning class’s insistence that we only have nails in turn means we only build hammers. any new, hot technological framework has to either slash costs for businesses by replacing human labor (like automating who sees what ads when and where), or drive a massive consumer adoption craze (like buying crypto or an oculus or an iphone.) with llms, it’s an arms race to build tools for businesses to reduce headcount by training base models on hyperspecific knowledge. it also excuses the ethical transgression of training these models on stolen knowledge / stolen art, because when has ethics ever stood in the way of making money?

the other big piece is tech literacy; there’s an incentive for founders and vcs to obscure (or just lie about) what a technology is actually capable of to increase the value of the product.
the metaverse could “supplant the physical world.” crypto could “supplant our economic systems.” now llms are going to “supplant human labor and intelligence.” these are enticing stories for the owning class, because each one gives them a New Thing that will enable them to own even more. but none of this tech can actually do that shit, which is why the booms around them bust in 6-18 months like clockwork.

llms are a perfect implementation of [searle’s chinese room](https://plato.stanford.edu/entries/chinese-room/), but sam altman et al *insist* that artificial general intelligence is possible, and the upper crust of silicon valley are doing moral panic at each other about how “ai” is either essential to or catastrophic for human flourishing, *when all it can do is echo back the information that humans have already amassed over the course of the last ~600 years.* but most people (including the people funding the technology and the ceo types attempting to adopt it en masse) don’t know how it works under the hood, so it’s easy to pilot the ship in whatever direction fulfills a profit incentive, because we can’t meaningfully imagine how to use something we don’t effectively understand.
Mar 24, 2024
🤖
everything AI knows it learns from the information it has access to. so if it has access to the stuff we put online...and like every depiction of AI in media has it becoming "sentient" and evil...maybe it will think it's supposed to be evil. like it's gathering inspo.
Feb 12, 2024
🤖
Apologies if this is strongly worded, but I'm pretty passionate about this. In addition to the functions public-facing AI tools have, we have to consider what the goal of AI is for corporations. This is an old cliché, but it's a useful one: follow the money. When we see some of the biggest tech companies in the world going all-in on this stuff, alarm bells should be going off. We're seeing complete buy-in from Google, Microsoft, Adobe, and even Meta, which suddenly pivoted to AI and seems to be quietly abandoning its beloved Metaverse.

For decades, the goal of all these companies has been infinite growth: taking a bigger share of the market and making a bigger profit. When these are the main motivators, the workforce that carries out the labor supporting an industry is what inevitably suffers. People are told to do more with less, and cuts are made where C-suite executives see fit, to the detriment of everyone down the hierarchy.

Where AI differs from other tangible products is that it is an efficiency beast in so many different ways. I have personally seen it affect my job as part of a larger cost-cutting measure. Microsoft's latest IT solutions are designed to automate as much as possible rather than have actual people carry out typically client-facing tasks. Copywriters and editors inevitably won't be hired if people can instead type a prompt into ChatGPT to spit out a product description. Already, there are so many publications and Substacks that use AI image generators to create attention-grabbing header and link images - before this, an artist could have been paid to create something that might afford them food for the week.

All this is to say that we will see a widening discrepancy between the ultra-wealthy and the working class, and the socio-economic structure we're in actively encourages consolidation of power. There are other moral implications I could go on about, but they're kind of subjective.
In relation to art, dedicating oneself to a craft often fosters a community of support for one's journey, and if we collectively lean on AI more instead of on other people, we risk isolating ourselves further in an environment that is already designed to do that. In my opinion, we shouldn't try to co-exist with something that is made to render our physical and emotional work obsolete.
Mar 24, 2024

Top Recs from @schmichael

🍎
teal compuder
Feb 3, 2024
🏜
running into a painting of a tunnel, grand piano falling from above, running past a cliff and hovering for several seconds before falling, and so on
Jan 28, 2024
💰
before buying something that absolutely won't help you make money
Jan 29, 2024