i think that large language models like chatgpt are effectively a neat trick we’ve taught computers to do that just so happens to be *really* helpful as a replacement for search engines; instead of indexing sources that contain the knowledge you’re interested in finding, they index the knowledge itself. i think that there are a lot of conversations around how we can make information more “accessible” (both in terms of accessing paywalled knowledge and in terms of that knowledge’s presentation, which is often intentionally obtuse and only easily parseable by other academics), but there are very few actual conversations about how llms could be implemented to easily address both kinds of accessibility.
because there isn’t a profit incentive to do so.
llms (and before them, blockchains - but that’s a separate convo) are just tools; but in the current economic landscape a tool isn’t useful if it can’t make money, so there’s this inverse law of the instrument happening where the owning class’s insistence that we only have nails in turn means we only build hammers. any new, hot, technological framework has to either slash costs for businesses by replacing human labor (like automating who sees what ads when and where), or drive a massive consumer adoption craze (like buying crypto or an oculus or an iphone). with llms, it’s an arms race to train base models on hyperspecific knowledge and sell businesses tools that let them reduce headcount. that same incentive also excuses the ethical transgression of training these models on stolen knowledge / stolen art, because when has ethics ever stood in the way of making money?
the other big piece is tech literacy; there’s an incentive for founders and vcs to obscure (or just lie about) what a technology is actually capable of to inflate the value of the product. the metaverse could “supplant the physical world.” crypto could “supplant our economic systems.” now llms are going to “supplant human labor and intelligence.” these are enticing stories for the owning class, because each one promises a New Thing that will enable them to own even more. but none of this tech can actually do that shit, which is why the booms around them bust in 6-18 months like clockwork. llms are a perfect implementation of [searle’s chinese room](https://plato.stanford.edu/entries/chinese-room/), but sam altman et al. *insist* that artificial general intelligence is possible, and the upper crust of silicon valley are doing moral panic at each other about how “ai” is either essential to or catastrophic for human flourishing, *when all it can do is echo back the information that humans have already amassed over the course of the last ~600 years.* but most people (including the people funding the technology and the ceo types attempting to adopt it en masse) don’t know how it works under the hood, so it’s easy to pilot the ship in whatever direction fulfills a profit incentive, because we can’t meaningfully imagine how to use something we don’t effectively understand.