i think that large language models like chatgpt are effectively a neat trick we've taught computers to do that just so happens to be *really* helpful as a replacement for search engines; instead of indexing sources that contain the knowledge you're interested in finding, it just indexes the knowledge itself. i think there are a lot of conversations around how we can make information more "accessible" (both in terms of accessing paywalled knowledge and in terms of that knowledge's presentation being intentionally obtuse and only easily parseable by other academics), but there's very little actual conversation about how llms could be implemented to easily address both kinds of accessibility.
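to make that distinction concrete, here's a minimal sketch in python (with toy documents and a toy query i made up for the example): a search engine keeps an inverted index that maps terms to *sources*, while an llm lossily bakes the text of those sources into its weights and emits the answer directly.

```python
# a toy illustration of "indexing sources" vs "indexing the knowledge itself."
# the documents and query below are invented for this example.

documents = {
    "doc1": "the mitochondria is the powerhouse of the cell",
    "doc2": "cells divide through a process called mitosis",
}

# a search engine builds an inverted index: term -> which sources mention it.
inverted_index: dict[str, set[str]] = {}
for doc_id, text in documents.items():
    for term in text.split():
        inverted_index.setdefault(term, set()).add(doc_id)

# querying returns *pointers to sources*; you still have to go read them.
print(inverted_index.get("mitochondria"))  # {'doc1'}

# an llm, by contrast, has already absorbed the documents at training time,
# so querying returns the knowledge itself, with no source attached:
#   prompt:  "what is the powerhouse of the cell?"
#   output:  "the mitochondria."
```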
because there isn't a profit incentive to do so.
llms (and before them, blockchains - but that's a separate convo) are just tools; but in the current economic landscape a tool isn't useful if it can't make money, so there's this inverse law of the instrument happening where the owning class's insistence that we only have nails in turn means we only build hammers. any new, hot technological framework has to either slash costs for businesses by replacing human labor (like automating who sees what ads when and where), or drive a massive consumer adoption craze (like buying crypto or an oculus or an iphone). with llms, it's an arms race to build tools that let businesses reduce headcount by training base models on hyperspecific knowledge. it also excuses the ethical transgression of training these models on stolen knowledge / stolen art, because when has ethics ever stood in the way of making money?
the other big piece is tech literacy; there's an incentive for founders and vcs to obscure (or just lie about) what a technology is actually capable of in order to inflate the value of the product. the metaverse could "supplant the physical world." crypto could "supplant our economic systems." now llms are going to "supplant human labor and intelligence." these are enticing stories for the owning class, because each gives them a New Thing that will enable them to own even more. but none of this tech can actually do that shit, which is why the booms around it bust in 6-18 months like clockwork. llms are a perfect implementation of [searle's chinese room](https://plato.stanford.edu/entries/chinese-room/), but sam altman et al *insist* that artificial general intelligence is possible, and the upper crust of silicon valley are doing moral panic at each other about how "ai" is either essential to or catastrophic for human flourishing, *when all it can do is echo back the information that humans have already amassed over the course of the last ~600 years.* but most people (including the people funding the technology and the ceo types attempting to adopt it en masse) don't know how it works under the hood, so it's easy to pilot the ship in whatever direction fulfills a profit incentive, because we can't meaningfully imagine how to use something we don't effectively understand.