Discussion about this post

Ken:

Super nice piece. As someone a little more familiar with information theory, I tend to think of the LLM content generators as "decoders," where the input prompt is a series of oriented codewords representing encoded (compressed) bits of prose, which, after initial reconstruction, is run through a higher-level error-correction system to match a likely output.
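To make the analogy concrete, here is a toy minimum-distance decoder; the codebook, codewords, and phrases are all made up for illustration, not anything from an actual LLM:

```python
# Toy sketch of the "LLM as decoder" analogy. Everything here is
# hypothetical: a tiny codebook of codewords mapped to prose fragments,
# nearest-codeword decoding by Hamming distance.
CODEBOOK = {
    "0000": "the cat",
    "0011": "sat on",
    "0101": "the mat",
    "0110": "ate fish",
}

def hamming(a: str, b: str) -> int:
    """Count bit positions where two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(word: str) -> str:
    """Minimum-distance decoding: snap a possibly noisy word to the
    nearest codeword and return its phrase. A real system would break
    ties with a language-model prior -- the 'higher-level error
    correction' above; here ties simply go to codebook order."""
    best = min(CODEBOOK, key=lambda cw: hamming(cw, word))
    return CODEBOOK[best]

# A noisy "prompt": the middle word "0111" is one bit away from "0011".
noisy_prompt = ["0000", "0111", "0101"]
print(" ".join(decode(w) for w in noisy_prompt))  # "the cat sat on the mat"
```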

As for the capability of an LLM, there is a weakly parallel problem in computer science: there are more subroutines than there are names to call them by. On top of that, we want natural-language codewords, which limits the codebook, which in turn limits the level of compression and thus the overall capacity of an LLM.
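Some made-up back-of-envelope numbers for that counting argument (the vocabulary size and word length below are assumptions for illustration, not measurements of any real model):

```python
from math import log2

# An unrestricted codebook over, say, 5-character lowercase strings:
unrestricted = 26 ** 5                 # ~11.9 million possible codewords
# A "natural language" codebook limited to common English words:
english_vocab = 50_000                 # assumed working vocabulary

print(f"bits per unrestricted codeword:  {log2(unrestricted):.1f}")   # ~23.5
print(f"bits per natural-language word:  {log2(english_vocab):.1f}")  # ~15.6

# Fewer bits per codeword means less compression per token, so a prompt
# of fixed length addresses a smaller space of possible outputs: the
# "limited codebook limits capacity" point above.
```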

I also liked your larger point that these LLMs will most likely just add to the noise floor.

Thank you for taking the time to write this thoughtful take on LLM content generators.

Lydia Perovic:

Two AI specialists were on Sam Harris's podcast the other week, and as far as I can gather (lots went over my head), they both thought AI was extremely dumb and incapable; one of them thought it can harm us because it's trash, the other that we have nothing to fear because it's trash. https://www.samharris.org/podcasts/making-sense-episodes/312-the-trouble-with-ai This completely contradicts everything we're reading in the lay commentariat sphere, or even in specialist media commentary, which sees great changes afoot due to AI like ChatGPT.

I noticed the first job ad the other day with the requirement "know how to use ChatGPT and relies on it a lot": some assistantship position for Misha Gloubermann.

