Voyaging on Strange Seas of Thought, Not, However, Alone!: Where I Am Finding MAMLMs Useful This Winter
I do not know how to handle the intellectual crisis tsunami now coming down on me from the creation, development, and societal consequences of MAMLMs—Modern Advanced Machine-Learning Models—other than to find a flotation device, hang on for dear life, and start kicking as fast as I can. But I do know some things: As MAMLMs make text-extrusion five times faster, the real crisis moves to reading, filtering, and not going insane. “Prompt whispering” is theater; context engineering is the work. Stop treating language models like minds and start treating them like very fancy calculators wired to very large libraries. Your AI is not your friend, therapist, co‑author, or co-pilot; it is a token‑production machine. If you feed it a wise string of tokens, wise tokens will come out. If you feed it tokens from a stupid conversation, stupid tokens will come out…
IMHO, this is right:
Mike Taylor: Why I Turned Off ChatGPT’s Memory <https://every.to/also-true-for-humans/why-i-turned-off-chatgpt-s-memory>: ‘The argument for turning off memory is… I want unbiased results from ChatGPT, based on context that I carefully curated and put in the prompt, so I know how it made its decision. With memory, anything from your past chats could affect the results in ways that are hard to predict…. The memory feature… lead[s] to unexpected and difficult-to-diagnose problems….
Even a throwaway line in your context window can have a big impact on the results you get from AI. These models are trained to be extremely eager to please, and so you need to manage the context you provide them, lest they get distracted, confused, or obsessed with what’s in there, degrading your results…. Context poisoning… distraction… confusion… clash….
Forgetting is a superpower…. Resetting to a clean slate by starting a new chat session (with memory off) is what lets you understand how ChatGPT makes its decisions. You know exactly what context it’s using because it’s only what you pasted into this prompt, not something from weeks or months ago that might be outdated, irrelevant, or wrong. The context you provide is the only variable, which makes it a true experiment—something you could never do with a human employee who remembers (and resents) the last round of testing. Turn on memory, and you lose that control. Your context becomes a compost heap…
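Taylor’s clean-slate experiment can be sketched in a few lines. The point is mechanical: if every query rebuilds its message list from scratch, your pasted context is the only variable. Here `call_model` is a hypothetical stand-in for whatever chat-API client you use; the message shape is illustrative, not any particular vendor’s schema.

```python
# A minimal sketch of "memory off": every query rebuilds its context
# from scratch, so the curated context is the only variable.
# `call_model` is a hypothetical stand-in for a real chat-API client.

def build_messages(curated_context: str, question: str) -> list[dict]:
    """Compose a fresh, self-contained message list -- no stored history."""
    return [
        {"role": "system", "content": "Answer only from the context provided."},
        {"role": "user", "content": f"Context:\n{curated_context}\n\nQuestion: {question}"},
    ]

def ask(call_model, curated_context: str, question: str) -> str:
    # Each call starts from a clean slate: nothing from prior sessions
    # can leak in, so identical inputs yield byte-identical prompts.
    return call_model(build_messages(curated_context, question))
```

Run the same context through twice and you get the same prompt both times: a true controlled experiment, which is exactly what a memory feature forecloses.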
The frame that these things are minds with whom we are having conversations is seductive, deeply misleading, and ultimately very destructive. These are not real agents with intelligence, or with anything like beliefs, intentions, or continuity of self.
These are natural language interfaces to databases.
These are stochastic calculator-translators over training data.
When you are using them to access a structured, reliable database, YOU WANT THEM AS DUMB AS POSSIBLE: bare linguistic fluency, and nothing more. You want minimal creativity: bullet‑proof parsing, schema adherence, and predictable translations, not flights of rhetorical fancy.
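What “as dumb as possible” looks like in practice is a hard gate on the model’s output: treat it as a translator into a fixed schema and reject anything that deviates. A minimal sketch, with illustrative field names (nothing here is any real API):

```python
import json

# Sketch of schema adherence: the model's output must be exact JSON with
# exactly these fields and types, or it is rejected outright.
# The field names below are illustrative placeholders.
EXPECTED_FIELDS = {"ticker": str, "year": int, "revenue_usd": float}

def parse_strict(model_output: str) -> dict:
    """Accept only exact schema-conforming JSON; no creativity tolerated."""
    record = json.loads(model_output)  # raises ValueError on non-JSON prose
    if set(record) != set(EXPECTED_FIELDS):
        raise ValueError(f"unexpected fields: {sorted(record)}")
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(record[field], ftype):
            raise TypeError(f"{field} is not {ftype.__name__}")
    return record
```

The design choice is deliberate: any flight of rhetorical fancy—a preamble, a missing field, a number quoted as a string—fails loudly instead of silently corrupting your pipeline.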
When you are using them to access the unstructured database that is the internet, YOU HAVE ONE JOB!
That job is to create an input token chain that the MAMLM GPT LLM will judge as similar to the token chains somewhere in its training data that contain the reliable information you are looking for.
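That one job can be sketched as prompt construction: prefix your question with exemplar passages written in the register of the sources you want the model to pattern-match, so the resulting token chain lands near the reliable parts of the training distribution. The passages and wording below are illustrative placeholders, not a recommended template:

```python
# Sketch of the "one job": build an input token chain that resembles the
# reliable passages you want the model to pattern-match against.
# The exemplar passages and phrasing are illustrative placeholders.

def craft_prompt(exemplar_passages: list[str], question: str) -> str:
    """Prefix the question with passages in the register of reliable
    sources, steering generation toward similar token chains."""
    exemplars = "\n\n".join(f"Passage:\n{p}" for p in exemplar_passages)
    return (
        f"{exemplars}\n\n"
        "In the same careful, sourced register as the passages above, "
        f"answer: {question}"
    )
```

This is the whole trick: you are not conversing with a mind, you are steering a stochastic translator toward the neighborhood of its training data that you actually trust.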
