The promise of local LLMs

Ollama makes it really easy to run an LLM on your personal computer. I have one of the original M1 MacBook Airs with only 8 GB of memory. It works, but only barely.
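
If you want to try it yourself, here’s a minimal sketch using the `ollama` Python package, assuming the Ollama server is running locally and you’ve already pulled a model. The model name `llama3.2:1b` is just an example of something small enough to fit in 8 GB of memory; swap in whatever you have.

```python
# A minimal sketch: chatting with a local model through the ollama
# Python package (pip install ollama). Assumes Ollama is running and
# a small model has been pulled first, e.g. `ollama pull llama3.2:1b`.
import ollama

response = ollama.chat(
    model="llama3.2:1b",  # example small model; pick one that fits your RAM
    messages=[
        {"role": "user", "content": "Proofread this: Their going to the store."},
    ],
)

# The reply text lives under the message's content field.
print(response["message"]["content"])
```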

Within a few years, I’m guessing Apple hardware will be optimized for running local LLMs. There’s already talk of it coming to the iPhone.

Despite not buying into the hype, I see a lot of uses for LLMs: a much better proofreader, help tracking down references, sanity checking an idea, and so on.

An LLM-powered Siri would be great, especially if it had access to your emails, messages, and browsing history, but with the security of never leaving your device.

The larger lesson is that despite all the conventional wisdom about needing to be first to market with the hype-driven tech product of the year, reality is rather different. Apple skipped the whole ChatGPT rush. Yet they’re well positioned to have the best local LLMs within a few years.

Patience and deliberateness are rarely praised in the tech scene. They should be. Being the best trumps being first.