Run a fast ChatGPT-like model locally on your device.
This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca (a fine-tuning of the base model to obey instructions, akin to the RLHF used to train ChatGPT) and a set of modifications to llama.cpp to add a chat interface.
# Port of Facebook’s LLaMA model in C/C++
Impressive. It runs fine on my M1 MacBook Air with 8 GB of RAM.
Simon Willison wrote a post on how to use it.
# Developing the Bloomberg Terminal
A good talk by Paul Williams describing the internals of the Bloomberg Terminal software: