Mojo is a programming language that is as easy to use as Python but with the performance of C++ and Rust. Furthermore, Mojo provides the ability to leverage the entire Python library ecosystem.
Interesting new Python superset from Chris Lattner.
Today, we’re updating Bard with the ability to help people with programming and software development tasks, including code generation, code debugging, and explanation.
I noticed they use highlight.js to colorize syntax, but TypeScript is not colorized even though highlight.js supports it.
OpenAI has a Tokenizer web app for encoding text to tokens or counting them. Many people use it to count tokens for ChatGPT, but it only supports the older GPT-3 and Codex models. GPT-3.5 and GPT-4 use a different tokenizer, cl100k_base; its canonical implementation, tiktoken, is written in Rust and available for Python as an extension. However, OpenAI offers no web app version of it.
David Duong created a convenient web app called Tiktokenizer which you can use instead.
Run a fast ChatGPT-like model locally on your device.
This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca, a fine-tuning of the base model to follow instructions (akin to the RLHF used to train ChatGPT), and a set of modifications to llama.cpp that add a chat interface.
Impressive. It runs fine on my M1 MacBook Air with 8 GB RAM.
Simon Willison wrote a post on how to use it.
In this post, we’ll implement a GPT from scratch in just 60 lines of numpy. We’ll then load the trained GPT-2 model weights released by OpenAI into our implementation and generate some text.
The result is PicoGPT. Very cool. I’m a fan of simple educational implementations.
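To give a flavor of how compact such an implementation can be, here is a rough sketch (not PicoGPT's actual code) of the scaled dot-product attention at the core of a GPT block, in plain numpy:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: softmax(QK^T / sqrt(d)) V
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

# toy example: 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # one 8-dim output vector per token
```

A full GPT adds causal masking, multiple heads, MLP layers, and layer norm on top of this, which is how the post fits the whole model into so few lines.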
Andrej Karpathy’s lecture:
We build a Generatively Pretrained Transformer (GPT), following the paper “Attention is All You Need” and OpenAI’s GPT-2 / GPT-3. We talk about connections to ChatGPT, which has taken the world by storm. We watch GitHub Copilot, itself a GPT, help us write a GPT (meta :D!)
A collection of machine learning models in Core ML format.
Today, we are excited to release optimizations to Core ML for Stable Diffusion in macOS 13.1 and iOS 16.2, along with code to get started with deploying to Apple Silicon devices.
The repository contains Python and Swift packages. With the latter, you can add Stable Diffusion functionality to your iOS/Mac apps.
Users report that generating an image with 50 iterations, which previously took about 3 minutes, now takes 30 seconds on M1 Macs.