Count More

If you’re a student in the United States, you can register to vote either in your home state or your school state.

If one of those happens to be a battleground state in the 2024 presidential election, your vote effectively “counts more” there.

That’s why we built Count More, a tiny website to help students choose:

Screen shot of countmore.us

It’s maybe the (technically) simplest project we’ve worked on at Front Seat but, with the right campaign behind it, we hope it will meaningfully impact the 2024 presidential election.

Angie Wang in the New Yorker: 'Is My Toddler a Stochastic Parrot?'

Angie Wang’s sketch in the New Yorker, “Is My Toddler a Stochastic Parrot?”, is a lovely meditation on language, AI, and life:

The world is racing to develop ever more sophisticated large language models while a small language model unfurls itself in my home.

Seriously, just find a large screen and read it. It’s such a delight.

OpenAI: Consumer & Cloud

Sam Altman, partway into his understated keynote at the first ever OpenAI DevDay:

Even though this is a developer conference, we can’t resist making some improvements to ChatGPT.

Before this moment, I only fuzzily understood that OpenAI operates in two separate but related markets. They have:

  1. A consumer product called ChatGPT. This includes mobile apps, websites, secondary models like DALL·E 3, and integrated tools like the web browser and code interpreter. ChatGPT, used by 100M weekly actives, costs $20/month for its Plus tier. Despite Monday being “DevDay”, OpenAI launched major new consumer-facing features. Most notably, it now allows anyone to build custom “GPTs”.

  2. A growing suite of cloud services. Before OpenAI DevDay, I might have referred to this as the “OpenAI API” — a small piece of code that interfaces with OpenAI’s foundation models. Now, this seems too simplistic as the API quickly evolves into multiple value-add (and margin-add!) services like the Assistant API. This is similar to how AWS adds value (and margin) on top of its few core services. OpenAI’s cloud, with 2M registered developers, undoubtedly generates significant revenue.
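
That “small piece of code” really can be small. As a sketch of the simplest possible integration (the endpoint and model name are as of DevDay; in practice you’d reach for the official `openai` client library rather than raw HTTP):

```python
import json
import urllib.request

# Build (but don't send) a request to the chat completions endpoint.
# This is the whole surface area of "the OpenAI API" in its simplest form:
# one authenticated POST with a model name and a list of messages.
def chat_request(api_key: str, prompt: str) -> urllib.request.Request:
    body = {
        "model": "gpt-4-1106-preview",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = chat_request("sk-...", "Say hello")
print(req.full_url)
```

Everything announced at DevDay — the Assistants API, retrieval, tool use — layers value (and margin) on top of this one humble call.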

Finding oneself in two very different product and market segments — suddenly and at scale — is no small feat! Yes, these markets utilize the same underlying technologies. However, their pricing models, sales strategies, and value propositions are quite different and are likely to diverge over time. This isn’t exactly unheard of in the tech industry, but it’s a significant undertaking for a company that’s been around for less than a decade.

Sam Altman, with charming understatement:

About a year ago […] we shipped ChatGPT as a “low-key research preview”… and that went pretty well.

Very few consumer services reach ChatGPT’s scale. In under a year, OpenAI accidentally defined a new consumer product category and became its heavyweight. I don’t think we need to look any further to find the future of consumer chatbots. Especially with the introduction of custom GPTs, it’s hard to see much room for creating differentiated consumer-facing chat services. Instead, developers will need to bring their unique data assets and external compute capabilities to the ChatGPT interface. The Canva demo is a compelling example: Canva is a decacorn but the expectation is that users will still start in ChatGPT.

As for the growth of a new kind of cloud, focused not on virtual compute and object storage but instead on foundation models, we should expect to see many more value-add services in the future. The logic behind this year’s launches seems straightforward: OpenAI simply observed the emerging architectural patterns behind LLM applications and implemented versions that are better and easier to use, readily available from the same vendor that provides the LLM itself. Like AWS, OpenAI is the heavyweight in this emerging cloud segment. While there will always be room for smaller players along competitive axes like model customization and privacy, and I hope we see plenty of innovation there, the default place to start building services that need LLMs will probably be OpenAI for the foreseeable future.

Mike Hoye, writing on Mastodon:

People go to Stack Overflow because the docs and error messages are garbage. TLDR exists because the docs and error messages are garbage. People ask ChatGPT for help because the docs and error messages are garbage. We are going to lose a generation of competence and turn programming into call-and-response glyph-engine supplicancy because we let a personality cult that formed around the PDP-11 in the 1970s convince us that it was pure and good that docs and error messages are garbage.

Mike takes a look at what can go wrong when writing a one-line “Hello World” program in C. It’s a darkly comic example of the slapstick violence that developers inflict on one another.

It’s not just error messages and documentation. Today’s tools and frameworks overflow with violence. Its omnipresence inures us to it; we cast blame anywhere but where we should. All developers suffer for it but new developers suffer disproportionately more.

Anil Dash gave a delightful keynote address at last week’s Oh, the Humanity! conference.

I checked and was not surprised to learn that Anil and I are very close in age. We were kids when personal computers were new and we were probably both in middle or high school when the web was born. We seem to have similar perspectives on the why of the open web: why building a more open web is — in addition to being fun — an important public good.

Anil’s framing is very personal, though, and I found it very moving. The full talk is available on YouTube and feels like it deserves more than its current ~650 views.

Otis Health

Today we launched our newest PSL spinout, Otis Health.

In its first iteration, Otis offers a free discount pharmacy card to the underserved 1099 worker market. A shocking percentage of contract workers have no insurance whatsoever; saving even a handful of dollars on medications can make a meaningful difference. Otis can often save much more than that.

Otis Health: as easy as 1-2-3

I’m excited about Otis on three fronts. First, it brings a modern design and user experience to a market that until today has sorely lacked it. Second, Otis’ discount card is free to use; all the economic magic happens behind the scenes, in the form of complex contracts between retail chains, benefits managers, distributors, and manufacturers. Finally, Otis has its eyes firmly fixed on several adjacent services that we think will also meaningfully improve the lives of 1099 workers in the future.

Oh, and one more (the most important!) reason to be excited: the team. It’s been a blast working with Aaron, Luke, Sanford, Sharon, and Steve to get this thing out the door.

Craig Hockenberry, writing on his long-lived personal blog:

Well, it happened.

We knew it was coming.

A prick pulled the plug.

Over the weekend, Tweetbot, Twitterrific, and every other popular third-party Twitter client was unceremoniously banned. It’s a stupid petty move on Twitter’s part, executed in an impressively stupid petty way. I imagine it’s the final nail in the coffin for several high-profile Twitter hangers-on.

Most of the people I follow, though? They’re long gone.

Ran across a software blog post that made me feel, well, old:

In traditional web applications, web pages are rendered on the client side. The browser receives a blob of JavaScript from the server, processes it, and paints the UI that the user sees.

This is not the “tradition” I grew up with!

(It’s also not a “tradition” I’ve ever particularly loved.)

Me, five weeks ago:

For me, it’s not really about Elon.

Musk, yesterday:

My pronouns are Prosecute/Fauci

Yeah, well, I take it back. It is about Elon. Once upon a time, Twitter was fun. It’s a shame a midwit troll had to come along and wreck the party.

I suppose all things must end. Onward.

Definitely the best and most important bit of music my 8yo and I have made together:

Cats Rule (Dogs Drool)

Phase Transitions

GPT-3 shipped two years ago. Its capabilities, and those of descendant language models like OpenAI Codex, astonished me on day one; two years later, I’m still just as astonished.

Earlier this week OpenAI announced ChatGPT, a new variant intended to be used in interactive dialogue. With ChatGPT I find myself astonished all over again.

One delightful discovery is that back-and-forth conversation is a good way to build code. It’s a form of literate programming that Knuth probably never imagined.

I paid for GitHub Copilot the moment I could. Copilot, which builds on top of Codex, is more than just astonishing: it’s useful. Yet, as impressive as Copilot is, playing with ChatGPT makes it clear that Copilot has the potential to do vastly more even today.

The Internet has done its work and there are too many fun ChatGPT code samples to choose from. The most compelling examples refine code just by talking it over. There are also countless short examples that might have been accomplished with the previous generation of Codex or GPT-3 but that seem to have renewed potency today, like explaining a bug or exploiting a buffer overflow. But if I were to highlight just one example that seems to capture the moment, it might be this complete absurdity from Riley Goodside:

Wise guys get wise to big-O

Computers pretending to be gangsters with a knack for complexity theory. That is where we’re at today.

We’re at one of those handful of moments when our industry undergoes a deep and lasting phase transition. It’s easy to draw parallels with previous transitions like the advent of the Web in 1991. Then, as now, a new technology was introduced that collected recent advances into a package that felt wholly new. Even in its first version, that new technology was instantly useful in spite of its obvious flaws. It was easy to imagine a long road of improvements ahead. Most of all: people couldn’t look away. The Web struck like lightning. Large language models? Much the same.

Having said this, there’s also something that feels entirely different to me about this moment. We’re playing with language. Language is primal. It is quintessentially human. It is fire. It’s no accident that GPT-3 was built by an organization whose stated ambition is to develop the world’s first artificial general intelligence.

What lies on the other side of this transition? I don’t know; until we get there, I’ll have to content myself with those fast-talkin’ wise guys. Hopefully they can teach me another thing or two about computer programming along the way.

It’s snowing outside. I decide to listen to A Charlie Brown Christmas. A bit early, but still: snow!

At the same time, I’m playing with GPT-3’s new text-davinci-003 model. It’s impressive.

A couple weeks ago, Adam and I tried to get GPT-3 to spit out chord progressions. Today’s updated GPT-3 seems pretty good: sure, some of my prompts result in sonic (or textual) mayhem, but many more result in ii-V-I and other legible changes.
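
(Those “legible changes” can be checked mechanically. A little sketch of my own — not part of our prompts — that derives the diatonic seventh chords of a ii-V-I in any major key:)

```python
# Spell the ii-V-I progression in a major key by stacking diatonic thirds.
# Uses flat spellings throughout, which suits the keys in question here.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def diatonic_seventh(key: str, degree: int) -> str:
    """Name the diatonic seventh chord built on a 1-based scale degree."""
    scale = [(NOTES.index(key) + step) % 12 for step in MAJOR_STEPS]
    root = scale[degree - 1]
    third = scale[(degree + 1) % 7]    # two scale steps up from the root
    seventh = scale[(degree + 5) % 7]  # six scale steps up from the root
    if (seventh - root) % 12 == 11:
        quality = "maj7"
    elif (third - root) % 12 == 3:
        quality = "m7"
    else:
        quality = "7"  # dominant: major third, minor seventh
    return NOTES[root] + quality

def two_five_one(key: str) -> list[str]:
    return [diatonic_seventh(key, d) for d in (2, 5, 1)]

print(two_five_one("C"))  # ['Dm7', 'G7', 'Cmaj7']
```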

I punch in an oddball prompt:

Give me the chords to a straight-ahead jazz reharmonization of Erik Satie's Gymnopedie No. 1. A simple list will do.

GPT-3 replies:

Ebmaj7 - Bbmaj7 - Ebmaj7 - Bbmaj7 - Cm7 - Fm7 - Gm7 - Bb7 - Eb7 - F7

Okay, those first repeating chords make some sense; I can tack a melody to that. The next few are a bit mysterious, but I give it a go and eventually find a way to make them fit. I toy with chromatic approaches while Vince Guaraldi sinks in and somehow end up with… well, “Gymnopedie No. GPT-3” I suppose.

Gymnopedie No. GPT-3

I’m taking a break from Twitter.

My usage tailed off a while ago when I realized that my old-school RSS reader was once again both more valuable and more joyful. Blogs have had quite the resurgence in the past three or four years; I want to be a part of that!

For me, it’s not really about Elon. The timing just feels right.