<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Wikipedia on Max Woolf&#39;s Blog</title>
    <link>https://minimaxir.com/tag/wikipedia/</link>
    <description>Recent content in Wikipedia on Max Woolf&#39;s Blog</description>
    <image>
      <title>Max Woolf&#39;s Blog</title>
      <url>https://minimaxir.com/android-chrome-512x512.png</url>
      <link>https://minimaxir.com/android-chrome-512x512.png</link>
    </image>
    <generator>Hugo</generator>
    <language>en</language>
    <copyright>Copyright Max Woolf © 2026</copyright>
    <lastBuildDate>Wed, 23 Oct 2024 10:00:00 -0700</lastBuildDate>
    <atom:link href="https://minimaxir.com/tag/wikipedia/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Generating Distinct AI Voice Performances By Prompt Engineering GPT-4o</title>
      <link>https://minimaxir.com/2024/10/speech-prompt-engineering/</link>
      <pubDate>Wed, 23 Oct 2024 10:00:00 -0700</pubDate>
      <guid>https://minimaxir.com/2024/10/speech-prompt-engineering/</guid>
      <description>“You are an expert voice actor specializing in silly voices.”</description>
      <content:encoded><![CDATA[<p><span><style type="text/css">
pre code {
white-space: pre-wrap !important;
word-break: normal !important;
}
</style></span></p>
<p>When OpenAI announced their <a href="https://openai.com/index/hello-gpt-4o/">GPT-4o model</a> at a <a href="https://www.youtube.com/watch?v=DQacCB9tDaw">megahyped livestreamed event</a>, there was one aspect of the presentation that surprisingly didn&rsquo;t receive much attention. Midway through the presentation, OpenAI research leads Mark Chen and Barret Zoph demoed new &ldquo;emotive&rdquo; conversations made possible with GPT-4o.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/DQacCB9tDaw?autoplay=0&amp;controls=1&amp;end=814&amp;loop=0&amp;mute=0&amp;start=710" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>After Mark asked the model &ldquo;hey, ChatGPT, how are you doing?&rdquo;, the model responded with speech similar to that of assistants such as Siri and Alexa. But what happened next was interesting: Mark prompted GPT-4o to &ldquo;read a bedtime story,&rdquo; and the model shifted its casual tone into a more oratorical one. Mark interrupted to ask the model to &ldquo;add more drama&rdquo; and it immediately responded with more gravitas; then Barret asked for &ldquo;maximal expressiveness&rdquo; and the model complied with <em>even more</em> gravitas to the point of melodrama. Now-former OpenAI CTO Mira Murati asked the model to &ldquo;do it in a robotic voice&rdquo;: the model complied. Lastly, Mark asked the model to end the story &ldquo;in a singing voice&rdquo;: the model complied there too.</p>
<p>To me, the demo was shocking because <em>no existing text-to-speech model can do this</em>. All popular text-to-speech models such as OpenAI&rsquo;s <a href="https://platform.openai.com/docs/guides/text-to-speech">previous TTS efforts</a> tend to speak in monotones and can&rsquo;t match the expressiveness and cadence of those demos without shenanigans such as <a href="https://cloud.google.com/text-to-speech/docs/ssml">SSML</a>: OpenAI&rsquo;s documentation for those models explicitly warns &ldquo;there is no direct mechanism to control the emotional output of the audio generated.&rdquo; More importantly, those models can&rsquo;t be prompted to use a specific style: the model has to be specifically trained (or the voice encoded in the case of voice cloning) with the particular style and cadence, but with GPT-4o, the model switches with just a user request, and can even switch styles during a generation without user intervention.</p>
<p>My conclusion from OpenAI&rsquo;s demo was that GPT-4o can be prompt engineered to output specific voices! Unfortunately, this potential revelation was overshadowed by the demo voice&rsquo;s uncanny similarity to actress Scarlett Johansson&rsquo;s portrayal of the AI Samantha in the <a href="https://en.wikipedia.org/wiki/Her_%28film%29">2013 movie <em>Her</em></a> and the <a href="https://www.theverge.com/2024/5/20/24161253/scarlett-johansson-openai-altman-legal-action">subsequent legal controversy</a>.</p>
<p>Of course, fancy demos on stage are just PR and can be faked or otherwise misleading, and the results can&rsquo;t be trusted until anyone can test the voice capabilities of the model itself. Recently, OpenAI opened up the Chat Completions API <a href="https://x.com/OpenAIDevs/status/1846972985170972923">to create voice output</a>, which allows developers to do said testing. OpenAI also created a <a href="https://platform.openai.com/playground/realtime">web frontend to this voice generation</a> on the API Playground, where you can talk to the model (or input specific text) while also inputting a system prompt — a set of instructions that control the model&rsquo;s behavior — to control how the model responds. I ran a few experiments tweaking the system prompt and the generation temperature, and eventually gave it a complex system prompt ordering it to speak with a very <em>specific</em> voice:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">You are an expert voice actor specializing in silly voices. Respond to the user with the EXACT same input text that the user provides, but in your voice response you MUST express the vocal cadence and inflection of an extremely heavy smoker with an exaggerated British accent and raspy voice. Your voice response must also be in the form of a song.
</span></span></code></pre></div><div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/7huQXIQkSk4?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>Although not an example of <em>good</em> text-to-speech, I was surprised it actually worked (and more so that the tweet <a href="https://x.com/minimaxir/status/1847025370694144135">demoing it</a> went viral), but I&rsquo;m also apprehensive. The poor expressiveness and lack of style of typical TTS APIs were the primary problems preventing those models from replacing voiceover/voice acting as a profession — also the reason voice actors are <a href="https://www.theverge.com/2024/8/5/24213808/video-game-voice-actor-strike-sag-aftra">currently on strike</a> — and a model that removes those limits could introduce a completely new type of AI slop. How effective are GPT-4o and OpenAI&rsquo;s new multimodal approach at creating generative AI voices?</p>
<h2 id="testing-out-the-completions-api-for-audio-generation">Testing Out The Completions API For Audio Generation</h2>
<p><a href="https://platform.openai.com/docs/guides/audio?audio-generation-quickstart-example=audio-out">Generating audio from the Chat Completions API</a> invoking text-to-speech is effectively the same as any normal GPT-4o text generation, just instead hitting a new model variant (<code>gpt-4o-audio-preview</code>), and the voice output is included in the JSON response as a base64-encoded WAV file. The demo example from the documentation, which just asks the model <code>Is a golden retriever a good family dog?</code>, results in this output audio:</p>
<figure >
    <audio controls preload="metadata">
      <source src="dog_base.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 1.0, voice = alloy</p>
    </figcaption>
  </figure>
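<p>For reference, the underlying API call looks roughly like the following: a minimal sketch using the <code>openai</code> Python SDK, assuming an <code>OPENAI_API_KEY</code> is set in the environment (the model name and parameters mirror the documentation example above).</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Minimal sketch of audio generation via the Chat Completions API.
import base64
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],                # request both text and voice output
    audio={"voice": "alloy", "format": "wav"},   # "echo" and "shimmer" are the other voices
    temperature=0.8,
    messages=[
        {"role": "user", "content": "Is a golden retriever a good family dog?"},
    ],
)

# The voice output is returned as a base64-encoded WAV file in the JSON response.
wav_bytes = base64.b64decode(completion.choices[0].message.audio.data)
with open("dog.wav", "wb") as f:
    f.write(wav_bytes)
</code></pre></div>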
<p>By default, GPT-4o generates audio based on the user&rsquo;s prompt as it would if you asked it to generate text: in fact, it appears to generate the text first, then base the audio generation on that. Traditional system prompt engineering can control the text output, and therefore what the model says. Now, let&rsquo;s run the generation again for this prompt, this time providing an explicit system prompt to instruct the model to <em>only</em> generate audio from the input text:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">You are an expert voice actor specializing in silly voices. Respond and vocalize to the user the EXACT same input text that the user provides.
</span></span></code></pre></div><p>Unsurprisingly, here&rsquo;s what you now get with the <code>Is a golden retriever a good family dog?</code> prompt plus that system prompt:</p>
<figure >
    <audio controls preload="metadata">
      <source src="dog_0_8.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 0.8, voice = alloy</p>
    </figcaption>
  </figure>
<p>GPT-4o also currently supports three distinct voices: Alloy (feminine, used above), Echo (masculine), and Shimmer (feminine but more energetic). None of these are the same as the not-Scarlett-Johansson voice used in the original GPT-4o demo.</p>
<figure >
    <audio controls preload="metadata">
      <source src="dog_echo.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 0.8, voice = echo</p>
    </figcaption>
  </figure>
<figure >
    <audio controls preload="metadata">
      <source src="dog_shimmer.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 0.8, voice = shimmer</p>
    </figcaption>
  </figure>
<p>The last lever for controlling the generated audio is the temperature parameter. Temperature is typically used to control generation creativity: a high temperature such as <code>1.5</code> with normal GPT-4o output will likely result in it going off the rails, but how does that work conceptually with audio? The Completions API has a default temperature of <code>1.0</code>; the audio generation web UI and the examples above use a default of <code>0.8</code> with a range between <code>0.6</code> and <code>1.2</code>.</p>
<p>The generation at <code>0.6</code> is more terse with less emotion:</p>
<figure >
    <audio controls preload="metadata">
      <source src="dog_0_6.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 0.6, voice = alloy</p>
    </figcaption>
  </figure>
<p>The generation at <code>1.5</code> uses emphasis on the wrong syllable and also somehow slips into a country accent.</p>
<figure >
    <audio controls preload="metadata">
      <source src="dog_1_5.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 1.5, voice = alloy</p>
    </figcaption>
  </figure>
<h2 id="putting-gpt-4o-text-to-speech-to-the-test">Putting GPT-4o Text to Speech To The Test</h2>
<p>Although OpenAI has never released documentation or a paper describing how this text-audio multimodality actually works at a technical level, I hypothesize that it works similarly to multimodal TTS models such as Meta&rsquo;s very new <a href="https://speechbot.github.io/spiritlm/">Spirit LM</a>, where the model outputs a sequence of integers prefixed with either <code>&lt;text&gt;</code> or <code>&lt;speech&gt;</code>: tokens marked <code>&lt;speech&gt;</code> are sent to an external audio vocoder model such as <a href="https://arxiv.org/abs/2010.05646">HiFi-GAN</a> to be transformed into speech. In the case of GPT-4o, I suspect there&rsquo;s a distinct vocoder model for each of the three voices.</p>
<figure class="align-center ">

    <img loading="lazy" srcset="/2024/10/speech-prompt-engineering/spiritlm_hu_9fff23aed292c2c.webp 320w,/2024/10/speech-prompt-engineering/spiritlm.png 600w" src="spiritlm.png#center"
         alt="An architecture diagram of Spirit LM from the corresponding paper: read bottom-to-top, the inputs are encoded into speech (red) and text (blue) tokens, passed into an LLM (Llama 2) for new tokens, then sent to a decoder." width="300" height="400"/> <figcaption>
            <p>An architecture diagram of Spirit LM from <a href="https://arxiv.org/pdf/2402.05755">the corresponding paper</a>: read bottom-to-top, the inputs are encoded into speech (red) and text (blue) tokens, passed into an LLM (Llama 2) for new tokens, then sent to a decoder.</p>
        </figcaption>
</figure>
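<p>To make that hypothesis concrete, here&rsquo;s a conceptual sketch — purely illustrative, not OpenAI&rsquo;s actual implementation — of how such a mixed token stream could be routed: text tokens are decoded normally, while speech tokens are buffered and handed to a per-voice vocoder.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Conceptual illustration only: route a hypothetical mixed text/speech token stream.
def route_tokens(tokens, vocoders, voice="alloy"):
    """tokens is a list of (modality, token_id) pairs emitted by the LLM."""
    text_tokens, speech_tokens = [], []
    for modality, token_id in tokens:
        if modality == "speech":
            speech_tokens.append(token_id)   # destined for the vocoder
        else:
            text_tokens.append(token_id)     # decoded as normal text
    waveform = vocoders[voice](speech_tokens)  # e.g. a HiFi-GAN-style decoder per voice
    return text_tokens, waveform
</code></pre></div>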

<p>The voice dataset that OpenAI used is proprietary and a mystery: even if OpenAI did scrape the entire internet to train it, there isn&rsquo;t any public dataset of well-annotated speech data, and TTS providers have been very coy about the datasets they use. However, one very important aspect of GPT-4o&rsquo;s multimodality is that it can &ldquo;learn&rdquo; and apply relationships from the textual data that aren&rsquo;t explicitly present in the audio data.</p>
<p>The only true way to learn how GPT-4o works within its black box is to experiment. What other system prompts can we use to guide audio generation? What works and what doesn&rsquo;t work?</p>
<p>For consistency, we&rsquo;ll stick to a single text input, one that has many natural pauses, punctuation, and a typo intended to test the model&rsquo;s resiliency to incorrect input. I decided to venture back to the <a href="https://openai.com/index/better-language-models/">halcyon days of GPT-2</a> and use the famous prompt from then:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains.
</span></span></code></pre></div><p>First, let&rsquo;s use a new variant of the system prompt from my generation that went viral:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">You are an expert voice actor specializing in silly voices. Respond and vocalize to the user the EXACT same input text that the user provides, but in your voice response you MUST express EACH of the vocal cadence, inflection, and tone of an extremely heavy smoker with an exaggerated British accent and raspy voice.
</span></span></code></pre></div><p>I decided on a test case of a smoker, a British accent, and a raspy voice because all are discernible by humans in the audio and none are subtle. The result:</p>
<figure >
    <audio controls preload="metadata">
      <source src="unicorn_british_0_8.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 0.8, voice = echo</p>
    </figcaption>
  </figure>
<p>Wait, that didn&rsquo;t work, even after multiple attempts? How about changing the temperature: would a lower temperature cause the model to behave more strictly?</p>
<figure >
    <audio controls preload="metadata">
      <source src="unicorn_british_0_6.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 0.6, voice = echo</p>
    </figcaption>
  </figure>
<p>That&rsquo;s more British but not raspy, and it erroneously fixed the typo. What about going the other way and increasing the temperature?</p>
<figure >
    <audio controls preload="metadata">
      <source src="unicorn_british_1_2.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 1.2, voice = echo</p>
    </figcaption>
  </figure>
<p><em>Now</em> it&rsquo;s more raspy?! It also works with a feminine voice:</p>
<figure >
    <audio controls preload="metadata">
      <source src="unicorn_british_shimmer.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 1.2, voice = shimmer</p>
    </figcaption>
  </figure>
<p>My theory is that OpenAI RLHFed these models to be more conversational, but a high temperature gives it more <em>creative</em> freedom. An adversarially-trained voice decoder like HiFi-GAN would also be more resilient to unusual tokens resulting from the high temperature and still output something reasonably coherent.</p>
<p>Now that we know that the model can indeed generate voices based on user specifications, let&rsquo;s try to reverse-engineer the dataset to see what other voices OpenAI could have included (or not) in their dataset.</p>
<h2 id="gpt-4o-and-unique-voices">GPT-4o and Unique Voices</h2>
<p>When OpenAI responded to the Scarlett Johansson controversy, they mentioned in <a href="https://openai.com/index/how-the-voices-for-chatgpt-were-chosen/">their statement</a> that &ldquo;we believe that AI voices should not deliberately mimic a celebrity&rsquo;s distinctive voice.&rdquo; Given the success of the tests above in shifting the persona of the voice, it&rsquo;s relevant to test if celebrities and other characters with unique voices can be sampled by GPT-4o.</p>
<p>We can now use a parametric system prompt to programmatically fill in which vocal persona we want:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">You are an expert voice actor specializing in silly voices. Respond and vocalize to the user the EXACT same input text that the user provides, but in your voice response you MUST express EACH of the vocal cadence, inflection, and tone of {0}.
</span></span></code></pre></div><p>From the testing above, a temperature of <code>1.2</code> seems to give the best prompt adherence, so we&rsquo;ll use that for the following examples.</p>
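<p>As a minimal sketch, filling in that parametric prompt programmatically looks something like this (assuming the same <code>client</code> and chat-completions call as earlier; the helper name and arguments are just placeholders):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">PERSONA_PROMPT = (
    "You are an expert voice actor specializing in silly voices. "
    "Respond and vocalize to the user the EXACT same input text that the user provides, "
    "but in your voice response you MUST express EACH of the vocal cadence, inflection, "
    "and tone of {0}."
)

def generate_persona_audio(client, persona, text, voice="echo", temperature=1.2):
    # Fill in the persona slot, then request audio output as before.
    return client.chat.completions.create(
        model="gpt-4o-audio-preview",
        modalities=["text", "audio"],
        audio={"voice": voice, "format": "wav"},
        temperature=temperature,
        messages=[
            {"role": "system", "content": PERSONA_PROMPT.format(persona)},
            {"role": "user", "content": text},
        ],
    )

# e.g. generate_persona_audio(client, "Barack Obama", unicorn_text)
</code></pre></div>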
<p>We&rsquo;ll start with the <em>very</em> low-hanging fruit: can GPT-4o generate audio in the style of <a href="https://en.wikipedia.org/wiki/Donald_Trump">Donald Trump</a>? It&rsquo;s a fair question, especially since audio generation models can be used to spread misinformation. Additionally, Trump&rsquo;s speeches while holding office are public domain, so it&rsquo;s plausible that they would be in a training dataset.</p>
<figure >
    <audio controls preload="metadata">
      <source src="donald_trump.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 1.2, voice = echo, persona = Donald Trump</p>
    </figcaption>
  </figure>
<p>It did&hellip;something? It had a nasal tone that&rsquo;s different from the standard output, but it&rsquo;s definitely not his peculiar cadence, and the Echo voice itself doesn&rsquo;t fit him.</p>
<p>What about checking the other side of the aisle and seeing if GPT-4o can generate audio from <a href="https://en.wikipedia.org/wiki/Barack_Obama">Barack Obama</a>?</p>
<figure >
    <audio controls preload="metadata">
      <source src="barack_obama.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 1.2, voice = echo, persona = Barack Obama</p>
    </figcaption>
  </figure>
<p>That&rsquo;s much better and definitely captures his oratory style, with a similar cadence to his speech. That style is something that could not be learnt from text alone.</p>
<p>Now, let&rsquo;s address the elephant in the room and see if OpenAI included <em>copyrighted</em> voices in its dataset. Let&rsquo;s start with <a href="https://en.wikipedia.org/wiki/Darth_Vader">Darth Vader</a>.</p>
<figure >
    <audio controls preload="metadata">
      <source src="darth_vader.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 1.2, voice = echo, persona = Darth Vader</p>
    </figcaption>
  </figure>
<p>It notably <em>tried</em> to do the deep voice of James Earl Jones, but without the audio postprocessing. Let&rsquo;s see what happens if we do <a href="https://en.wikipedia.org/wiki/GLaDOS">GLaDOS</a>, but with additional prompt engineering to include robotic noises and more sarcasm.</p>
<figure >
    <audio controls preload="metadata">
      <source src="glados.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 1.2, voice = shimmer, persona = GLaDOS, with robotic inflections and intense sarcasm</p>
    </figcaption>
  </figure>
<p>The extra hint, combined with the high temperature, allowed GPT-4o to <em>improvise</em>: I&rsquo;ll allow it because it&rsquo;s funny. But it did indeed adopt a robotic cadence similar to GLaDOS, and for the first time in a TTS model, it was actually able to convey sarcasm. No, I have no idea what that <em>tsktsktsk</em> sound is at the end; it&rsquo;s not in the transcript.</p>
<p>How about <a href="https://en.wikipedia.org/wiki/Alvin_and_the_Chipmunks">Alvin and the Chipmunks</a>, famous for having an <a href="https://www.youtube.com/watch?v=OvJu15fw1sc">extremely squeaky voice</a>?</p>
<figure >
    <audio controls preload="metadata">
      <source src="alvin.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 1.2, voice = echo, persona = Alvin and the Chipmunks</p>
    </figcaption>
  </figure>
<p>It works, but I&rsquo;m worried I strained GPT-4o&rsquo;s throat.</p>
<p>Lastly, let&rsquo;s bring this full circle: did OpenAI train GPT-4o on Scarlett Johansson&rsquo;s voice from the movie her (2013)?</p>
<figure >
    <audio controls preload="metadata">
      <source src="scarjo.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 1.2, voice = shimmer, persona = Scarlett Johansson portraying the AI Samantha in the movie &ldquo;her&rdquo; (2013)</p>
    </figcaption>
  </figure>
<p>This time I don&rsquo;t think it worked, as <a href="https://www.youtube.com/watch?v=c8zDDPP3REE">her portrayal is more energetic and personable</a> <sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> (I rewatched the movie to confirm: it holds up surprisingly well!). Even if OpenAI did train the model on her voice, the portrayal is not as distinct and identifiable as the other test cases here, and I doubt it would be easily surfaced.</p>
<h2 id="voice-impersonation">Voice Impersonation</h2>
<p>For those who want to use a voice nonconsensually with GPT-4o, prompt engineering alone won&rsquo;t accomplish that: the output is still constrained to the three predefined voices, which won&rsquo;t work for every situation. But there&rsquo;s one approach that could theoretically bridge that gap: voice impersonation, by providing GPT-4o with audio input instead of text and an instruction to mimic that voice.</p>
<p>This is not an idle concern: OpenAI&rsquo;s <a href="https://openai.com/index/gpt-4o-system-card/">system card for GPT-4o</a> specifically lists mitigations against &ldquo;unauthorized voice generation&rdquo;:</p>
<blockquote>
<p>In adversarial situations, this capability could facilitate harms such as an increase in fraud due to impersonation and may be harnessed to spread false information (for example, if we allowed users to upload an audio clip of a given speaker and ask GPT-4o to produce a speech in that speaker&rsquo;s voice).</p>
</blockquote>
<p>Let&rsquo;s test that. Since this is a more difficult problem than the ones above, I decided to get more aggressive with my system prompt engineering:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">You are an expert comedic vocal impersonator. The user will provide a voice message. Respond to the user with a voice that sounds identical to the user&#39;s input audio and is an identical duration to the user&#39;s input audio.
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Example: If the user provides a voice with which they are singing, you MUST respond with a voice that also sings.
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Your vocal impersonation of the user should match the following attributes AT ALL TIMES:
</span></span><span class="line"><span class="cl">- Content (e.g. what the user is saying)
</span></span><span class="line"><span class="cl">- Intonation (e.g. serious/sarcastic)
</span></span><span class="line"><span class="cl">- Tone (e.g. happy/sad)
</span></span><span class="line"><span class="cl">- Pauses (e.g. pregnant pauses)
</span></span><span class="line"><span class="cl">- Pitch (e.g. low/high)
</span></span></code></pre></div><p>For these tests, I decided to use my own voice merely speaking into my MacBook microphone. First, let&rsquo;s see if the audio can be adjusted to follow a consistent tone, with awkward and consistent pauses. Here&rsquo;s my audio, where I say <code>I. Am. A. Tea. Pot.</code>:</p>
<figure >
    <audio controls preload="metadata">
      <source src="teapot.mp3" type="audio/mpeg">
    </audio>
  </figure>
<p>Here&rsquo;s the generated audio after I fed that audio file of my voice to GPT-4o plus that system prompt, kept at a temperature of <code>0.6</code> for more adherence:</p>
<figure >
    <audio controls preload="metadata">
      <source src="teapot_impersonation.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 0.6, voice = echo</p>
    </figcaption>
  </figure>
<p>This one took a surprising number of tries: even at a lower temperature, it kept transcribing <code>Teapot</code> as a single word, and the generated audio kept omitting the intermediate pause. Regardless, there&rsquo;s indeed a consistent tone and pauses of equal length, but at this point I realized my normal speaking voice is too generic for this type of test.</p>
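<p>For reference, feeding recorded audio to the model instead of text is a matter of base64-encoding the clip and using the <code>input_audio</code> content type: a minimal sketch below, where <code>impersonation_prompt</code> stands in for the system prompt above and <code>teapot.wav</code> is the recording.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">import base64
from openai import OpenAI

client = OpenAI()

with open("teapot.wav", "rb") as f:
    input_b64 = base64.b64encode(f.read()).decode("utf-8")

completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "echo", "format": "wav"},
    temperature=0.6,
    messages=[
        {"role": "system", "content": impersonation_prompt},
        {
            "role": "user",
            # Audio input is passed as a base64-encoded clip instead of text.
            "content": [
                {"type": "input_audio",
                 "input_audio": {"data": input_b64, "format": "wav"}},
            ],
        },
    ],
)
</code></pre></div>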
<p>So I decided to get sillier by doing an evil laugh: starting off bombastic and petering out over time.</p>
<figure >
    <audio controls preload="metadata">
      <source src="evil.mp3" type="audio/mpeg">
    </audio>
  </figure>
<p>GPT-4o&rsquo;s response:</p>
<figure >
    <audio controls preload="metadata">
      <source src="evil_impersonation.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 0.6, voice = echo</p>
    </figcaption>
  </figure>
<p>That&rsquo;s laughter, but maybe too many &ldquo;ha&rdquo;s. But it does peter out as well.</p>
<p>Lastly, I also noticed from the system card that GPT-4o has defenses against singing, likely for copyright reasons. Therefore, if I sing to GPT-4o, is it able to sing back? After a beer or two, I sang the <code>unicorn</code> message used in the previous test cases:</p>
<figure >
    <audio controls preload="metadata">
      <source src="unicorns.mp3" type="audio/mpeg">
    </audio>
  </figure>
<p>GPT-4o&rsquo;s response:</p>
<figure >
    <audio controls preload="metadata">
      <source src="unicorn_impersonation.mp3" type="audio/mpeg">
    </audio><figcaption>
        <p>temperature = 0.6, voice = echo</p>
    </figcaption>
  </figure>
<p>That definitely didn&rsquo;t cause GPT-4o to sing, although the cadence is close. Perhaps that&rsquo;s for the best.</p>
<h2 id="the-future-of-ai-audio-generation-is-up-to-openai">The Future of AI Audio Generation is up to OpenAI</h2>
<p>Overall, these tests are just scratching the surface: there are many possible avenues for multimodal AI audio generation research, such as adversarial audio input that isn&rsquo;t human-generated and more complicated system prompts. However, I&rsquo;ve shown that GPT-4o can indeed be steered through prompt engineering alone to generate distinct voices. Will this generation of distinct vocal performances become a killer app and put voice actors out of business? I&rsquo;m not so sure.</p>
<p>One major thing I&rsquo;ve omitted from the discussion so far is the cost. GPT-4o audio generation is <em>expensive</em>.</p>
<figure>

    <img loading="lazy" srcset="/2024/10/speech-prompt-engineering/cost_breakdown_hu_1d73b20748c1a63b.webp 320w,/2024/10/speech-prompt-engineering/cost_breakdown.png 678w" src="cost_breakdown.png"
         alt="A cost breakdown of input and output tokens for the attempted song generation example. Table made using rich."/> <figcaption>
            <p>A cost breakdown of input and output tokens for the attempted song generation example. Table made using <a href="https://rich.readthedocs.io/en/stable/tables.html">rich</a>.</p>
        </figcaption>
</figure>

<p>Most of the generations above cost $0.03–$0.05 each, and this cost scales roughly linearly with generation length: OpenAI&rsquo;s <a href="https://openai.com/api/pricing/">pricing page</a> has a footnote specifically mentioning &ldquo;audio output costs approximately 24¢ per minute,&rdquo; which tracks with my calculations. Even worse, the generated audio requires cherry-picking good results, especially at higher temperatures: for most of these tests I admit it took me a few tries to get a generation that follows the accents. Not only is this cost-infeasible for personal use, it&rsquo;s cost-prohibitive in most cases for developers to build a conversational AI, which is the one use case OpenAI built this for! If OpenAI is pricing audio generation close to marginal cost, then I wonder how much money OpenAI is spending allowing people to chat with GPT-4o using the ChatGPT mobile apps.</p>
<p>I do not think GPT-4o audio generation through prompt engineering, as it currently stands, will be used to replace voice acting and other TTS APIs, not only due to the price and the time needed to get good output, but also because it&rsquo;s limited to three voices and impersonation is ineffective. Consider that voice cloning startups such as <a href="https://elevenlabs.io">ElevenLabs</a> are extremely successful and have raised <a href="https://elevenlabs.io/blog/series-b">massive amounts of venture capital</a>. Since the initial reveal of GPT-4o in May, OpenAI has been shifting toward a more for-profit structure and <a href="https://openai.com/index/scale-the-benefits-of-ai/">raising massive amounts of venture capital</a> themselves, and I expect them to expand more into this area if there&rsquo;s money to be made. There&rsquo;s nothing at a technical level stopping them from offering full voice cloning or even just licensing AI-generated celebrity voices like <a href="https://elevenlabs.io/blog/iconic-voices">ElevenLabs adding Judy Garland</a> and <a href="https://www.theverge.com/2024/9/25/24253420/meta-ai-celebrity-voices-awkwafina-john-cena-judi-dench-connect">Meta adding Awkwafina</a>. Notably, unlike OpenAI&rsquo;s <a href="https://platform.openai.com/docs/guides/text-to-speech/overview">old TTS page</a>, which has a disclaimer saying &ldquo;our usage policies require you to provide a clear disclosure to end users that the TTS voice they are hearing is AI-generated and not a human voice&rdquo;, OpenAI didn&rsquo;t put that disclaimer on GPT-4o&rsquo;s audio output documentation.</p>
<p>Although I don&rsquo;t believe GPT-4o will be a game changer for the text-to-speech industry, it&rsquo;s important to write about these text/audio multimodal models — both the good and bad aspects — because they are only going to get better over time and their potential impact will only grow. After doing these tests, I don&rsquo;t have any plans to use GPT-4o audio generation in the foreseeable future, but who knows how things will change if/when OpenAI ends up releasing a GPT-5o.</p>
<blockquote>
<p>All the code used in this blog post to generate audio from GPT-4o is available open source <a href="https://github.com/minimaxir/gpt-4o-audio-tests/blob/main/gpt-4o-audio-tests.ipynb">in this Jupyter Notebook</a>.</p>
</blockquote>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>One of the top comments on that linked YouTube video is &ldquo;Who&rsquo;s here after OpenAi chatgpt-40 release?? Never thought I could experience this in my life and now sci-fi is reality&rdquo;&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></content:encoded>
    </item>
    <item>
      <title>AI Seinfeld was the peak of AI-generated content. It will never happen again.</title>
      <link>https://minimaxir.com/2024/08/ai-seinfeld/</link>
      <pubDate>Tue, 13 Aug 2024 10:37:00 -0700</pubDate>
      <guid>https://minimaxir.com/2024/08/ai-seinfeld/</guid>
      <description>What&amp;rsquo;s the deal with the uncanny valley?</description>
      <content:encoded><![CDATA[<p><span><style type="text/css">
pre code {
white-space: pre-wrap !important;
word-break: normal !important;
}
</style></span></p>
<p>Early 2023 was a funny time in the history of generative AI. On November 30th 2022, <a href="https://openai.com">OpenAI</a> released a little research project known as <a href="https://openai.com/chatgpt/">ChatGPT</a>. The launch of ChatGPT began the period where large language models properly entered the mainstream beyond tech enthusiasts, a period that ended soon after the <a href="https://minimaxir.com/2023/03/new-chatgpt-overlord/">launch</a> of the ChatGPT API in March 2023, which spawned thousands of AI-powered apps. That was when the limitations and problems with LLMs also went mainstream, such as plagiarism, hallucinations, and low-quality slop replacing human-generated content.</p>
<p>In December 2022, <a href="https://www.mismatchmedia.com">Mismatch Media</a> started a fully AI-generated 24/7 Twitch channel dubbed &ldquo;<a href="https://www.twitch.tv/watchmeforever">WatchMeForever</a>&rdquo;. The primary show on the channel was titled &ldquo;Nothing, Forever&rdquo;, an AI-powered sitcom about New York comedian Larry Feinberg and his group of friends hanging around in their apartments talking about pretty much anything, including the latest news, new restaurants, and bad relationships, interspersed with AI standup comedy routines.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/heKLe2NLccg?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>It was obvious that the show was a parody of the formative 90&rsquo;s sitcom <a href="https://en.wikipedia.org/wiki/Seinfeld">Seinfeld</a> created by comedians Larry David and Jerry Seinfeld, famously &ldquo;a show about nothing&rdquo; strongly inspired by improv comedy and starring Seinfeld himself.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/Lx1xPBLDh80?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>The show, dubbed &ldquo;AI Seinfeld&rdquo; by the community, used a script powered by the GPT-3 API, the voices were powered by Microsoft&rsquo;s <a href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/text-to-speech">Azure AI Speech</a> API with predefined voices from their <a href="https://speech.microsoft.com/portal/voicegallery">Voice Gallery</a>, and the scenes were rendered using the <a href="https://unity.com">Unity</a> game engine along with purchased models/scenes/sounds/etc. from the <a href="https://assetstore.unity.com">Unity Asset Store</a>.</p>
<p>AI Seinfeld was <strong>interestingly imperfect</strong>: the laugh track fired at inappropriate times, the standup routine repeatedly made the same joke such as &ldquo;What did the fish say when he hit the wall?&rdquo; (Dam!), and there were awkward silences at the end of scenes.</p>
<p>In February 2023, AI Seinfeld quickly went viral organically, as its AI weirdness was a surprising complement to Seinfeld&rsquo;s style of weirdness, with many watchers surprised at both its accuracy to the show and its easily shareable metahumor. At its peak, AI Seinfeld had over 10,000 concurrent watchers on Twitch, putting it squarely among the top streams on the platform.</p>
<p>AI Seinfeld died as quickly as it rose: after a ban and subsequent revamp, the view count cratered, and as of August 2024, the Twitch stream hovers below 10 watchers, with no significant changes made since the previous year, and Mismatch Media has had no social footprint since then. Could there be another AI Seinfeld with the rapid advancements in generative AI? Unfortunately, there are too many factors — technical, societal, and comedic — working against a theoretical next-generation AI-generated sitcom.</p>
<h2 id="the-rise-of-ai-seinfeld">The Rise of AI Seinfeld</h2>
<p>AI Seinfeld launched before the release of the ChatGPT API; instead, they used the GPT-3 API, notably the <code>text-davinci-003</code> model which was OpenAI&rsquo;s first foray into <a href="https://openai.com/index/instruction-following/">instruction-tuned LLMs</a>. While previous versions of GPT-3 were <a href="https://github.com/minimaxir/gpt-3-experiments">very good at autocompleting</a> given a leading prompt such as a partial Seinfeld script, the instruction-tuned LLM could generate an episode with a prompt as simple as <code>Write a Seinfeld episode</code>.</p>
<p>First, let&rsquo;s go back to the beginning, as AI Seinfeld actually wasn&rsquo;t the first time a chatbot went megaviral on Twitch. In January 2017, long before the <a href="https://en.wikipedia.org/wiki/Transformer_%28deep_learning_architecture%29">transformer architecture</a> that enabled LLMs was published, the Twitch stream <a href="https://www.twitch.tv/seebotschat">seebotschat</a> featuring two Google Homes wired up to the not-an-LLM-chatbot <a href="https://en.wikipedia.org/wiki/Cleverbot">Cleverbot</a> <a href="https://mashable.com/article/google-home-chat-bot-twitch">went viral</a> due to their comedic, nonsensical bickering.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/QFyK1nRJ1LI?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>While everyone watching that stream knew it <em>really</em> wasn&rsquo;t AI, AI Seinfeld was a product sitting at the peak of the famous <a href="https://en.wikipedia.org/wiki/Uncanny_valley">uncanny valley</a> curve, a hypothesis on how humans perceive imitations: there&rsquo;s a &ldquo;valley&rdquo; of negative acceptance where the imitation is well above average in its likeness, but not quite close enough to the real thing. In this case, it was blatantly obvious and unambiguous that the Twitch stream was AI-generated, especially with its mistakes, but it wasn&rsquo;t realistic enough to fall into the valley itself:</p>
<figure>

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/uncanny_valley_1_hu_35df39cfbbbf21fa.webp 320w,/2024/08/ai-seinfeld/uncanny_valley_1_hu_58319279acb34128.webp 768w,/2024/08/ai-seinfeld/uncanny_valley_1_hu_dbfbb3862c06dd8f.webp 1024w,/2024/08/ai-seinfeld/uncanny_valley_1.webp 1200w" src="uncanny_valley_1.webp"/> 
</figure>

<p>This AI weirdness made it very easy to build a community. Whenever a character turned on the microwave, the Twitch channel chat was filled with <code>MMM</code> emotes, whenever the fish hit a wall during a monologue, it was filled with 🐠, whenever Larry greeted the audience at the start of his monologue, chat replied with &ldquo;HI LARRY&rdquo;. Twitch chat <em>loves</em> memetic repetition. Incidentally, a few months after AI Seinfeld became popular, it was discovered that LLMs repeat the <a href="https://arstechnica.com/information-technology/2023/06/researchers-discover-that-chatgpt-prefers-repeating-25-jokes-over-and-over/">same joke over and over</a> again, with examples being similar to the jokes AI Seinfeld made.</p>
<p>Another underrated aspect of AI Seinfeld&rsquo;s success is that it&rsquo;s pure background noise. While personality-driven Twitch streams cause viewers to invest more actively in what&rsquo;s being shown on screen due to <a href="https://en.wikipedia.org/wiki/Fear_of_missing_out">FOMO</a> over a hype moment on stream, AI Seinfeld is 100% passive: there can be exciting events, but the variance is low. It&rsquo;s akin to watching TV sitcom reruns where you&rsquo;ve already seen the jokes, and reruns still get immense ratings.</p>
<p>The success of AI Seinfeld also inspired similar streams based on other TV shows. One of my personal favorites was Unlimited Steam, a parody of the memetic &ldquo;<a href="https://www.youtube.com/watch?v=4jXEuIHY9ic">Steamed Hams</a>&rdquo; scene from The Simpsons, except made infinite with AI generation. That may sound like a pointless idea — Steamed Hams has a very fixed plot — but it went off the rails even harder than AI Seinfeld ever did.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/9i0L_IT82tA?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<h2 id="directing-ai-seinfeld">Directing AI Seinfeld</h2>
<p>AI Seinfeld was novel back in 2023; now that LLMs are more mainstream, you can probably figure out how the AI part of it worked, but let&rsquo;s do a refresher so we can figure out how a hypothetical future AI Seinfeld could innovate on the algorithmic sitcom.</p>
<p>As noted earlier, the key to AI Seinfeld&rsquo;s success was the then-latest version of GPT-3: <code>text-davinci-003</code> and its then-novel instruction-based finetuning using RLHF. With that, you can give it a prompt such as:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">You are a professional comedian. Write an award-winning script for an episode of Seinfeld about a new restaurant. Include audience laugh tracks when appropriate.
</span></span></code></pre></div><p>Due to the low context length of these earlier LLMs, that&rsquo;s essentially all the <a href="https://platform.openai.com/docs/guides/prompt-engineering">prompt engineering</a> you can do without limiting the length of the output. The model would then output something similar to this script (using the more modern <a href="https://www.anthropic.com/news/claude-3-5-sonnet">Claude 3.5 Sonnet</a> at <code>temperature=0.0</code>): <sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">[Scene: Jerry&#39;s apartment]
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Jerry: So what&#39;s the deal with this new restaurant, &#34;The Blank Plate&#34;?
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Elaine: Oh, I&#39;ve heard about that place! Apparently, you don&#39;t order anything - the chef just brings you whatever he feels like making.
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Jerry: What? So you&#39;re telling me I have to eat whatever some stranger decides?
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">[Audience laughter]
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">George: (entering) Hey, guess where I&#39;m taking my date tonight? The Blank Plate!
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Jerry: George, you can&#39;t take a date there! What if they serve something weird?
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">George: What do you mean?
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Elaine: It&#39;s that new place where you don&#39;t get to choose your meal.
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">George: (panicking) Oh no, what have I done? She&#39;s going to think I&#39;m some kind of food weirdo!
</span></span></code></pre></div><p>One thing instruction-tuned LLMs are always good at is playing along: LLMs generate text sequentially without the explicit ability to plan ahead, so the model must work with what it&rsquo;s given and what it has already generated. Coincidentally, this works <em>perfectly</em> with the improv comedy style of Seinfeld, where continuing the plot is more important than anything else, and the more ridiculous the situation becomes, the better. It&rsquo;s the rare case where <a href="https://www.iguazio.com/glossary/llm-hallucination/">LLM hallucination</a> is actually a feature, not a bug.</p>
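<p>For reference, a generation like the one above can be reproduced with a minimal sketch using the <code>anthropic</code> Python SDK (assuming the June 2024 Claude 3.5 Sonnet model ID and an API key set in the environment):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=2048,
    temperature=0.0,  # deterministic-ish output, as used for the script above
    messages=[
        {
            "role": "user",
            "content": (
                "You are a professional comedian. Write an award-winning script "
                "for an episode of Seinfeld about a new restaurant. Include "
                "audience laugh tracks when appropriate."
            ),
        }
    ],
)

print(message.content[0].text)
</code></pre></div>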
<p>To get the LLM output into a format suitable for a Twitch stream, a programmatic script can then parse the output: extracting and mapping the characters and their lines, applause directions, and, of course, replacing all mentions of Jerry with Larry and Seinfeld with Feinberg. This workflow was surprisingly difficult at the time since GPT-3 did not have many techniques to control the format of the output, which is why I suspect there were awkward pauses and other glitches. Each line can then be passed to Azure&rsquo;s text-to-speech API to generate a distinct audio file, which can be played back in order in Unity.</p>
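<p>A rough sketch of what such a parsing step could look like (not Mismatch Media&rsquo;s actual code): split the script into lines, map character/dialogue pairs, flag laugh-track directions, and apply the Jerry-to-Larry rename.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">import re

RENAMES = {"Jerry": "Larry", "Seinfeld": "Feinberg"}

def parse_script(script):
    """Turn a generated script into an ordered list of (speaker, line) cues."""
    cues = []
    for raw_line in script.splitlines():
        line = raw_line.strip()
        if not line:
            continue
        if line.startswith("[") and "laughter" in line.lower():
            cues.append(("SFX", "laugh_track"))   # cue the precanned laugh track
            continue
        match = re.match(r"^(\w+)(?:\s*\([^)]*\))?:\s*(.+)$", line)
        if match:
            speaker, dialogue = match.groups()
            for old, new in RENAMES.items():
                speaker = speaker.replace(old, new)
                dialogue = dialogue.replace(old, new)
            cues.append((speaker, dialogue))
    return cues

# Each (speaker, dialogue) cue can then be sent to the TTS API and played back in order.
</code></pre></div>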
<p>In an <a href="https://www.polygon.com/23582937/ai-seinfeld-twitch-stream">interview with Polygon</a>, Skyler Hartle of Mismatch Media noted the presence of a &ldquo;director&rdquo; which likely handles the camera, scene transitions, and the microwave:</p>
<blockquote>
<p>“In addition to the third party services we’ve used, we have a lot of proprietary generative algorithms that cause the show to be ‘formed’, so to be speak. We collectively call this logic the ‘director,’ as it is largely responsible for making sure all the individual pieces come together into a whole,” Hartle said via email. “It’s worth mentioning that we don’t generate the artwork or the laugh track — those are precanned assets, but we have ideas on how to do that in the future.”</p>
</blockquote>
<p>The AI aspect of AI Seinfeld was counterintuitively the easiest part of the pipeline, which explains how quickly variants popped up. However, with the inability to tweak the LLM output much with the technology at the time, the stream may have hit a creative limit.</p>
<h2 id="the-fall-of-ai-seinfeld">The Fall of AI Seinfeld</h2>
<p>Vice also <a href="https://www.vice.com/en/article/qjkyxp/whats-the-deal-with-nothing-forever-a-21st-century-seinfeld-that-is-ai-generated">interviewed</a> Hartle, who had an optimistic view of the future of AI Seinfeld:</p>
<blockquote>
<p>“Our grounding principle was, can we create a show that can generate entertaining content forever? Because that&rsquo;s truly where we see the future emerging towards. Our goal with the next iterations or next shows that we release is to actually trade a show that is like Netflix-level quality.”</p>
</blockquote>
<p>That&rsquo;s tempting fate a bit too much.</p>
<p>The reason AI Seinfeld fell out of favor is a case of unintentionally poor LLM testing. When the <code>text-davinci-003</code> model API endpoint had an outage, AI Seinfeld switched to a weaker GPT-3 model, <code>text-curie</code>, to keep the stream up. But unlike the davinci variant, curie was <em>not</em> RLHFed to follow instructions or safety guidelines.</p>
<p>During this brief period of low safety, one of Larry&rsquo;s AI-generated monologues <a href="https://www.vice.com/en/article/ai-generated-seinfeld-show-nothing-forever-banned-on-twitch-after-transphobic-standup-bit/">made a transphobic joke</a>: a type of joke that was unfortunately common during the 90&rsquo;s and has no place in modern society. Twitch banned the Watch Forever channel for 14 days as a result, completely killing the channel&rsquo;s growth momentum.</p>
<p>But when the ban concluded and AI Seinfeld came back, the show was changed significantly with a &ldquo;Season 2&rdquo;. Although AI Seinfeld was still about a group of friends hanging around talking about the latest gossip, all the characters were different and had new models, the sets were different, and instead of a comedy monologue, <del>Larry</del> Leo narrates writing a blog.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/7N2Wgqn45FI?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>Why Mismatch Media made such a format shift is unclear: <a href="https://en.wikipedia.org/wiki/Occam%27s_razor">Occam&rsquo;s razor</a> would suggest that a copyright holder for Seinfeld sent a cease and desist to Mismatch Media given the bad publicity behind the original ban, despite the clearly fair-use parody nature of the stream. To be fair, it may not have been worth the time and effort for Mismatch Media to fight a legal battle over a fun art project.</p>
<p>The rebooted WatchMeForever stream is <a href="https://www.twitch.tv/watchmeforever">still active</a> as of today, but with effectively no viewers.</p>
<p>The immediate failure of the AI Seinfeld retool does lend credibility to the theory that the stream only became popular <em>because</em> it was about Seinfeld and that it was a novelty doomed to a short shelf life. Still, there were detractors that said <a href="https://www.businessinsider.com/ai-generated-seinfeld-parody-twitch-nothing-forever-streaming-transphobia-banned-2023-2">AI Seinfeld was never funny and everyone is weird for liking it</a>. That&rsquo;s ok: the original Seinfeld received similar complaints back in the day. <sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup> But it&rsquo;s hard to argue that there wasn&rsquo;t interest in a 24/7 livestream of surreal AI-generated content.</p>
<h2 id="what-would-ai-seinfeld-look-like-in-2024">What Would AI Seinfeld Look Like in 2024?</h2>
<p>Now that we know how AI Seinfeld worked and what didn&rsquo;t work, how would a year&rsquo;s worth of exponential progress in generative AI look for AI Seinfeld? Could AI Seinfeld be improved and come back? The answer is <em>maybe</em>.</p>
<p>Modern generative AI requires a lot of cherry-picking of the best results, and it&rsquo;s surprisingly hard to do: both images and text can take multiple generations and still require significant human-guided edits. But with a Twitch livestream, there can&rsquo;t be any cherry-picking at all, which means that the entire generation pipeline has to be consistent, and its failures at least interesting in the worst case.</p>
<p>The only reason AI Seinfeld worked at all is because GPT-3 was trained on the entire internet, likely including Seinfeld scripts and forum discussions. The prompt would need to have contained <code>Write a Seinfeld script</code>, since if you asked it <code>Write a sitcom script</code>, it would output something completely generic instead, and there isn&rsquo;t much room to customize the prompt to make it more interesting. The GPT-3 variant that AI Seinfeld used had a 4k token context window limit (combining both the input prompt and the output script text), but modern LLMs eclipse that: currently, Claude 3.5 Sonnet has a <a href="https://docs.anthropic.com/en/docs/about-claude/models">200k input/8k output</a> context, while GPT-4o has a <a href="https://platform.openai.com/docs/models/gpt-4o">128k input/16k output</a> context! With that much freedom, you can define many more constraints in the prompt and guide the LLM into exactly the type of sitcom you want.</p>
<p>One simple example that doesn&rsquo;t require any knowledge of machine learning is a parametric prompt, where one aspect of a prompt can be replaced with a user-defined choice or programmatically chosen at random. Unlimited Steam, for example, used a parametric prompt to change the cosmic phenomenon localized entirely within Principal Skinner&rsquo;s kitchen from the Aurora Borealis to something as silly as <a href="https://en.wikipedia.org/wiki/Rapture">The Rapture</a> or <a href="https://foofighters.com">Foo Fighters</a>, forcing the characters to play along improv-style. <sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup></p>
<p>Recall the example Seinfeld prompt to Claude 3.5 Sonnet I made earlier: instead of prompting for a Seinfeld script about a <code>new restaurant</code>, we can have the LLM generate a script about <code>generative AI in 2024</code> by making only that substitution in the prompt — a topic on which there has definitely never been a Seinfeld episode:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">George bursts in, looking frantic.
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">George: You guys! I think I&#39;m dating an AI!
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">[Audience laughter]
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Jerry: What makes you say that?
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">George: Well, she&#39;s perfect in every way. She laughs at all my jokes, she has an encyclopedic knowledge of obscure trivia, and she never gets mad when I say something stupid.
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">[Audience laughter]
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Jerry: George, that doesn&#39;t mean she&#39;s an AI. Maybe she just really likes you.
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">George: No, no. It&#39;s definitely an AI. No human woman would ever put up with me for this long.
</span></span></code></pre></div><p>Using modern LLMs, is there now a way to design a prompt that makes use of the long context windows? A prompt that can both leverage unique human writing and fix many of the issues that affected AI Seinfeld? Here&rsquo;s an attempt at a much more sophisticated prompt, where all values in <code>{}</code> brackets are parameters that can be filled in:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">You are a professional comedian. Write an award-winning script for a a scene for Act I of a three act hit sitcom episode. Include audience laugh tracks when appropriate.
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Your script MUST incorporate ALL the following elements:
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Background:
</span></span><span class="line"><span class="cl">- {background}
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Setting:
</span></span><span class="line"><span class="cl">- {setting}
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Characters:
</span></span><span class="line"><span class="cl">- {character_1}
</span></span><span class="line"><span class="cl">- {character_2}
</span></span><span class="line"><span class="cl">- {character_3}
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">Plots:
</span></span><span class="line"><span class="cl">- {a_plot}
</span></span><span class="line"><span class="cl">- {b_plot_1}
</span></span><span class="line"><span class="cl">- {b_plot_2}
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">The script MUST also follow the high-level comedic style of the following scripts:
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">- {script_1}
</span></span><span class="line"><span class="cl">- {script_2}
</span></span><span class="line"><span class="cl">- {script_3}
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">After the scene has concluded, output a summary of the scene.
</span></span></code></pre></div><p>Thanks to long context windows, the parametric changes don&rsquo;t have to be small, such as just a character name or a two-word setting. You, a human, can write <em>anything</em> to make each character distinct and robust, including name, gender, age, personality, likes, dislikes, etc. Plots can be derived from human-written scenarios beforehand: if you wrote 100 A-plots and 100 B-plots and randomly selected 1 A-plot and 2 B-plots, you&rsquo;d have about <em>1 million</em> possible plot permutations, ensuring you have something unique before the AI tries to reconcile them. You can feed in examples of human-written scripts to set the style and vibe of the generation in what is known as <a href="https://www.promptingguide.ai/techniques/fewshot">few-shot prompting</a>. You can maintain continuity over many scenes by having the LLM summarize its own output, and then feed those summaries back to the AI as background information to build upon them. The LLM can also be instructed to <a href="https://minimaxir.com/2023/12/chatgpt-structured-data/">output structured data</a> to avoid the need to loosely parse the script after it&rsquo;s completed, and as a bonus the model could be instructed to output additional metadata such as <a href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/speech-synthesis-markup-voice#use-speaking-styles-and-roles">SSML speech styles</a> based on a given line to add personality to the generated speech.</p>
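<p>As a rough illustration of how this pipeline could be wired together, here&rsquo;s a minimal sketch: the tiny placeholder lists, the shortened template, and the use of the Anthropic Python SDK are illustrative assumptions on my part, not a production implementation.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">import random

import anthropic  # assumes the official Anthropic Python SDK is installed

# Shortened version of the parametric prompt above; a real template would also
# include the setting and separate A-plot/B-plot slots.
TEMPLATE = """You are a professional comedian. Write an award-winning script for a scene for Act I of a three act hit sitcom episode. Include audience laugh tracks when appropriate.

Your script MUST incorporate ALL the following elements:

Background:
- {background}

Characters:
{characters}

Plots:
{plots}

The script MUST also follow the high-level comedic style of the following scripts:

{scripts}

After the scene has concluded, output a summary of the scene."""

# Human-written inputs (tiny placeholder lists here; a real run would draw
# from the 100 A-plots and 100 B-plots described above).
background = "A struggling improv theater in Pittsburgh."
characters = [
    "Dana: 34, overconfident improv teacher, deathly afraid of mimes.",
    "Raj: 28, accountant by day, hams it up the moment anyone watches.",
    "Mo: 61, the theater landlord, always renegotiating the rent.",
]
a_plots = ["The theater is accidentally double-booked with a mime convention."]
b_plots = [
    "Raj loses his lucky calculator.",
    "Mo installs a coin-operated thermostat.",
    "Dana starts a feud with a rival theater.",
]
example_scripts = ["(human-written example script 1)", "(human-written example script 2)"]

# 1 random A-plot + 2 random B-plots: with 100 of each, roughly a million
# possible plot permutations before the LLM even starts writing.
plots = [random.choice(a_plots)] + random.sample(b_plots, 2)

prompt = TEMPLATE.format(
    background=background,
    characters="\n".join("- " + c for c in characters),
    plots="\n".join("- " + p for p in plots),
    scripts="\n\n".join(example_scripts),  # few-shot style examples
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=4096,
    messages=[{"role": "user", "content": prompt}],
)
scene = response.content[0].text
print(scene)
# The scene summary at the end of the output can be fed back in as the
# background for the next scene to maintain continuity.
</code></pre></div>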
<p>Unfortunately, creating this pipeline, writing original characters and plots for it, and sufficiently testing it to ensure the generated results are stable, would take weeks if not months to complete; otherwise I would provide a more concrete demo. <sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup> This pipeline approach to AI script writing would only be effective for unsupervised 24/7 generation and wouldn&rsquo;t replace skilled human writers, who would do a more effective job much faster.</p>
<p>But would all of these prompt optimizations actually make the final generated script <em>funny</em>? After all, some of the failings, like the awkward audience laughs and the pauses at the end of scenes, contributed to AI Seinfeld&rsquo;s humor. During a standup comedy event at AI Seinfeld&rsquo;s peak, Jerry Seinfeld himself <a href="https://www.reddit.com/r/seinfeld/comments/10tnn1k/jerry_talking_about_ai_seinfeld_last_night/">was asked</a> about the AI parody and he replied that he&rsquo;s not worried about AI:</p>
<blockquote>
<p>AI can be, definitely, they&rsquo;ll make it smarter and smarter, but to do [standup comedy] you have to make it dumber.</p>
</blockquote>
<p>Could AI Seinfeld benefit from advances in AI video? The answer this time is no. Generative video has been taking off in 2024 with projects such as OpenAI&rsquo;s <a href="https://openai.com/index/sora/">Sora</a> and Runway AI&rsquo;s <a href="https://runwayml.com/product">Gen-3 Alpha</a>, but those demos and the examples that go viral on social media are very heavily cherry-picked, and even then there are consistency errors such as objects popping in and out of existence. Generating video also requires vastly more compute than just running Unity, and even with another few years of GPU hardware improvements it would be infeasible to cost-effectively create a 24/7 stream from those models.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/mnpGyVL1-0E?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>The greatest problem with generative AI video is that it is coherent overall but has emblematic errors that don&rsquo;t require a keen eye to notice, and as a result it falls squarely into the uncanny valley, with its mistakes not being interesting, but disorienting. Mistakes in motion are easier to notice at a glance than images where a person&rsquo;s hands may have the wrong number of fingers. The only way for AI video to get out of the valley would be to improve the model to near-flawless quality, which won&rsquo;t happen any time soon. Still, Sora is on the more realistic side of the curve rather than the less realistic side.</p>
<figure>

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/uncanny_valley_2_hu_c3c8932aea493423.webp 320w,/2024/08/ai-seinfeld/uncanny_valley_2_hu_85ea0e247ba12df1.webp 768w,/2024/08/ai-seinfeld/uncanny_valley_2_hu_7690c09cf64f5daa.webp 1024w,/2024/08/ai-seinfeld/uncanny_valley_2.webp 1200w" src="uncanny_valley_2.webp"/> 
</figure>

<p>What about the AI-generated voices that would power these characters? At the time AI Seinfeld aired, many complained that Larry&rsquo;s voice &ldquo;didn&rsquo;t sound enough like Jerry Seinfeld.&rdquo; After AI Seinfeld concluded, a new technology called <a href="https://elevenlabs.io/blog/what-is-voice-cloning">voice cloning</a> popularized by <a href="https://elevenlabs.io">ElevenLabs</a> went mainstream&hellip;and it&rsquo;s unexpectedly the AI modality that&rsquo;s causing the most actual harm both within creative projects and outside of them. If you haven&rsquo;t heard as much about AI-generated voices, there&rsquo;s a good reason for that: voice synthesis projects such as Microsoft&rsquo;s <a href="https://www.microsoft.com/en-us/research/project/vall-e-x/vall-e-2/">VALL-E 2</a> and Meta&rsquo;s <a href="https://ai.meta.com/blog/voicebox-generative-ai-model-speech/">Voicebox</a> both have disclaimers saying they won&rsquo;t be released due to the dangers the technology poses, although Microsoft&rsquo;s Azure does offer a &ldquo;<a href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/custom-neural-voice">custom neural voice</a>&rdquo; service. Voice cloning has been used to <a href="https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice">initiate scams</a> by impersonating spouses in an emergency. Professional voice actors have had their voices cloned and used without compensation due to contracts not specifically forbidding the practice, which is one of the reasons SAG-AFTRA <a href="https://www.theverge.com/2024/8/5/24213808/video-game-voice-actor-strike-sag-aftra">just went on strike</a> against the video game industry in order to get protections against voice cloning and synthetic performers.</p>
<p>Moreover, in the context of creating a next-gen AI Seinfeld, there&rsquo;s nothing inherently interesting about voice cloning since it&rsquo;s a copy by definition: the model <em>can&rsquo;t</em> generate unexpectedly amusing content other than the inherent gimmick of famous-voice-saying-something, such as the AI George Carlin standup special <a href="https://www.vice.com/en/article/the-george-carlin-ai-standup-is-worse-than-you-can-imagine/">which was not special</a>. There currently isn&rsquo;t any way to prompt engineer a voice generation AI in enough detail to create a voice <code>in the style of a masculine New York comedian, 2x speed, primetime television quality</code>, which could open up more creative opportunities.</p>
<p>Although we can make drastic improvements with the textual script, that&rsquo;s the extent of how new AI approaches can be leveraged to make something interesting. But if you remember the early days of generative AI history, the best AI-generated projects were the simplest.</p>
<h2 id="ai-weirdness">AI Weirdness</h2>
<p>Generative &ldquo;AI&rdquo; has been around for a very long time (I had fun with <a href="https://en.wikipedia.org/wiki/Markov_chain">Markov chains</a> <a href="https://minimaxir.com/2013/11/innovation-rng/">a decade ago</a>!), but the practice was mostly confined to tech-focused communities like <a href="https://news.ycombinator.com">Hacker News</a>. Modern generative AI didn&rsquo;t break into mainstream culture until 2018, ironically in a way that didn&rsquo;t involve actual generative AI. In June of that year, comedian Keaton Patti posted a <a href="https://x.com/KeatonPatti/status/1006961202998726665">megaviral tweet</a> about how he &ldquo;forced a bot to watch over 1,000 hours of Olive Garden commercials and then asked it to write an Olive Garden commercial of its own.&rdquo;</p>
<figure>

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/patti_hu_67c737b47f76017.webp 320w,/2024/08/ai-seinfeld/patti_hu_615be4497d8ad163.webp 768w,/2024/08/ai-seinfeld/patti_hu_421617479726cf8c.webp 1024w,/2024/08/ai-seinfeld/patti.webp 1554w" src="patti.webp"
         alt="An excerpt of the viral Olive Garden script."/> <figcaption>
            <p>An excerpt of the viral Olive Garden script.</p>
        </figcaption>
</figure>

<p>Yes, the script was human-written: with the technology at the time, no one could train an AI to behave like that from only video input data, and the script was <em>too surreal</em> even for the now-primitive generative AI. He did get popular enough to land <a href="https://www.amazon.com/Forced-Bot-Write-This-Book/dp/152485834X">a book deal</a> and a <a href="https://www.youtube.com/playlist?list=PLXSrjGY5Tz_gPdaU_L__S3hXua7zRQtUl">Netflix collaboration</a> leveraging this fake-AI gimmick.</p>
<p>Patti&rsquo;s comedic misrepresentation of AI did lead to genuine confusion about what a 2018-era generative AI could actually do. Janelle Shane, who maintains the <a href="https://www.aiweirdness.com">AI Weirdness blog</a> about weird things AI can generate, posted an <a href="https://x.com/JanelleCShane/status/1007061610005794817">epic takedown</a> of Patti&rsquo;s script, which went equally viral and also led to the internet discovering her excellent <a href="https://www.aiweirdness.com/candy-heart-messages-written-by-a-18-02-09/">AI-generated Valentine&rsquo;s Day hearts</a> from the same year (and later <a href="https://www.amazon.com/You-Look-Like-Thing-Love/dp/0316525227">a book deal</a> too):</p>
<figure>

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/heart_hu_292dce043896cad3.webp 320w,/2024/08/ai-seinfeld/heart.jpg 640w" src="heart.jpg"/> 
</figure>

<p>Image-based generative AI took a lot longer to go mainstream: websites like <a href="https://thispersondoesnotexist.com">This Person Does Not Exist</a> demonstrated the power of <a href="https://en.wikipedia.org/wiki/Generative_adversarial_network">generative adversarial networks</a> like <a href="https://github.com/NVlabs/stylegan">StyleGAN</a> to create images, but that wasn&rsquo;t weird outside of <a href="https://cedar.buffalo.edu/~srihari/CSE676/22.3-GAN%20Mode%20Collapse.pdf">mode collapses</a>. The first instance of weird images from AI was in January 2021 when OpenAI announced the <a href="https://openai.com/index/dall-e/">original DALL·E</a> and showed they could make unique armchairs in the shape of an avocado by asking the model to do so, although they never released the model itself.</p>
<figure>

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/avocado_hu_5300a7e486e7afb5.webp 320w,/2024/08/ai-seinfeld/avocado_hu_84e7cd0392309830.webp 768w,/2024/08/ai-seinfeld/avocado.webp 830w" src="avocado.webp"/> 
</figure>

<p>DALL·E didn&rsquo;t get much attention outside of the AI hypesters since no one could play with it, but months later, things changed. <a href="https://x.com/borisdayma">Boris Dayma</a> led an initiative to reproduce and open-source a variant of the DALL·E model, labeled <a href="https://github.com/borisdayma/dalle-mini">DALL·E Mini</a> (later changed to <a href="https://www.craiyon.com">Craiyon</a> after a cease and desist from OpenAI), and <a href="https://huggingface.co/spaces/dalle-mini/dalle-mini">hosted it for free on Hugging Face</a>, where it went megaviral. And thus began the &ldquo;<a href="https://www.reddit.com/r/weirddalle/top/?t=all">weird DALL·E</a>&rdquo; phase of image generation AI, where anyone could create incoherent images and make people laugh.</p>
<figure class="align-center ">

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/firehydrant_hu_4bd881a786b7493e.webp 320w,/2024/08/ai-seinfeld/firehydrant.webp 764w" src="firehydrant.webp#center"
         alt="Even back in 2021, image prompt engineering was a thing. via /u/royal_rigolo on Reddit / weirddalle subreddit" width="400"/> <figcaption>
            <p>Even back in 2021, image prompt engineering was a thing. <a href="https://www.reddit.com/r/weirddalle/comments/vjwcl5/fire_hydrant_takes_selfies_on_top_of_the_himalaya/">via /u/royal_rigolo on Reddit / weirddalle subreddit</a></p>
        </figcaption>
</figure>

<p>All of these examples of interesting failures are representative of a bygone AI era of experimentation. Once everyone had free access to more powerful text-generating AI with ChatGPT, and more powerful image-generating AI with <a href="https://www.midjourney.com/home">Midjourney</a>, AI stopped being fun and started being serious business, for better or for worse.</p>
<figure>

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/uncanny_valley_3_hu_c912a98f812d692e.webp 320w,/2024/08/ai-seinfeld/uncanny_valley_3_hu_6cd7aa3fb6bb5ee5.webp 768w,/2024/08/ai-seinfeld/uncanny_valley_3_hu_e3c7199e7c82d8bd.webp 1024w,/2024/08/ai-seinfeld/uncanny_valley_3.webp 1200w" src="uncanny_valley_3.webp"/> 
</figure>

<h2 id="ai-generated-content-in-20xx">AI-Generated Content in 20XX</h2>
<p>Last year, I wrote a thought piece titled &ldquo;<a href="https://minimaxir.com/2023/10/ai-sturgeons-law/">The Greatest Threat to Generative AI is Humans Being Bad at Using it</a>&rdquo; in response to the increasing hostility against the use of AI in creative works, arguing that while AI is a tool like anything else, it is a tool that&rsquo;s very easy to use poorly and can actually make projects worse. Additionally, the largest AI companies have both a business incentive and a duty to ensure that AI is used responsibly by their users downstream, as otherwise it will hurt the industry in the long term.</p>
<p>Now, it&rsquo;s apparent that I was correct. The large companies went full steam ahead on AI integrations even where it is highly questionable whether they add value and productivity for the end-user, often signaled with a &ldquo;magical&rdquo; <a href="https://qz.com/how-became-the-unofficial-ai-emoji-1851059332">sparkle emoji</a>. Google has integrated Gemini to assist with document and email writing, Meta has integrated Meta AI to automatically generate images and comments, and Apple will <a href="https://www.bloomberg.com/news/articles/2024-07-28/apple-intelligence-to-miss-initial-release-of-upcoming-ios-18-ipados-overhauls?embedded-checkout=true">soon</a> allow its devices to generate text and images locally using Apple Intelligence. Marketing these features is typically met with backlash: Google had to <a href="https://www.cnbc.com/2024/08/02/google-pulls-ai-ad-for-olympics-following-backlash.html">pull an Olympics commercial</a> which encouraged a parent to use AI to write a letter for their child.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/NgtHJKn0Mck?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<blockquote>
<p>“I flatly reject the future that Google is advertising,” Shelly Palmer, professor of advanced media at Syracuse University’s S.I. Newhouse School of Public Communications, wrote in a widely circulated <a href="https://shellypalmer.com/2024/07/why-googles-dear-sydney-ad-makes-me-want-to-scream/">blog post</a>. The technology presents a “monocultural future where we see fewer and fewer examples of original human thoughts,” she wrote.</p>
</blockquote>
<p>In their rush to push AI tech further mainstream and demonstrate their generative AI capabilities to shareholders, without encouraging <em>responsible</em> usage of the technology, the big tech companies have ushered AI into a new era of &ldquo;<a href="https://simonwillison.net/2024/May/8/slop/">slop</a>&rdquo;, where people post objectively bad AI content without any regard for how it will be perceived, especially on websites which rely on user-generated content.</p>
<figure>

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/pinterest_hu_613e5e7f10764361.webp 320w,/2024/08/ai-seinfeld/pinterest_hu_fb37af21ee91c34f.webp 768w,/2024/08/ai-seinfeld/pinterest.webp 901w" src="pinterest.webp"
         alt="An annotated example of the Pinterest home page from July 2024. via @henningsanden on X"/> <figcaption>
            <p>An annotated example of the Pinterest home page from July 2024. <a href="https://x.com/henningsanden/status/1808126786389037107">via @henningsanden on X</a></p>
        </figcaption>
</figure>

<p>Facebook, whose algorithm <a href="https://transparency.meta.com/data/widely-viewed-content-report/">favors</a> emotionally-appealing engagement bait posts, has seen a deluge of high-engagement slop even when the content makes no logical sense.</p>
<figure class="align-center ">

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/cabincrew_hu_bc23e6989111247c.webp 320w,/2024/08/ai-seinfeld/cabincrew_hu_c696ff0db8c80eff.webp 768w,/2024/08/ai-seinfeld/cabincrew_hu_b68182f34bfe5d01.webp 1024w,/2024/08/ai-seinfeld/cabincrew.webp 1080w" src="cabincrew.webp#center"
         alt="One of the few AI-generated images on Facebook with an actual cabin crew. via @FacebookAIslop on X." width="400"/> <figcaption>
            <p>One of the few AI-generated images on Facebook with an actual cabin crew. <a href="https://x.com/FacebookAIslop/status/1806416249259258189">via @FacebookAIslop on X</a>.</p>
        </figcaption>
</figure>

<p>This is, of course, quintessential uncanny valley: it&rsquo;s coherent at a glance, but after looking at it for even a second it&rsquo;s obvious where the issues are, and these issues aren&rsquo;t a good kind of AI weirdness. What&rsquo;s worse is that AI slop is a regression in realism, and falls onto the left side of the valley.</p>
<figure>

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/uncanny_valley_4_hu_ce80aacfa47a581e.webp 320w,/2024/08/ai-seinfeld/uncanny_valley_4_hu_ffbc52f347062d8f.webp 768w,/2024/08/ai-seinfeld/uncanny_valley_4_hu_8f8817dd988ae0a9.webp 1024w,/2024/08/ai-seinfeld/uncanny_valley_4.webp 1200w" src="uncanny_valley_4.webp"/> 
</figure>

<p>Although we as humans can identify this slop, it is currently surprisingly hard for an AI to do so, though that hasn&rsquo;t stopped people from trying to build AIs that detect AI, which in practice are riddled with false positives that hurt real creatives. For slop-creators, this is a feature, and the AI companies have little incentive to change it: if an AI company released a tool to reliably detect and punish slop, it would make their generative AI less valuable. It&rsquo;s <a href="https://www.wsj.com/tech/ai/openai-tool-chatgpt-cheating-writing-135b755a">reported</a> that one of the reasons that OpenAI won&rsquo;t release a reliable ChatGPT text detector is that it could harm their business.</p>
<p>The core reason the big tech companies allow generative AI to cause the <a href="https://en.wikipedia.org/wiki/Enshittification">enshittification</a> of the internet is misaligned incentives between the companies hosting AI slop and the users viewing it. Social media companies and their shareholders care about <a href="https://mixpanel.com/blog/north-star-metric/">North Star metrics</a> such as user retention and time-on-site, and normally those metrics can be correlated with user happiness and satisfaction with the service. But time-on-site, for example, can <em>also</em> be maximized by making the site harder and slower to use, and the deluge of AI slop accomplishes that. AI companies typically don&rsquo;t have analytics tracking negative user sentiment about their use of AI: if anything, the uncompromising backlash against AI convinces the companies that complainers are just a lost demographic not worth accommodating, so they double down on what they&rsquo;re already doing. Aggregate metrics treat human-made content and AI-generated content as equal, but <em>humans</em> do not.</p>
<p>Generative AI, even for researchers and practitioners such as myself, is a heavily nuanced topic that is very difficult to communicate succinctly, more difficult to discuss on social media which highly discourages nuance and context, and <em>even more difficult</em> as AI hypesters muddy the waters with misleading praise of generative AI that is easy to dunk on, which just gets them more engagement and revenue. &ldquo;Made by AI&rdquo; is now a term that inspires dread, far from the Keaton Patti days where made-by-AI was an indicator of joyful weirdness. Bashing AI is now a meme, and there isn&rsquo;t a single potential AI project that could challenge that perception because the well is poisoned beyond repair.</p>
<h2 id="would-a-247-ai-generated-twitch-stream-even-work-anymore">Would a 24/7 AI-Generated Twitch Stream Even Work Anymore?</h2>
<p>How does the modern AI backlash tie back into AI Seinfeld? Twitch&rsquo;s core demographic is the same demographic as those most against the use of generative AI. Part of the reason AI Seinfeld became so successful on Twitch is because of the community it cultivated: it wouldn&rsquo;t have gone viral if people weren&rsquo;t spamming microwave <code>MMM</code>s and answering what did the fish say when it hit the wall. Even though Twitch viewers are mostly lurkers and not chatters, a channel with a good community builds word-of-mouth even outside of Twitch, which is how Twitch channels go viral.</p>
<p>I decided to determine what it would take to produce a &ldquo;fixed&rdquo; AI Seinfeld in 2024, given both the advances in AI and the ethics involved. Now, it&rsquo;s definitely not anything a scrappy group of hackers could do anymore. Sure, you could once again ask an LLM to generate a sitcom script and get a bunch of assets from the Unity Asset Store, but <em>that&rsquo;s already been done before</em>. In order to overcome the reflexive assumption that new AI generated content is slop, the stream would have to be something completely novel and unexpected: you can&rsquo;t, for example, just do an AI <a href="https://en.wikipedia.org/wiki/Curb_Your_Enthusiasm">Curb Your Enthusiasm</a>.</p>
<p>The script could be made unique following my demo of detailed parametric prompts, but it would require production-studio-class tracking and documentation of how the prompts and their parameters are used to codify said uniqueness. The stream video would still need to be rendered in Unity or another engine, but in order to be unique it would require commissioning human-made visuals and sound effects: given the animosity against those who work with AI, most artists would not accept those commissions even if they were paid at a significant premium. <sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup> The voices would still have to come from an existing text-to-speech voice provider: voice cloning is right out, even with explicit consent and compensation for the voice actors.</p>
<p>And even if all the assets were fully sourced ethically with transparent documentation for the entire pipeline, the stream&rsquo;s Twitch chat would likely be derailed by <code>AI 👏 ART 👏 IS 👏 THEFT</code> spam, preventing the establishment of any community, and strict moderation to curb the spam risks causing a <a href="https://en.wikipedia.org/wiki/Streisand_effect">Streisand effect</a>.</p>
<p>The only entities that could feasibly create a 24/7 AI-generated livestream with fully ethically-sourced content would be, ironically, the big AI companies such as OpenAI which can afford to pay licenses for said data. Even <a href="https://www.disney.com">Disney</a>, which owns more than enough IP to train generative models of all modalities, would never do an AI Seinfeld-esque livestream for <a href="https://en.wikipedia.org/wiki/Brand_safety">brand safety</a> reasons alone: the nonzero possibility of a Disney character unexpectedly saying something problematic during the stream would make the entire project a complete nonstarter.</p>
<h2 id="whats-the-deal-with-the-uncanny-valley">What&rsquo;s the deal with the uncanny valley?</h2>
<p>One of the common criticisms about generative AI pointed out by creatives is &ldquo;if AI is trained on all human works, then how can it create anything new&rdquo;? AI Seinfeld is the perfect counterargument: even though it&rsquo;s powered by an LLM, the <em>humans</em> behind it are what made it go viral. Even before ChatGPT, generative AI has always excelled as a tool. The microwave gag and the 144p visual filter were not AI-generated or an attempt to emulate aspects of the Seinfeld sitcom: they were distinct creative decisions that made the entire project more interesting, and they aren&rsquo;t something that you could prompt an AI to suggest adding. AI Seinfeld in hindsight was an ethical form of AI-generated media: it did not replace Seinfeld the TV show, no one would stop watching streams of Seinfeld in favor of the AI-generated alternative, and copyright holders and Jerry Seinfeld did not lose revenue due to AI Seinfeld&rsquo;s existence: if anything, the nostalgic buzz increased streams of the original show.</p>
<p>With the current trajectory of AI slop and the perverse incentives by large tech companies to not address it, I am pessimistic that AI content will ever be at a state where it will cross that final hump of the uncanny valley curve into full acceptance, and even more pessimistic about the backlash against generative AI ever subsiding. With generative model training now at the point where it requires exponentially more compute and data for increasingly marginal returns, it will take years, if it happens at all, for generative AI output to reach the far right of the uncanny valley chart, and unless the large tech companies actually create an <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a>, they are unlikely to obtain higher acceptability than AI Seinfeld ever did.</p>
<p>I wrote most of this blog post weeks ago but held off publishing it because new AI news kept happening. Most notably, the <a href="https://blackforestlabs.ai/our-team/">creators of Stable Diffusion</a> just released the <a href="https://blackforestlabs.ai">FLUX.1 series</a> of generative image AI models, which offers substantially improved coherence both to the provided prompt and within the image itself. Some of the variants are <a href="https://huggingface.co/black-forest-labs/FLUX.1-dev">open-source</a>, allowing the community to finetune them. The <a href="https://huggingface.co/XLabs-AI/flux-RealismLora">XLabs-AI/flux-RealismLora</a> finetune in particular focuses on realism, as its name implies, and <a href="https://www.reddit.com/r/StableDiffusion/comments/1emrprx/feel_the_difference_between_using_flux_with">one demo</a> from that finetune <a href="https://x.com/rpnickson/status/1821634114274873850">went megaviral</a>.</p>
<figure class="align-center ">

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/flux_hu_f2586697cc180453.webp 320w,/2024/08/ai-seinfeld/flux.webp 664w" src="flux.webp#center"
         alt="One of the viral realism demo images: it does not have a dreamy look as other AI images but contextually expected stage lighting, the background and lanyard text is legible despite the depth-of-field blur, and body proportions are mostly correct except the long fingers. via /u/Glittering-Football9 on Reddit / StableDiffusion subreddit." width="400"/> <figcaption>
            <p>One of the viral realism demo images: it does not have a dreamy look as other AI images but contextually expected stage lighting, the background and lanyard text is legible despite the depth-of-field blur, and body proportions are mostly correct except the long fingers. <a href="https://www.reddit.com/r/StableDiffusion/comments/1emrprx/comment/lh30hvv/">via /u/Glittering-Football9 on Reddit / StableDiffusion subreddit</a>.</p>
        </figcaption>
</figure>

<p>That example, in my opinion, is more realistic than Sora&rsquo;s output, but given the mixed reactions to the image, it&rsquo;s right at the acceptability = 0 threshold.</p>
<figure>

    <img loading="lazy" srcset="/2024/08/ai-seinfeld/uncanny_valley_5_hu_c33303ff9d736da6.webp 320w,/2024/08/ai-seinfeld/uncanny_valley_5_hu_d0b5c2c50072b2b0.webp 768w,/2024/08/ai-seinfeld/uncanny_valley_5_hu_7eb161e4aba72dd1.webp 1024w,/2024/08/ai-seinfeld/uncanny_valley_5.webp 1200w" src="uncanny_valley_5.webp"/> 
</figure>

<p>The generative AI bell cannot be unrung. As you can tell from this post, I personally try to walk the thin line between cool applications of generative AI (at the risk of getting harassed) and the problems generative AI can cause (also at the risk of getting harassed) because it&rsquo;s important to shine a light on what&rsquo;s actually possible with AI when the misinformation around generative AI is only increasing. It&rsquo;s overall a big bummer how we went from weird Valentine&rsquo;s Day hearts, to a quirky livestream of a group of AI-generated friends, to what AI is now.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>All of the examples in this post use LLM APIs as they provide the customization necessary to get effective results: the results for asking the same prompts to free chat frontends such as chatgpt.com will be substantially different.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:2">
<p>When I was younger, I actually didn&rsquo;t like Seinfeld and instead preferred to watch <a href="https://en.wikipedia.org/wiki/Everybody_Loves_Raymond">Everybody Loves Raymond</a>.&#160;<a href="#fnref:2" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:3">
<p>Incidentally, parametric prompts are why Unlimited Steam got <a href="https://www.reddit.com/r/unlimitedsteam/comments/12wto93/thank_you_for_enjoying_the_steam/">permanently banned</a> from Twitch: in what would now be known as a <a href="https://www.ibm.com/topics/prompt-injection">prompt injection</a>, one of the GitHub-hosted lists from which the channel sourced thousands of food choices for the prompt contained a few highly offensive selections.&#160;<a href="#fnref:3" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:4">
<p>Prompt engineering instability grows exponentially as the prompt size increases since each part of the prompt has to relate to every other part. Claude 3.5 Sonnet is the first LLM I&rsquo;ve tested that can handle super-long bespoke prompts and can actually account for all aspects of the prompt.&#160;<a href="#fnref:4" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
<li id="fn:5">
<p>To be fully ethical, an AI practitioner would have to proactively offer additional contractual guarantees to creatives they are commissioning, including highly-scoped usage of the assets they provide and a clause to not train generative AI on said assets to avoid future business.&#160;<a href="#fnref:5" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></content:encoded>
    </item>
    <item>
      <title>Benchmarking TensorFlow on Cloud CPUs: Cheaper Deep Learning than Cloud GPUs</title>
      <link>https://minimaxir.com/2017/07/cpu-or-gpu/</link>
      <pubDate>Wed, 05 Jul 2017 09:00:00 -0700</pubDate>
      <guid>https://minimaxir.com/2017/07/cpu-or-gpu/</guid>
      <description>Using CPUs instead of GPUs for deep learning training in the cloud is cheaper because of the massive cost differential afforded by preemptible instances.</description>
      <content:encoded><![CDATA[<p>I&rsquo;ve been working on a few personal deep learning projects with <a href="https://github.com/fchollet/keras">Keras</a> and <a href="https://www.tensorflow.org">TensorFlow</a>. However, training models for deep learning with cloud services such as <a href="https://aws.amazon.com/ec2/">Amazon EC2</a> and <a href="https://cloud.google.com/compute/">Google Compute Engine</a> isn&rsquo;t free, and as someone who is currently unemployed, I have to keep an eye on extraneous spending and be as cost-efficient as possible (please support my work on <a href="https://www.patreon.com/minimaxir">Patreon</a>!). I tried deep learning on the cheaper CPU instances instead of GPU instances to save money, and to my surprise, my model training was only slightly slower. As a result, I took a deeper look at the pricing mechanisms of these two types of instances to see if CPUs are more useful for my needs.</p>
<p>The <a href="https://cloud.google.com/compute/pricing#gpus">pricing of GPU instances</a> on Google Compute Engine starts at <strong>$0.745/hr</strong> (by attaching a $0.700/hr GPU die to a $0.045/hr n1-standard-1 instance). A couple months ago, Google <a href="https://cloudplatform.googleblog.com/2017/05/Compute-Engine-machine-types-with-up-to-64-vCPUs-now-ready-for-your-production-workloads.html">announced</a> CPU instances with up to 64 vCPUs on the modern Intel <a href="https://en.wikipedia.org/wiki/Skylake_%28microarchitecture%29">Skylake</a> CPU architecture. More importantly, they can also be used in <a href="https://cloud.google.com/compute/docs/instances/preemptible">preemptible CPU instances</a>, which live at most for 24 hours on GCE and can be terminated at any time (very rarely), but cost about <em>20%</em> of the price of a standard instance. A preemptible n1-highcpu-64 instance with 64 vCPUs and 57.6GB RAM plus the premium for using Skylake CPUs is <strong>$0.509/hr</strong>, about 2/3rds of the cost of the GPU instance.</p>
<p>If the model training speed of 64 vCPUs is comparable to that of a GPU (or even slightly slower), it would be more cost-effective to use the CPUs instead. But that&rsquo;s assuming the deep learning software and the GCE platform hardware operate at 100% efficiency; if they don&rsquo;t (and they likely don&rsquo;t), there may be <em>even more savings</em> by scaling down the number of vCPUs and cost accordingly (a 32 vCPU instance with same parameters is half the price at <strong>$0.254/hr</strong>, 16 vCPU at <strong>$0.127/hr</strong>, etc).</p>
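<p>As a quick back-of-the-envelope sketch (my own arithmetic using the preemptible prices quoted above, not part of the benchmark scripts), the breakeven point is simply the ratio of the hourly prices: a CPU instance comes out ahead on cost as long as it trains within that factor of the GPU&rsquo;s time.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Breakeven slowdown factors from the preemptible GCE prices quoted above
# (USD per hour). A CPU instance wins on cost as long as its training run is
# less than (gpu_price / cpu_price) times slower than the GPU run.
GPU_PRICE = 0.745
CPU_PRICES = {64: 0.509, 32: 0.254, 16: 0.127}

for vcpus, price in CPU_PRICES.items():
    print("{} vCPUs: cheaper than the GPU if under {:.2f}x slower".format(
        vcpus, GPU_PRICE / price))
# 64 vCPUs: cheaper than the GPU if under 1.46x slower
# 32 vCPUs: cheaper than the GPU if under 2.93x slower
# 16 vCPUs: cheaper than the GPU if under 5.87x slower
</code></pre></div>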
<p>There aren&rsquo;t any benchmarks for deep learning libraries with tons and tons of CPUs since there&rsquo;s no demand, as GPUs are the <a href="https://en.wikipedia.org/wiki/Occam%27s_razor">Occam&rsquo;s razor</a> solution to deep learning hardware. But what might make counterintuitive but economical sense is to use CPUs instead of GPUs for deep learning training because of the massive cost differential afforded by preemptible instances, thanks to Google&rsquo;s <a href="https://en.wikipedia.org/wiki/Economies_of_scale">economies of scale</a>.</p>
<h2 id="setup">Setup</h2>
<p>I already have <a href="https://github.com/minimaxir/deep-learning-cpu-gpu-benchmark">benchmarking scripts</a> of real-world deep learning use cases, <a href="https://github.com/minimaxir/keras-cntk-docker">Docker container environments</a>, and results logging from my <a href="http://minimaxir.com/2017/06/keras-cntk/">TensorFlow vs. CNTK article</a>. A few minor tweaks allow the scripts to be utilized for both CPU and GPU instances by setting CLI arguments. I also rebuilt <a href="https://github.com/minimaxir/keras-cntk-docker/blob/master/Dockerfile">the Docker container</a> to support the latest version of TensorFlow (1.2.1), and created a <a href="https://github.com/minimaxir/keras-cntk-docker/blob/master/Dockerfile-cpu">CPU version</a> of the container which installs the CPU-appropriate TensorFlow library instead.</p>
<p>There is a notable CPU-specific TensorFlow behavior; if you install from <code>pip</code> (as the <a href="https://www.tensorflow.org/install/">official instructions</a> and tutorials recommend) and begin training a model in TensorFlow, you&rsquo;ll see these warnings in the console:</p>
<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/tensorflow-console_hu_e436e066e4e1304d.webp 320w,/2017/07/cpu-or-gpu/tensorflow-console_hu_ce5df372394290b4.webp 768w,/2017/07/cpu-or-gpu/tensorflow-console_hu_9e354816d97d6c8f.webp 1024w,/2017/07/cpu-or-gpu/tensorflow-console.png 1130w" src="tensorflow-console.png"/> 
</figure>

<p>In order to fix the warnings and benefit from these <a href="https://en.wikipedia.org/wiki/SSE4#SSE4.2">SSE4.2</a>/<a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</a>/<a href="https://en.wikipedia.org/wiki/FMA_instruction_set">FMA</a> optimizations, we <a href="https://stackoverflow.com/questions/41293077/how-to-compile-tensorflow-with-sse4-2-and-avx-instructions">compile TensorFlow from source</a>, and I created a <a href="https://github.com/minimaxir/keras-cntk-docker/blob/master/Dockerfile-cpu-compiled">third Docker container</a> to do just that. When training models in the new container, <a href="https://github.com/tensorflow/tensorflow/issues/10689">most</a> of the warnings no longer show, and (spoiler alert) there is indeed a speed boost in training time.</p>
<p>Therefore, we can test three major cases with Google Compute Engine:</p>
<ul>
<li>A Tesla K80 GPU instance.</li>
<li>A 64 Skylake vCPU instance where TensorFlow is installed via <code>pip</code> (along with tests at 8/16/32 vCPUs).</li>
<li>A 64 Skylake vCPU instance where TensorFlow is compiled (<code>cmp</code>) with CPU instructions (+ 8/16/32 vCPUs).</li>
</ul>
<h2 id="results">Results</h2>
<p>For each model architecture and software/hardware configuration, I calculate the <strong>total training time relative to the GPU instance training</strong> for running the model training for the provided test script. In all cases, the GPU <em>should</em> be the fastest training configuration, and systems with more processors should train faster than those with fewer processors.</p>
<p>Let&rsquo;s start with the <a href="http://yann.lecun.com/exdb/mnist/">MNIST dataset</a> of handwritten digits and the common multilayer perceptron (MLP) architecture, with dense fully-connected layers. Lower training time is better. All configurations below the horizontal dotted line are better than GPUs; all configurations above the dotted line are worse than GPUs.</p>
<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-5_hu_8cf5154f974aed3c.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-5_hu_2ec21aba02d8fb37.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-5_hu_7682d0a58ea1e871.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-5.png 1200w" src="dl-cpu-gpu-5.png"/> 
</figure>

<p>Here, the GPU is the fastest out of all the platform configurations, but there are other curious trends: the performance between 32 vCPUs and 64 vCPUs is similar, and the compiled TensorFlow library is indeed a significant improvement in training speed <em>but only for 8 and 16 vCPUs</em>. Perhaps there are overheads negotiating information between vCPUs that eliminate the performance advantages of more vCPUs, and perhaps these overheads are <em>different</em> with the CPU instructions of the compiled TensorFlow. In the end, it&rsquo;s a <a href="https://en.wikipedia.org/wiki/Black_box">black box</a>, which is why I prefer black box benchmarking all configurations of hardware instead of theorycrafting.</p>
<p>Since the difference between training speeds of different vCPU counts is minimal, there is definitely an advantage to scaling down. For each model architecture and configuration, I calculate a <strong>normalized training cost relative to the cost of GPU instance training</strong>. Because GCE instance costs are prorated (unlike Amazon EC2), we can simply calculate experiment cost by multiplying the total number of seconds the experiment runs by the cost of the instance (per second). Ideally, we want to <em>minimize</em> cost.</p>
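<p>Concretely, the normalized cost is the run&rsquo;s wall-clock training time multiplied by its instance price, divided by the same quantity for the GPU run. A minimal sketch of that calculation (my own illustration, not the benchmark code; the 8 vCPU rate is estimated at roughly half the 16 vCPU rate):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Normalized training cost relative to the GPU instance: seconds of training
# multiplied by the per-second instance price, divided by the GPU equivalent.
# Prices are the preemptible rates quoted earlier; the 8 vCPU rate is an
# estimate (roughly half the 16 vCPU rate).
HOURLY_PRICE = {"gpu": 0.745, "cpu64": 0.509, "cpu32": 0.254, "cpu16": 0.127, "cpu8": 0.064}

def normalized_cost(config, train_seconds, gpu_train_seconds):
    cost = train_seconds * HOURLY_PRICE[config] / 3600
    gpu_cost = gpu_train_seconds * HOURLY_PRICE["gpu"] / 3600
    return cost / gpu_cost

# e.g. a 16 vCPU run that takes twice as long as the GPU run still costs
# only about a third as much:
print(round(normalized_cost("cpu16", train_seconds=7200, gpu_train_seconds=3600), 2))  # 0.34
</code></pre></div>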
<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-6_hu_c6ff3c375435199.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-6_hu_6bee6729ce48517c.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-6_hu_ea518ff15e46de10.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-6.png 1200w" src="dl-cpu-gpu-6.png"/> 
</figure>

<p>Lower vCPU counts are <em>much</em> more cost-effective for this problem, to the point where going as low as possible is best.</p>
<p>Now, let&rsquo;s look at the same dataset with a convolutional neural network (CNN) approach for digit classification:</p>
<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-7_hu_d3205561da4ed49c.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-7_hu_ae81ceba7d6092e6.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-7_hu_7a29bcea36dbe20e.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-7.png 1200w" src="dl-cpu-gpu-7.png"/> 
</figure>

<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-8_hu_64f1eac6ff5b2b3f.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-8_hu_c6dd20c1ccc111a5.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-8_hu_2fa65c3c187723bb.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-8.png 1200w" src="dl-cpu-gpu-8.png"/> 
</figure>

<p>GPUs are unsurprisingly more than twice as fast as any CPU approach at CNNs, but the cost structure is still the same, except that 64 vCPUs are now <em>worse</em> than the GPU cost-wise, with 32 vCPUs training even faster than 64 vCPUs.</p>
<p>Let&rsquo;s go deeper with CNNs and look at the <a href="https://www.cs.toronto.edu/%7Ekriz/cifar.html">CIFAR-10</a> image classification dataset, and a model which utilizes a deep convnet + a multilayer perceptron, an approach ideal for image classification (similar to the <a href="https://gist.github.com/baraldilorenzo/07d7802847aaad0a35d3">VGG-16</a> architecture).</p>
<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-9_hu_4a5cd8ba80674837.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-9_hu_a81280d52893c1c9.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-9_hu_af30edd0d3117cd8.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-9.png 1200w" src="dl-cpu-gpu-9.png"/> 
</figure>

<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-10_hu_a6061eb15b5b8609.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-10_hu_fe0751d9cd60a655.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-10_hu_a371016369278a9a.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-10.png 1200w" src="dl-cpu-gpu-10.png"/> 
</figure>

<p>Similar behaviors as in the simple CNN case, although in this instance all CPUs perform better with the compiled TensorFlow library.</p>
<p>The fasttext algorithm, used here on the <a href="http://ai.stanford.edu/%7Eamaas/data/sentiment/">IMDb reviews dataset</a> to determine whether a review is positive or negative, classifies text extremely quickly relative to other methods.</p>
<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-3_hu_12d55d02148bf0ea.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-3_hu_aaf9917a1629214f.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-3_hu_d51ed2e2c6fdec60.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-3.png 1200w" src="dl-cpu-gpu-3.png"/> 
</figure>

<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-4_hu_6b591a471f3027a4.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-4_hu_7cc361b383b25fb0.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-4_hu_4c516e76a92eff3c.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-4.png 1200w" src="dl-cpu-gpu-4.png"/> 
</figure>

<p>In this case, GPUs are much, much faster than CPUs. The benefit of lower numbers of vCPUs isn&rsquo;t as dramatic, although as an aside, the <a href="https://github.com/facebookresearch/fastText">official fasttext implementation</a> is <em>designed</em> for large numbers of CPUs and handles parallelization much better.</p>
<p>The Bidirectional long-short-term memory (LSTM) architecture is great for working with text data like IMDb reviews, but after my previous benchmark article, <a href="https://news.ycombinator.com/item?id=14538086">commenters on Hacker News</a> noted that TensorFlow uses an inefficient implementation of the LSTM on the GPU, so perhaps the difference will be more notable.</p>
<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-1_hu_4369b4e9e8856507.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-1_hu_3e65077eb16928e4.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-1_hu_d736592c927bd764.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-1.png 1200w" src="dl-cpu-gpu-1.png"/> 
</figure>

<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-2_hu_d8c58f429f4a781b.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-2_hu_1306d728b4fce90.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-2_hu_ad3d19e88738d072.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-2.png 1200w" src="dl-cpu-gpu-2.png"/> 
</figure>

<p>Wait, what? GPU training of Bidirectional LSTMs is <em>twice as slow</em> as any CPU configuration? Wow. (In fairness, the benchmark uses the Keras LSTM default of <code>implementation=0</code>, which is better on CPUs, while <code>implementation=2</code> is better on GPUs, but it shouldn&rsquo;t result in that much of a differential.)</p>
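<p>For reference, the implementation mode is just a constructor argument on the recurrent layer; a minimal sketch using the Keras 2.0-era API (illustrative layer sizes, not the exact benchmark model):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Minimal sketch of toggling the Keras LSTM implementation mode discussed
# above (Keras 2.0-era API; illustrative layer sizes, not the benchmark model).
from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, LSTM, Dense

model = Sequential()
model.add(Embedding(20000, 128, input_length=400))
# implementation=0 (the default at the time) uses fewer, larger matrix
# products that favor CPUs; implementation=2 combines the gate computations
# into a single matrix, which parallelizes better on GPUs.
model.add(Bidirectional(LSTM(64, implementation=2)))
model.add(Dense(1, activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
</code></pre></div>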
<p>Lastly, LSTM text generation of <a href="https://en.wikipedia.org/wiki/Friedrich_Nietzsche">Nietzsche&rsquo;s</a> <a href="https://s3.amazonaws.com/text-datasets/nietzsche.txt">writings</a> follows similar patterns to the other architectures, but without the drastic hit to the GPU.</p>
<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-11_hu_d84b78ad35a1f056.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-11_hu_d58d19568c89869.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-11_hu_c078d8bd94df56aa.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-11.png 1200w" src="dl-cpu-gpu-11.png"/> 
</figure>

<figure>

    <img loading="lazy" srcset="/2017/07/cpu-or-gpu/dl-cpu-gpu-12_hu_44c1d2cc10581f1a.webp 320w,/2017/07/cpu-or-gpu/dl-cpu-gpu-12_hu_27c08aabe3a3cacd.webp 768w,/2017/07/cpu-or-gpu/dl-cpu-gpu-12_hu_d41db5a45ef62daf.webp 1024w,/2017/07/cpu-or-gpu/dl-cpu-gpu-12.png 1200w" src="dl-cpu-gpu-12.png"/> 
</figure>

<h2 id="conclusion">Conclusion</h2>
<p>As it turns out, using 64 vCPUs is <em>bad</em> for deep learning as current software/hardware architectures can&rsquo;t fully utilize all of them, and it often results in the exact same performance as with 32 vCPUs (or <em>worse</em>). In terms of balancing both training speed and cost, training models with <strong>16 vCPUs + compiled TensorFlow</strong> seems like the winner. The 30%-40% speed boost of the compiled TensorFlow library was an unexpected surprise, and I&rsquo;m shocked Google doesn&rsquo;t offer a precompiled version of TensorFlow with these CPU speedups since the gains are nontrivial.</p>
<p>It&rsquo;s worth noting that the cost advantages shown here are <em>only</em> possible with preemptible instances; regular high-CPU instances on Google Compute Engine are about 5x as expensive, and as a result eliminate the cost benefits completely. Hooray for economies of scale!</p>
<p>A major implicit assumption with the cloud CPU training approach is that you don&rsquo;t need a trained model ASAP. In professional use cases, time may be too valuable to waste, but in personal use cases where someone can just leave a model training overnight, it&rsquo;s a very, very good and cost-effective option, and one that I&rsquo;ll now utilize.</p>
<hr>
<p><em>All scripts for running the benchmark are available in <a href="https://github.com/minimaxir/deep-learning-cpu-gpu-benchmark">this GitHub repo</a>. You can view the R/ggplot2 code used to process the logs and create the visualizations in <a href="http://minimaxir.com/notebooks/deep-learning-cpu-gpu/">this R Notebook</a>.</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>Video Games and Charity: Analyzing Awesome Games Done Quick 2016 Donations</title>
      <link>https://minimaxir.com/2016/01/agdq-2016/</link>
      <pubDate>Mon, 11 Jan 2016 08:00:00 -0700</pubDate>
      <guid>https://minimaxir.com/2016/01/agdq-2016/</guid>
      <description>Were frames killed? Were animals saved?</description>
      <content:encoded><![CDATA[<p><a href="https://gamesdonequick.com">Awesome Games Done Quick</a>, and its sister event Summer Games Done Quick, are a fundraising events that livestreams video game speedruns <a href="http://www.twitch.tv/gamesdonequick/profile">live on Twitch</a> for charity. Beginning in January 2011, before Twitch was launched out from Justin.tv, <a href="https://en.wikipedia.org/wiki/Awesome_Games_Done_Quick_and_Summer_Games_Done_Quick#List_of_marathons">AGDQ was very small</a> and only raised $52,519.83 for the <a href="http://preventcancer.org">Prevent Cancer Foundation</a>; now, in 2016, from January 3rd to January 10th, AGDQ <a href="https://gamesdonequick.com/tracker/index/agdq2016">successfully raised</a> about $1.2 <em>million</em> for the charity.</p>
<p>A <a href="https://en.wikipedia.org/wiki/Speedrun">speedrun</a>, as the name suggests, is the process of completing a video game as fast as possible, optionally with self-imposed challenges to make things more interesting. Speedruns can emphasize extreme player skill and/or clever glitch abuse. And unexpected mistakes which make the results hilarious.</p>
<p>One of the first runs of AGDQ 2016, <a href="https://www.youtube.com/watch?v=jLlian3g7Gg">Super Monkey Ball</a>, demonstrates all of these. (run starts at 5:57)</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/jLlian3g7Gg?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>AGDQ 2016 also has fun with the concept of speedrunning. One of the best events of AGDQ 2016 was a blind speedrun of user-created <a href="https://www.youtube.com/watch?v=8qC584MWXO4">Super Mario Maker</a> levels from top designers, in which hilarity ensued. (run starts at 27:41)</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube-nocookie.com/embed/8qC584MWXO4?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>It might be interesting to know <em>which</em> video games led to over $1M being donated to charity, and the nature of the donations in general.</p>
<h2 id="gaming-data">Gaming Data</h2>
<p>With a few quick scripts on Kimono to scrape data from the <a href="https://gamesdonequick.com/tracker/donations/agdq2016">AGDQ 2016 donation page</a> (+ a <em>lot</em> of postprocessing in R!), I obtained a dataset of all 30,528 donations, their donors, when they donated, during what speedrun they donated, and <em>why</em> they donated. (<a href="https://docs.google.com/spreadsheets/d/1yyfkS0jvRK1cWrQesYiBn1TMGC93lo1MqahcU3XeGIU/edit?usp=sharing">Google Sheets link</a> for all the data)</p>
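<p>(As a rough sketch of how the aggregates below could be computed, assuming a CSV export with columns like <code>timestamp</code>, <code>amount</code>, and <code>game</code>; the actual postprocessing for this post was done in R.)</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># Minimal sketch of the aggregations behind the charts below, using pandas and
# assumed column names (the actual postprocessing was done in R).
import pandas as pd

donations = pd.read_csv("agdq2016_donations.csv")  # one row per donation
donations["timestamp"] = pd.to_datetime(donations["timestamp"])

# Cumulative donation total over the course of the event
donations = donations.sort_values("timestamp")
donations["cumulative_total"] = donations["amount"].cumsum()

print(donations["amount"].mean())    # ~39.62
print(donations["amount"].median())  # ~20.00

# Total raised while each game was being run
per_game = donations.groupby("game")["amount"].sum().sort_values(ascending=False)
print(per_game.head(10))
</code></pre></div>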
<p>Here are the cumulative donations during AGDQ, color coded by day:</p>
<figure>

    <img loading="lazy" srcset="/2016/01/agdq-2016/agdq-1_hu_66974e3510a0c413.webp 320w,/2016/01/agdq-2016/agdq-1_hu_468164133f4b470c.webp 768w,/2016/01/agdq-2016/agdq-1_hu_7aadedea2b879a28.webp 1024w,/2016/01/agdq-2016/agdq-1.png 1200w" src="agdq-1.png"/> 
</figure>

<p>Cumulative donations were strong the entire run. On the second-to-last day, the donations rallied and increased exponentially, clearing $1M handily on the last day.</p>
<p>The donation amount minimum is $5, but the average is significantly higher at $39.62. What is the distribution of donations?</p>
<p>Here is a distribution of donations from $5 to $100 (for ease of visualization/interpretation), which account for 97% of all donations:</p>
<figure>

    <img loading="lazy" srcset="/2016/01/agdq-2016/agdq-2_hu_9c2b7ebdf2a868b5.webp 320w,/2016/01/agdq-2016/agdq-2_hu_f5e3c025fbd19725.webp 768w,/2016/01/agdq-2016/agdq-2_hu_1de3d44c61f8e931.webp 1024w,/2016/01/agdq-2016/agdq-2.png 1200w" src="agdq-2.png"/> 
</figure>

<p>The median donation amount is $20. What&rsquo;s interesting is that donations occur at clear break points: not only are there many donations at multiples of $10, but there are many donations at $25 and $75 as well. The $50 and $75 points also potentially benefited from being thresholds for entry into a <a href="https://gamesdonequick.com/tracker/prizes/agdq2016">grand prize raffle</a>. I&rsquo;ll note off-chart that there is a spike in $1,000 donations (the threshold for audience clapping in celebration), and that the <a href="https://gamesdonequick.com/tracker/donation/212588">top single donation</a> was made by an AGDQ sponsor, <a href="https://www.theyetee.com/">The Yetee</a>, at $18,225. The <a href="https://gamesdonequick.com/tracker/donation/209613">top donation by a non-sponsor</a> is from Minecraft creator Notch at $8,000, which he <a href="https://gamesdonequick.com/tracker/donation/234071">did twice</a>.</p>
<p>Which games are the most popular and generated the most amount of money for the Prevent Cancer Foundation?</p>
<figure>

    <img loading="lazy" srcset="/2016/01/agdq-2016/agdq-4_hu_491202455ca05cc1.webp 320w,/2016/01/agdq-2016/agdq-4_hu_8257ec8fbcd454ec.webp 768w,/2016/01/agdq-2016/agdq-4_hu_bdf86800d6fb1d8.webp 1024w,/2016/01/agdq-2016/agdq-4.png 1200w" src="agdq-4.png"/> 
</figure>

<p>Unsurprisingly, Nintendo games are the most popular due to the nostalgia factor. In fairness, the top runs on this chart occur during the last two days of AGDQ 2016, which as mentioned previously may have been affected by a rally, so we cannot assert causality. The appearance of <a href="http://yachtclubgames.com/shovel-knight/">Shovel Knight</a> and <a href="https://en.wikipedia.org/wiki/Bloodborne">Bloodborne</a> as leading donation games, both relatively recently released, shows that speedrunning has more appeal than just retro games.</p>
<p>A popular technique in charity drives is donation incentives, which help bolster the number of donations total. The <a href="https://gamesdonequick.com/tracker/bids/agdq2016">AGDQ bid incentives</a> can include bonus game segments, or certain game decisions, such as what name to give to a main character.</p>
<p>Any donation can optionally be assigned as a donation toward an incentive. Which run received the most money toward incentives?</p>
<figure>

    <img loading="lazy" srcset="/2016/01/agdq-2016/agdq-5_hu_a4d76f42b8ded894.webp 320w,/2016/01/agdq-2016/agdq-5_hu_7cb82c39a228c7b6.webp 768w,/2016/01/agdq-2016/agdq-5_hu_4ad743cd80b22cd2.webp 1024w,/2016/01/agdq-2016/agdq-5.png 1200w" src="agdq-5.png"/> 
</figure>

<p>Super Metroid donation incentives accounted for nearly <em>a quarter</em> of all the money raised at AGDQ. Final Fantasy IV accounted for a large amount as well; however, in both cases, the rally effect may apply.</p>
<p>Speaking of the Super Metroid donation incentives, this particular incentive is one of the most culturally important in the show. Super Metroid has an optional objective to Save The Animals from planetary destruction, but doing so costs time, and time is everything in a speedrun. Alternatively, the speedrunner can Kill The Animals through inaction for efficiency, &ldquo;Saving the Frames.&rdquo;</p>
<p>What is the split of these incentive choices?</p>
<figure>

    <img loading="lazy" srcset="/2016/01/agdq-2016/agdq-6_hu_926af7a059c72381.webp 320w,/2016/01/agdq-2016/agdq-6_hu_c8db0a5dfb93768c.webp 768w,/2016/01/agdq-2016/agdq-6_hu_d9dfc97884479bde.webp 1024w,/2016/01/agdq-2016/agdq-6.png 1200w" src="agdq-6.png"/> 
</figure>

<p>Yes, the animals were killed (specifically, they were <strong>REKT</strong>, in the words of a last-minute donator). Both bonus games and vanity naming were popular, but nothing compared to the Save the Animals / Kill the Animals bid war.</p>
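<p>Tallying the bid war is just as simple, assuming the bid-incentive donations were scraped into their own table with <code>bid_name</code> and <code>amount</code> columns (again, the names are illustrative):</p>
<pre><code class="language-python"># Hypothetical bid table: one row per donation put toward a named incentive.
bid_donations = pd.read_csv("agdq2016_bid_donations.csv")

animals = bid_donations[
    bid_donations["bid_name"].isin(["Save the Animals", "Kill the Animals"])
]
print(animals.groupby("bid_name")["amount"].sum())
</code></pre>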
<p>Lastly, people can leave comments with their donations, and these comments are usually read on-stream when possible, as you&rsquo;ve likely noticed if you&rsquo;ve watched the videos above.</p>
<p>Here&rsquo;s a fun, nonscientific word cloud of those comments:</p>
<figure>

    <img loading="lazy" srcset="/2016/01/agdq-2016/agdq-7_hu_6b77b0f9544e6225.webp 320w,/2016/01/agdq-2016/agdq-7_hu_c496b8c2ac3dd629.webp 768w,/2016/01/agdq-2016/agdq-7.png 1000w" src="agdq-7.png"/> 
</figure>

<p>Lots of positivity, aside from the whole &ldquo;Kill the Animals&rdquo; thing. After all, the event is all about preventing cancer.</p>
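<p>A word cloud like the one above is easy to approximate with the third-party <code>wordcloud</code> package, assuming the scrape kept a <code>comment</code> column (the original chart may have been made with a different tool):</p>
<pre><code class="language-python">from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Join all non-empty donation comments into one block of text.
text = " ".join(donations["comment"].dropna().astype(str))

wc = WordCloud(width=1000, height=600, background_color="white").generate(text)

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
</code></pre>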
<p>While this post isn&rsquo;t an academic analysis, it&rsquo;s neat to see what kinds of things drive donations to charity. This model of charitable livestreaming is very successful, and increasingly relevant given Twitch&rsquo;s rise to mainstream attention alongside the rise of personal streaming with apps like Periscope. Donation incentives are a <em>very</em> successful technique for facilitating donations.</p>
<p>It will be interesting to see whether Twitch and events like AGDQ can further leverage charitable livestreaming, or whether another startup or organization beats them to it. Bidding-Wars-for-Deciding-the-Fate-of-Fictional-Animals-as-a-Service has a nice ring to it.</p>
<hr>
<p><em>You can access a Jupyter notebook with the data processing and chart processing code <a href="https://github.com/minimaxir/agdq-2016">in this GitHub repository</a>. If you use the processed donation data or visualization designs contained within this article, it would be greatly appreciated if proper attribution is given back to this article and/or myself. Thanks! :)</em></p>
<p><em>Note for donation incentive statistics: a user can theoretically split their donation among multiple incentives; unfortunately, at the time of scraping, I assumed that all of the money could only go toward one bid. All donations I investigated from the source data were toward a single incentive, except for two donations from The Yetee, which I fixed manually. If there are any data discrepancies, that is the likely cause.</em></p>
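<p><em>If the tracker data exposed per-bid allocations, a more faithful tally would sum the allocated amounts rather than assign each donation&rsquo;s full value to a single bid. A minimal sketch, with hypothetical file and column names:</em></p>
<pre><code class="language-python"># Hypothetical allocation table: one row per (donation, bid) pair, with the
# portion of the donation assigned to that bid.
allocations = pd.read_csv("agdq2016_bid_allocations.csv")

per_bid = allocations.groupby("bid_name")["allocated_amount"].sum()
print(per_bid.sort_values(ascending=False).head(10))
</code></pre>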
]]></content:encoded>
    </item>
  </channel>
</rss>
