<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Cloud Computing on Max Woolf&#39;s Blog</title>
    <link>https://minimaxir.com/category/cloud-computing/</link>
    <description>Recent content in Cloud Computing on Max Woolf&#39;s Blog</description>
    <image>
      <title>Max Woolf&#39;s Blog</title>
      <url>https://minimaxir.com/android-chrome-512x512.png</url>
      <link>https://minimaxir.com/android-chrome-512x512.png</link>
    </image>
    <generator>Hugo</generator>
    <language>en</language>
    <copyright>Copyright Max Woolf © 2026</copyright>
    <lastBuildDate>Mon, 19 Nov 2018 09:00:00 -0700</lastBuildDate>
    <atom:link href="https://minimaxir.com/category/cloud-computing/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Run Any Scheduled Task/Cron Super-Cheap on Google Cloud Platform</title>
      <link>https://minimaxir.com/2018/11/cheap-cron/</link>
      <pubDate>Mon, 19 Nov 2018 09:00:00 -0700</pubDate>
      <guid>https://minimaxir.com/2018/11/cheap-cron/</guid>
      <description>Thanks to a few new synergies within GCP products, it&amp;rsquo;s possible to get the cost of running a scheduled task down to less than a dollar a month.</description>
      <content:encoded><![CDATA[<p>Let&rsquo;s say you want to make a <a href="https://twitter.com">Twitter</a> bot to tweet out a custom message every few hours or so, and the free-tier VMs offered by cloud services with fractional virtual CPUs and little RAM aren&rsquo;t sufficient. How do you host the bot? Many suggest you get a <a href="https://www.digitalocean.com">Digital Ocean</a> VM for <a href="https://www.digitalocean.com/pricing/">$5/mo</a>, which is not a bad price. But what if you want to run <em>multiple</em> bots? How do you easily coordinate multiple scheduled tasks?</p>
<p>In my case, I maintain three bots: a bot which <a href="https://twitter.com/MTGIFening">tweets GIFs</a> superimposed onto Magic: The Gathering cards, a bot which <a href="https://twitter.com/hackernews_nn">tweets AI-generated Hacker News submission titles</a>, and a bot which makes <a href="https://www.reddit.com/r/subredditnn">AI-generated Reddit submissions</a>. I found a clever solution to the multiple-bots problem: leveraging <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs">CronJobs</a> with <a href="https://cloud.google.com/kubernetes-engine/">Google Kubernetes Engine</a> + a single worker node. Each bot has its own CronJob which tells GKE when to schedule a Job for that task; the cluster then executes the Jobs whenever compute capacity is available (i.e. no resource hogging/race conditions), and can ensure completion by restarting a task if it fails.</p>
<figure>

    <img loading="lazy" srcset="/2018/11/cheap-cron/kubecron_hu_257738fd32526fcd.webp 320w,/2018/11/cheap-cron/kubecron_hu_60eaa4e7513fc57f.webp 768w,/2018/11/cheap-cron/kubecron_hu_51e1e5c783f05f18.webp 1024w,/2018/11/cheap-cron/kubecron.png 1132w" src="kubecron.png"/> 
</figure>

<p>The cost of running a cluster in GKE is just the cost of the compute: using a preemptible n1-standard-1 (1 vCPU/3.75 GB RAM) worker node VM, the <a href="https://cloud.google.com/compute/pricing">cost</a> is about <strong>$7.30/mo</strong>: a bit more than the Digital Ocean server, but it can theoretically handle an unlimited number of scheduled tasks. The problem is that the worker node needs to be up 24/7 even though the bots only run sporadically.</p>
<p>But thanks to a few new synergies within GCP products, it&rsquo;s possible to get the cost of running a scheduled task down to <em>less than a dollar a month</em>.</p>
<h2 id="gcp-shenanigans">GCP Shenanigans</h2>
<p><a href="https://cloud.google.com/blog/products/application-development/announcing-cloud-scheduler-a-modern-managed-cron-service-for-automated-batch-jobs">A couple weeks ago</a>, Google released <a href="https://cloud.google.com/scheduler/">Cloud Scheduler</a>, which is a managed cron service that can perform tasks for other Google services. With that launch, Google also released a tutorial titled <a href="https://cloud.google.com/scheduler/docs/scheduling-instances-with-cloud-scheduler">Scheduling Instances with Cloud Scheduler</a>, demonstrating how you can programmatically start and stop instances using Cloud Scheduler in conjunction with <a href="https://cloud.google.com/functions/">Cloud Functions</a>. The demo use case is to schedule VMs during business hours, which gave me an idea: could this approach be used to boot up a VM, run a script, and then shut it down to minimize uptime?</p>
<p>I followed the tutorial instructions, which contain code to create a Cloud Function that boots up a specified VM, code for a Cloud Function that shuts down a specified VM, and steps for creating cron jobs in Cloud Scheduler to invoke those two Functions at specified times. For my scheduled tasks, I only need the instance up for a couple of minutes: for example, we can use a Cloud Scheduler job to start an instance every 4 hours at X:00 with the cron <code>0 */4 * * *</code>, and shut it down 2 minutes later at X:02 with the cron <code>2 */4 * * *</code>.</p>
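<p>As a sanity check, the daily uptime implied by a start/stop cron pair like the one above can be sketched in a few lines (the cron expressions are from this post; everything else is illustrative):</p>

```python
# Daily VM uptime implied by a pair of start/stop crons:
# "0 */4 * * *" starts the VM at minute 0 every 4 hours;
# "2 */4 * * *" stops it at minute 2 of the same hours.
start_minute, stop_minute = 0, 2
runs_per_day = 24 // 4                        # one start every 4 hours

uptime_per_run = stop_minute - start_minute   # minutes per run
daily_uptime = uptime_per_run * runs_per_day  # minutes per day

print(daily_uptime)  # 12 minutes of VM uptime a day
```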
<p>The next step is configuring a Google Compute Engine VM to run the scheduled task on boot. There are two ways to go about it: one is to use the <code>startup-script</code> field when configuring a VM, which specifies a command to run on boot and gets the job done. Another approach (which I use) is to package the scheduled task as a <a href="https://www.docker.com/resources/what-container">Docker Container</a>, and use a container-optimized OS which simply runs a specified container upon boot (although you should set restart to <code>On Failure</code> and give <code>Privileged Access</code> to the container). Additionally, the VM can be configured as a preemptible instance for massive cost savings, as the &ldquo;shut-down-at-anytime&rdquo; constraint is irrelevant for this use case!</p>
<figure>

    <img loading="lazy" srcset="/2018/11/cheap-cron/vm_hu_60ea09aeebe0f061.webp 320w,/2018/11/cheap-cron/vm_hu_5bae417d9bdcd8ce.webp 768w,/2018/11/cheap-cron/vm_hu_1d7bf6105857ee53.webp 1024w,/2018/11/cheap-cron/vm.png 1066w" src="vm.png"/> 
</figure>

<p>After the VMs are created, I created the start/stop tasks targeting those VMs as noted in the tutorial.</p>
<figure>

    <img loading="lazy" srcset="/2018/11/cheap-cron/scheduler_hu_f3821e91f9e5b9b.webp 320w,/2018/11/cheap-cron/scheduler_hu_b50652bfa915e2ea.webp 768w,/2018/11/cheap-cron/scheduler_hu_4245bc81e840ba03.webp 1024w,/2018/11/cheap-cron/scheduler.png 1446w" src="scheduler.png"/> 
</figure>

<p>I can verify that this workflow indeed works for all my bots, and the crons have been running successfully at the specified times, each for only a couple of minutes!</p>
<figure>

    <img loading="lazy" srcset="/2018/11/cheap-cron/working_hu_ff6618b7926cdd5b.webp 320w,/2018/11/cheap-cron/working_hu_b32d4c5569f2af3a.webp 768w,/2018/11/cheap-cron/working_hu_4917f0b4d5f52f8a.webp 1024w,/2018/11/cheap-cron/working.png 1790w" src="working.png"/> 
</figure>

<h2 id="crunching-the-numbers">Crunching the Numbers</h2>
<p>This approach incorporates many different Google products. Is it <em>actually</em> cheaper than just maintaining a simple $5/month server? Let&rsquo;s calculate the monthly cost of all these services.</p>
<p>Assuming that we run a scheduled task every 4 hours, and the server is up for 2 minutes each time (i.e. 12 minutes of uptime a day):</p>
<ul>
<li><strong>Compute Engine</strong>: A preemptible n1-standard-1 is <a href="https://cloud.google.com/compute/pricing">$0.01 an hour</a>. <code>$0.01 / 60 * 12 * 30 = $0.06</code></li>
<li><strong>VM Persistent Disk</strong>: Each GB of storage for a VM costs <a href="https://cloud.google.com/compute/pricing#disk">$0.04/month</a>, and the minimum storage size is 10GB. <code>$0.04 * 10 = $0.40</code></li>
<li><strong>Cloud Scheduler</strong>: Each rule is <a href="https://cloud.google.com/scheduler/pricing">$0.10/month</a>, and there are both a start and a stop rule. <code>$0.10 * 2 = $0.20</code></li>
<li><strong>Cloud Functions</strong>: It takes about 60 seconds total to turn a VM on and off, and with the default 256MB provision, a Function costs <a href="https://cloud.google.com/functions/pricing">$0.000000463/100ms</a> while running. <code>0.000000463 * 10 * 60 * 12 * 30 = $0.10</code></li>
</ul>
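<p>The itemized estimate above can be reproduced in a few lines (a sketch of the arithmetic, not official pricing):</p>

```python
# Monthly cost estimate: preemptible n1-standard-1, a task every
# 4 hours, ~2 minutes of VM uptime per run (12 min/day).
vm = 0.01 / 60 * 12 * 30                      # $0.01/hr, 12 min/day
disk = 0.04 * 10                              # 10 GB minimum persistent disk
scheduler = 0.10 * 2                          # one start rule + one stop rule
functions = 0.000000463 * 10 * 60 * 12 * 30   # $0.000000463 per 100ms

total = vm + disk + scheduler + functions
print(round(total, 2))  # 0.76
```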
<p>$0.06 + $0.40 + $0.20 + $0.10 = <strong>$0.76/month to run the scheduled task</strong>! That&rsquo;s not even counting the free tier bonuses if you just want to create one scheduled task; in that case, the only price you pay is the $0.06/mo for the VM. And even in the case where you run the task every hour (like in the images above), the cost is $1.24/month; still not bad.</p>
<p>It&rsquo;s worth noting that these pricing economics wouldn&rsquo;t have worked years ago. Back then <a href="https://aws.amazon.com">Amazon Web Services</a>, the leader in web services, charged for a minimum of 1 hour every time a VM was booted. Google Compute Engine innovated by only requiring a minimum of 10 minutes, which is much better but still would have had unnecessary overhead (in this example, it would increase monthly compute costs by $0.24, or 31% of the total). <a href="https://cloud.google.com/blog/products/gcp/extending-per-second-billing-in-google">As of September 2017</a>, Google Compute Engine charges a minimum of <strong>1 minute</strong>, which makes this workflow possible and cheap (AWS made the same change <a href="https://aws.amazon.com/blogs/aws/new-per-second-billing-for-ec2-instances-and-ebs-volumes/">a week earlier</a>).</p>
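<p>To see where the $0.24 of overhead comes from, compare the two billing minimums for this workload (a sketch using the rate and schedule from this post):</p>

```python
# Compute-cost impact of billing minimums for a 2-minute task run
# 6 times a day on a $0.01/hr preemptible n1-standard-1.
rate_per_min = 0.01 / 60
runs_per_month = 6 * 30

per_minute_billing = rate_per_min * 2 * runs_per_month   # billed 2 min/run
ten_min_minimum = rate_per_min * 10 * runs_per_month     # billed 10 min/run

overhead = ten_min_minimum - per_minute_billing
print(round(overhead, 2))  # 0.24 extra per month
```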
<p>It&rsquo;s also possible that similar workflows exist for AWS and <a href="https://azure.microsoft.com/en-us/">Azure Cloud</a>, although I&rsquo;m less familiar with those platforms (and they may not necessarily be better/cheaper). Sure, if you have a very simple task to practice making bots in the cloud, the free tier of any cloud service might suffice (where you run the server all the time, and schedule the cron on the server itself). If you&rsquo;re planning many scheduled tasks, then a centralized approach like my initial Kubernetes implementation might actually be more cost effective. But if you&rsquo;re somewhere <em>in between</em>, then giving each scheduled task its own VM makes more sense for both ease of use and cost-effectiveness. And there are still many further optimizations to be made (for example, allowing the script in the VM to ping an HTTP Cloud Function endpoint and shut <em>itself</em> off when complete instead of using a scheduled cron rule).</p>
]]></content:encoded>
    </item>
    <item>
      <title>Benchmarking Modern GPUs for Maximum Cloud Cost Efficiency in Deep Learning</title>
      <link>https://minimaxir.com/2017/11/benchmark-gpus/</link>
      <pubDate>Tue, 28 Nov 2017 08:30:00 -0700</pubDate>
      <guid>https://minimaxir.com/2017/11/benchmark-gpus/</guid>
      <description>A 36% price cut to GPU instances, in addition to the potential new benefits offered by software and GPU updates, however, might be enough to tip the cost-efficiency scales back in favor of GPUs.</description>
      <content:encoded><![CDATA[<p>A few months ago, I <a href="http://minimaxir.com/2017/06/keras-cntk/">performed benchmarks</a> of deep learning frameworks in the cloud, with a <a href="http://minimaxir.com/2017/07/cpu-or-gpu/">followup</a> focusing on the cost difference between using GPUs and CPUs. And just a few months later, the landscape has changed, with significant updates to the low-level <a href="https://developer.nvidia.com/cudnn">NVIDIA cuDNN</a> library which powers the raw learning on the GPU, the <a href="https://www.tensorflow.org">TensorFlow</a> and <a href="https://github.com/Microsoft/CNTK">CNTK</a> deep learning frameworks, and the higher-level <a href="https://github.com/fchollet/keras">Keras</a> framework which uses TensorFlow/CNTK as backends for easy deep learning model training.</p>
<p>As a bonus to the framework updates, Google <a href="https://cloudplatform.googleblog.com/2017/09/introducing-faster-GPUs-for-Google-Compute-Engine.html">recently released</a> the newest generation of NVIDIA cloud GPUs, the Pascal-based P100, onto <a href="https://cloud.google.com/compute/">Google Compute Engine</a> which touts an up-to-10x performance increase to the current K80 GPUs used in cloud computing. As a bonus bonus, Google recently <a href="https://cloudplatform.googleblog.com/2017/11/new-lower-prices-for-GPUs-and-preemptible-Local-SSDs.html">cut the prices</a> of both K80 and P100 GPU instances by up to 36%.</p>
<p>The results of my earlier benchmarks favored <a href="https://cloud.google.com/preemptible-vms/">preemptible</a> instances with many CPUs as the most cost efficient option (where a preemptable instance can only last for up to 24 hours and could end prematurely). A 36% price cut to GPU instances, in addition to the potential new benefits offered by software and GPU updates, however, might be enough to tip the cost-efficiency scales back in favor of GPUs. It&rsquo;s a good idea to rerun the experiment with updated VMs and see what happens.</p>
<h2 id="benchmark-setup">Benchmark Setup</h2>
<p>As with the original benchmark, I set up a <a href="https://github.com/minimaxir/keras-cntk-docker">Docker container</a> containing the deep learning frameworks (based on cuDNN 6, the latest version of cuDNN natively supported by the frameworks) that can be used to train each model independently. The <a href="https://github.com/minimaxir/keras-cntk-benchmark/tree/master/v2/test_files">Keras benchmark scripts</a> run on the containers are based off of <em>real world</em> use cases of deep learning.</p>
<p>The 6 hardware/software configurations and Google Compute Engine <a href="https://cloud.google.com/compute/pricing">pricings</a> for the tests are:</p>
<ul>
<li>A K80 GPU (attached to a <code>n1-standard-1</code> instance), tested with both TensorFlow (1.4) and CNTK (2.2): <strong>$0.4975 / hour</strong>.</li>
<li>A P100 GPU (attached to a <code>n1-standard-1</code> instance), tested with both TensorFlow and CNTK: <strong>$1.5075 / hour</strong>.</li>
<li>A preemptable <code>n1-highcpu-32</code> instance, with 32 vCPUs based on the Intel Skylake architecture, tested with TensorFlow only: <strong>$0.2400 / hour</strong></li>
<li>A preemptable <code>n1-highcpu-16</code> instance, with 16 vCPUs based on the Intel Skylake architecture, tested with TensorFlow only: <strong>$0.1200 / hour</strong></li>
</ul>
<p>A single K80 GPU uses 1/2 a GPU board while a single P100 uses a full GPU board, which in an ideal world would suggest that the P100 is at minimum twice as fast as the K80. But even so, the P100 configuration is about 3 times as expensive, so even if a model is trained in half the time, it may not necessarily be cheaper with the P100.</p>
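<p>The break-even point falls out of the hourly prices listed above (a quick sketch):</p>

```python
# How much faster must the P100 be than the K80 to be cheaper per run?
k80_hourly = 0.4975    # K80 + n1-standard-1, $/hour
p100_hourly = 1.5075   # P100 + n1-standard-1, $/hour

# The P100 must beat this speedup to cost less per training run,
# so a 2x speedup (full board vs. half board) is not enough.
breakeven_speedup = p100_hourly / k80_hourly
print(round(breakeven_speedup, 2))  # ~3.03x
```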
<p>Also, the CPU tests use TensorFlow <em>as installed via the recommended method</em> through pip, since compiling the TensorFlow binary from scratch to take advantage of CPU instructions as <a href="http://minimaxir.com/2017/07/cpu-or-gpu/">with my previous test</a> is not a pragmatic workflow for casual use.</p>
<h2 id="benchmark-results">Benchmark Results</h2>
<p>When a fresh-out-of-an-AI-MOOC engineer wants to experiment with deep learning in the cloud, typically they use a K80 + TensorFlow setup, so we&rsquo;ll use that as the <em>base configuration</em>.</p>
<p>For each model architecture and software/hardware configuration, I calculate the <strong>total training time relative to the base configuration instance training</strong> for running the model training for the provided test script. In all cases, the P100 GPU <em>should</em> perform better than the K80, and 32 vCPUs <em>should</em> train faster than 16 vCPUs. The question is how <em>much</em> faster?</p>
<p>Let&rsquo;s start using the <a href="http://yann.lecun.com/exdb/mnist/">MNIST dataset</a> of handwritten digits plus the common multilayer perceptron (MLP) architecture, with dense fully-connected layers. Lower training time is better.</p>
<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/dl-cpu-gpu-5_hu_df63751b48270991.webp 320w,/2017/11/benchmark-gpus/dl-cpu-gpu-5_hu_33351b8d5d2916d3.webp 768w,/2017/11/benchmark-gpus/dl-cpu-gpu-5_hu_773ee4a74d2ce535.webp 1024w,/2017/11/benchmark-gpus/dl-cpu-gpu-5.png 1200w" src="dl-cpu-gpu-5.png"/> 
</figure>

<p>For this task, CNTK appears to be more effective than TensorFlow. Indeed, the P100 is faster than the K80 for the corresponding framework, although it&rsquo;s not a dramatic difference. However, since the task is simple, the CPU performance is close to that of the GPU, which implies that the GPU is not as cost effective for a simple architecture.</p>
<p>For each model architecture and configuration, I calculate a <strong>normalized training cost relative to the cost of the base configuration training</strong>. Because GCE instance costs are prorated, we can simply calculate experiment cost by multiplying the total number of seconds the experiment runs by the cost of the instance (per second).</p>
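<p>A sketch of that normalization (hourly rates from the setup section; the training times are hypothetical placeholders, not benchmark results):</p>

```python
# Normalized training cost relative to the base config (K80 + TensorFlow).
# GCE bills per second, so cost = seconds * (hourly rate / 3600).
def run_cost(seconds: float, hourly_rate: float) -> float:
    return seconds * hourly_rate / 3600

base_cost = run_cost(1000, 0.4975)   # hypothetical 1000 s on the K80

# A hypothetical P100 run finishing in half the time still
# normalizes above 1.0 because of the ~3x price premium:
p100_cost = run_cost(500, 1.5075)
print(round(p100_cost / base_cost, 2))  # ~1.52
```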
<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/dl-cpu-gpu-6_hu_8092aa4efa0c4355.webp 320w,/2017/11/benchmark-gpus/dl-cpu-gpu-6_hu_6ec85d77120003f7.webp 768w,/2017/11/benchmark-gpus/dl-cpu-gpu-6_hu_3fa9ff93fed554d5.webp 1024w,/2017/11/benchmark-gpus/dl-cpu-gpu-6.png 1200w" src="dl-cpu-gpu-6.png"/> 
</figure>

<p>Unsurprisingly, CPUs are more cost effective. However, the P100 is more cost <em>ineffective</em> for this task than the K80.</p>
<p>Now, let&rsquo;s look at the same dataset with a convolutional neural network (CNN) approach for digit classification. Since CNNs are typically used for computer vision tasks, new graphics card architectures are optimized for CNN workflows, so it will be interesting to see how the P100 performs compared to the K80:</p>
<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/dl-cpu-gpu-7_hu_f8361510000c69ef.webp 320w,/2017/11/benchmark-gpus/dl-cpu-gpu-7_hu_a5e4bb39cb0f4851.webp 768w,/2017/11/benchmark-gpus/dl-cpu-gpu-7_hu_13b371e4d8afa6c9.webp 1024w,/2017/11/benchmark-gpus/dl-cpu-gpu-7.png 1200w" src="dl-cpu-gpu-7.png"/> 
</figure>

<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/dl-cpu-gpu-8_hu_f4a994fcdbd47c8f.webp 320w,/2017/11/benchmark-gpus/dl-cpu-gpu-8_hu_94b3b6c80d09cc47.webp 768w,/2017/11/benchmark-gpus/dl-cpu-gpu-8_hu_ca2831240a30c8c.webp 1024w,/2017/11/benchmark-gpus/dl-cpu-gpu-8.png 1200w" src="dl-cpu-gpu-8.png"/> 
</figure>

<p>Indeed, the P100 is twice as fast as the K80, but due to the huge cost premium, it&rsquo;s not cost effective for this simple task. However, CPUs do not perform well on this task either, so, notably, the base configuration is the best configuration here.</p>
<p>Let&rsquo;s go deeper with CNNs and look at the <a href="https://www.cs.toronto.edu/%7Ekriz/cifar.html">CIFAR-10</a> image classification dataset, and a model which utilizes a deep convnet + a multilayer perceptron and is well-suited for image classification (similar to the <a href="https://gist.github.com/baraldilorenzo/07d7802847aaad0a35d3">VGG-16</a> architecture).</p>
<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/dl-cpu-gpu-9_hu_3e89a9d69d2114d8.webp 320w,/2017/11/benchmark-gpus/dl-cpu-gpu-9_hu_188420deeffa2cca.webp 768w,/2017/11/benchmark-gpus/dl-cpu-gpu-9_hu_2994e1dc8b68f244.webp 1024w,/2017/11/benchmark-gpus/dl-cpu-gpu-9.png 1200w" src="dl-cpu-gpu-9.png"/> 
</figure>

<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/dl-cpu-gpu-10_hu_4c8240dc9addd1a4.webp 320w,/2017/11/benchmark-gpus/dl-cpu-gpu-10_hu_e38edfb433bf8413.webp 768w,/2017/11/benchmark-gpus/dl-cpu-gpu-10_hu_a879b46166fddc6d.webp 1024w,/2017/11/benchmark-gpus/dl-cpu-gpu-10.png 1200w" src="dl-cpu-gpu-10.png"/> 
</figure>

<p>Similar results to that of a normal MLP. Nothing fancy.</p>
<p>The bidirectional long short-term memory (LSTM) architecture is great for working with text data like IMDb reviews. When I wrote <a href="http://minimaxir.com/2017/06/keras-cntk/">my first benchmark article</a>, I noticed that CNTK performed significantly better than TensorFlow, and <a href="https://news.ycombinator.com/item?id=14538086">commenters on Hacker News</a> noted that TensorFlow uses an inefficient implementation of the LSTM on the GPU.</p>
<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/cntk-old_hu_b86c227c88de2e7d.webp 320w,/2017/11/benchmark-gpus/cntk-old_hu_3901dc880777da18.webp 768w,/2017/11/benchmark-gpus/cntk-old_hu_8d49b907914bb06b.webp 1024w,/2017/11/benchmark-gpus/cntk-old.png 1620w" src="cntk-old.png"/> 
</figure>

<p>However, with Keras&rsquo;s <a href="https://keras.io/layers/recurrent/#cudnnlstm">new CuDNNRNN layers</a> which leverage cuDNN, this inefficiency may be fixed, so for the K80/P100 TensorFlow GPU configs, I use a CuDNNLSTM layer instead of a normal LSTM layer. So let&rsquo;s take another look:</p>
<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/dl-cpu-gpu-1_hu_f633549e7615557a.webp 320w,/2017/11/benchmark-gpus/dl-cpu-gpu-1_hu_c8eb1a82936955a7.webp 768w,/2017/11/benchmark-gpus/dl-cpu-gpu-1_hu_734746132ba497c3.webp 1024w,/2017/11/benchmark-gpus/dl-cpu-gpu-1.png 1200w" src="dl-cpu-gpu-1.png"/> 
</figure>

<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/dl-cpu-gpu-2_hu_6f0e2078d0fbe4a8.webp 320w,/2017/11/benchmark-gpus/dl-cpu-gpu-2_hu_f5299cfcd4184de5.webp 768w,/2017/11/benchmark-gpus/dl-cpu-gpu-2_hu_9c9b4dbee5321cd.webp 1024w,/2017/11/benchmark-gpus/dl-cpu-gpu-2.png 1200w" src="dl-cpu-gpu-2.png"/> 
</figure>

<p><em>WOAH.</em> TensorFlow is now more than <em>three times as fast</em> as CNTK! (And compared against my previous benchmark, TensorFlow on the K80 w/ the CuDNNLSTM is about <em>7x as fast</em> as it once was!) Even the CPU-only versions of TensorFlow are faster than CNTK on the GPU now, which implies significant improvements in the ecosystem outside of the CuDNNLSTM layer itself. (And as a result, CPUs are still more cost efficient.)</p>
<p>Lastly, LSTM text generation of <a href="https://en.wikipedia.org/wiki/Friedrich_Nietzsche">Nietzsche&rsquo;s</a> <a href="https://s3.amazonaws.com/text-datasets/nietzsche.txt">writings</a> follows similar patterns to the other architectures, but without the drastic hit to the GPU.</p>
<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/dl-cpu-gpu-11_hu_e64be99549e22a4a.webp 320w,/2017/11/benchmark-gpus/dl-cpu-gpu-11_hu_c9e45139e2d4d36b.webp 768w,/2017/11/benchmark-gpus/dl-cpu-gpu-11_hu_73f05d523cc746fa.webp 1024w,/2017/11/benchmark-gpus/dl-cpu-gpu-11.png 1200w" src="dl-cpu-gpu-11.png"/> 
</figure>

<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/dl-cpu-gpu-12_hu_18c099feff0cab3f.webp 320w,/2017/11/benchmark-gpus/dl-cpu-gpu-12_hu_346cce6ac1dd882a.webp 768w,/2017/11/benchmark-gpus/dl-cpu-gpu-12_hu_784cadffdd30380.webp 1024w,/2017/11/benchmark-gpus/dl-cpu-gpu-12.png 1200w" src="dl-cpu-gpu-12.png"/> 
</figure>

<h2 id="conclusions">Conclusions</h2>
<p>The biggest surprise of these new benchmarks is that there is no configuration where the P100 is the most cost-effective option, even though the P100 is indeed faster than the K80 in all tests. Per <a href="https://developer.nvidia.com/cudnn">the cuDNN website</a>, there is apparently only a 2x speed increase between the K80 and the P100 using cuDNN 6, which is mostly consistent with the results of my benchmarks:</p>
<figure>

    <img loading="lazy" srcset="/2017/11/benchmark-gpus/cudnn_hu_354d8fa8ab3eff29.webp 320w,/2017/11/benchmark-gpus/cudnn_hu_bb346ea37595e154.webp 768w,/2017/11/benchmark-gpus/cudnn_hu_9b3f6e3ea7ba3a02.webp 1024w,/2017/11/benchmark-gpus/cudnn.png 1688w" src="cudnn.png"/> 
</figure>

<p>I did not include a multi-GPU configuration in the benchmark data visualizations above using Keras&rsquo;s new <code>multi_gpu_model</code> <a href="https://keras.io/utils/#multi_gpu_model">function</a> because in my testing, the multi-GPU training <em>was equal to or worse than a single GPU</em> in all tests.</p>
<p>Taking these together, it&rsquo;s possible that the overhead introduced by parallel, advanced architectures <em>eliminates the benefits</em> for &ldquo;normal&rdquo; deep learning workloads which do not fully saturate the GPU. Rarely do people talk about diminishing returns in GPU performance with deep learning.</p>
<p>In the future, I want to benchmark deep learning performance against more advanced deep learning use cases such as <a href="https://en.wikipedia.org/wiki/Reinforcement_learning">reinforcement learning</a> and deep CNNs like <a href="https://github.com/tensorflow/models/tree/master/research/inception">Inception</a>. But that doesn&rsquo;t mean these benchmarks are not relevant; as stated during the benchmark setup, the GPUs were tested against typical deep learning use cases, and now we see the tradeoffs that result.</p>
<p>In all, with the price cuts on GPU instances, cost-performance is often <em>on par</em> with preemptable CPU instances, which is an advantage if you want to train models faster and not risk the instance being killed unexpectedly. And there is still a lot of competition in this space: <a href="https://www.amazon.com">Amazon</a> offers a <code>p2.xlarge</code> <a href="https://aws.amazon.com/ec2/spot/">Spot Instance</a> with a K80 GPU for $0.15-$0.20 an hour, less than half of the corresponding Google Compute Engine K80 GPU instance, although with <a href="https://aws.amazon.com/ec2/spot/details/">a few bidding caveats</a> which I haven&rsquo;t fully explored yet. Competition will drive GPU prices down even further, and training deep learning models will become even easier.</p>
<p>And as the cuDNN chart above shows, things will get <em>very</em> interesting once Volta-based GPUs like the V100 are generally available and the deep learning frameworks support cuDNN 7 by default.</p>
<p><strong>UPDATE 12/17</strong>: <em>As pointed out by <a href="https://news.ycombinator.com/item?id=15941682">dantiberian on Hacker News</a>, Google Compute Engine now supports <a href="https://cloud.google.com/compute/docs/instances/preemptible#preemptible_with_gpu">preemptible GPUs</a>, which was apparently added after this post went live. Preemptible GPUs are exactly half the price of normal GPUs (for both K80s and P100s; $0.22/hr and $0.73/hr respectively), so they&rsquo;re about double the cost efficiency (when factoring in the cost of the base preemptible instance), which would put them squarely ahead of CPUs in all cases. (And since the CPU instances used here were also preemptible, it&rsquo;s apples-to-apples.)</em></p>
<hr>
<p><em>All scripts for running the benchmark are available in <a href="https://github.com/minimaxir/keras-cntk-benchmark/tree/master/v2">this GitHub repo</a>. You can view the R/ggplot2 code used to process the logs and create the visualizations in <a href="http://minimaxir.com/notebooks/benchmark-gpus/">this R Notebook</a>.</em></p>
]]></content:encoded>
    </item>
  </channel>
</rss>
