Let's dive into the Azure compute options. If we look at this interface here, we have Compute instances, we have Compute clusters, and we also have Inference clusters. What's the difference? Well, the Compute instance here is where you would spin up something to provision a Jupyter Notebook. We can go ahead and do that. If I select "New" and go through the options, starting with the name, let's just say this is Jupyter-demo. You can see here that I can select between a CPU or a GPU depending on what problem I'm solving. I can also toggle between the different machine sizes. Additionally, I'm able to SSH into it if I need to. I'm going to go ahead and say create. This usually takes a few minutes.

Now, let's also look at Compute clusters. If I go through here, I can say New. Same thing. I can name the Compute cluster demo-cluster-2, and again I can toggle between a GPU or a CPU. If I'm going to be doing AutoML, for example, and I'm going to be doing deep learning, then a GPU could really make sense for that particular workload. It's also important to be aware of the virtual machine priority. If I click on this, you can see that dedicated means it's always there, but a lot of times for experiments, I think low priority is the way to go because it will actually lower the cost. As well, I can toggle between the sizes here, and then I can select the minimum and maximum number of nodes. This is important to call out: if you don't want to be paying for something at night, for example, when you don't have a job running, you should leave the minimum at zero, and then you can tell it to burst up to a certain number of nodes. Let's say I want to burst up to four, and then I can go through and create that. What's nice about this is that it will sit idle until I need it, and then when a job comes in, it'll scale up and be ready to go.

Now let's take a look at the third type. The third type is a Kubernetes-based inference cluster for predicting at scale.
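The burstable cluster settings described above can also be expressed in code. Here's a minimal sketch assuming the azureml-core Python SDK; the VM size and cluster name are illustrative, and the actual provisioning call is shown commented out since it requires a live Azure workspace and credentials.

```python
# Sketch of the compute-cluster settings from the demo (an assumption:
# azureml-core SDK, illustrative VM size "STANDARD_D2_V2").

def cluster_settings(min_nodes=0, max_nodes=4, priority="lowpriority"):
    """Build settings for a burstable, cost-saving training cluster.

    min_nodes=0 lets the cluster scale down to zero nodes when idle,
    so you pay nothing overnight; max_nodes caps how far it can burst.
    'lowpriority' VMs lower the cost versus 'dedicated', at the risk
    of jobs being preempted.
    """
    return {
        "vm_size": "STANDARD_D2_V2",  # CPU SKU; pick a GPU SKU for deep learning
        "vm_priority": priority,
        "min_nodes": min_nodes,
        "max_nodes": max_nodes,
    }

settings = cluster_settings()

# With a live workspace `ws`, these settings map onto the SDK like so:
# from azureml.core.compute import AmlCompute, ComputeTarget
# config = AmlCompute.provisioning_configuration(**settings)
# cluster = ComputeTarget.create(ws, "demo-cluster-2", config)
# cluster.wait_for_completion(show_output=True)

print(settings)
```

The key cost lever is the min/max node pair: leaving the minimum at zero means the cluster only bills while a job is actually running.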
If I go through here and create this, you can see that it uses Kubernetes to do inference, or prediction, at scale. When you're creating an inference cluster, it's very similar to the other types, Compute instance and Compute cluster. You type in a name, we'll call this kube-cluster, and we'll select a region as well, Central US. Same thing: I can select different virtual machine sizes, we can leave the default number of nodes here, and I can go ahead and say create.

Finally, the last type of compute would be a specialized, attached cluster. Let's say you're using Databricks, for example, which is a managed Spark environment. I could hook that up as attached compute. In general, there's the Compute instance, which is for Jupyter Notebooks; there are Compute clusters, which are for bursting and doing training; and then there are Inference clusters, which are an optimal environment for doing predictions at scale using hosted Kubernetes.
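The inference-cluster creation above can be sketched the same way. This assumes the azureml-core SDK; the node count of 3 and the VM size are illustrative assumptions, and the AKS provisioning call itself is commented out because it needs a live workspace.

```python
# Sketch of the Kubernetes (AKS) inference-cluster settings from the demo
# (assumptions: azureml-core SDK, default node count 3, illustrative VM size).

def inference_cluster_settings(location="centralus", agent_count=3):
    """Settings for a hosted-Kubernetes cluster used for prediction at scale.

    agent_count is the number of nodes in the cluster; location is the
    Azure region selected in the portal (Central US in the demo).
    """
    return {
        "location": location,
        "agent_count": agent_count,
        "vm_size": "Standard_D3_v2",  # illustrative; pick per workload
    }

settings = inference_cluster_settings()

# With a live workspace `ws`, this maps onto the SDK like so:
# from azureml.core.compute import AksCompute, ComputeTarget
# config = AksCompute.provisioning_configuration(**settings)
# aks = ComputeTarget.create(ws, "kube-cluster", config)
# aks.wait_for_completion(show_output=True)

print(settings)
```

Attached compute such as a Databricks workspace follows a similar pattern in the SDK, except you attach an existing resource rather than provisioning a new one.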