Let's examine each step a little more closely, clarifying the roadmap a process or product follows as it becomes more automated and reaches machine learning. Think about some examples from your company or organization as we review each step. Have they gone through any of these steps? How did it work out? Where did the company spend most of its time, and which steps did it skip entirely? Do these phases seem like a constructive framework to think with, or do they seem more like obstacles?

Let's first examine the individual contributor phase. This is a great opportunity to prototype. Almost every product or process starts here. It's cheap, flexible, and great for trying out many ideas, failing fast, and learning from those failures. Skipping this step is risky, because your organization might not want to invest in scaling up the effort. How are you going to convince your boss that you need 100 people, or that you need to build a huge software system, if no one has even attempted the most basic prototyping? Or perhaps your organization does invest in scaling up without first seeing a prototype. Product leads could then make costly, incorrect assumptions that are hard to change later. Lingering in the individual contributor phase can be dangerous too. Imagine that you have one person who's very skilled at their job, and then they leave the company or retire. All that organizational knowledge leaves with them, and that becomes a problem when no one else can perform the process. Also, in this phase you can't scale up the process to meet a sudden increase in demand. One person can't immediately start doing the job of eight people.

So what happens? We move on to the delegation phase. Here we increase the number of employees working on a business process, to hundreds or even millions of people. This allows us to gently ramp up the investment while maintaining most of the flexibility of the previous stage, when we only had one person. So what are the dangers of skipping this step? Well, if you skip this step, you're never forced to formalize the business process and define success. Think about outsourcing as an example of delegation. If you can't explain exactly what needs to happen in this business process to someone several time zones away, you likely can't explain it to a computer; computers have even less context about what you're trying to accomplish than a human does. So this is a great halfway step toward formalizing business processes. Since delegation takes investment, you can also get organizational buy-in for what success looks like. Delegation is also a great opportunity for product learning. Human responses have an inherent diversity. For example, in a call center, each human answers the phone a little bit differently, and they have different customer outcomes. The organization can then review all of that data to understand what's working and what's not.

Finally, great ML systems need humans in the loop. This is your opportunity to identify who you need. At Google, we have many ML systems that generate a lot of value for the organization, for our customers, and for our end users. Almost every single one has a team of people reviewing the algorithms, reviewing their responses, reviewing where they get confused, and doing random sub-samples. If your ML system is important, it should be reviewed by humans. You should think about ML as a way to expand or scale the impact of your people, not as a way of completely removing them. That's a very high expectation for an ML system to meet.
What are the dangers of lingering in this phase too long? Well, you're paying a very high marginal cost to serve each user. When people answer phones, it's expensive. If you have someone sitting in a bank waiting for a customer to walk in and open a mortgage, that's also expensive. And the more voices you have in your organization, the harder automation becomes. This creates a kind of organizational lock-in, because you have too many stakeholders.

So this leads us into the digitization phase, which refers to getting computer systems to perform the mundane or repetitive parts of the process. In general, digitization is a part of automation. We think of automation, economically, as a way to trade up-front investment for a lower marginal cost or run rate. Automation gets us a lot of things. Maybe you're thinking, "Wait, but we don't want to get rid of people." And you're right. But we want to scale their impact. We want to give our users a higher quality of service, and automation is a great way to do that. But because it involves so much up-front investment, it comes with its own risks.

So what are the dangers of skipping this step? Even with a great machine learning algorithm, you'll need all the infrastructure of this step to be able to serve your ML at scale. Remember, machine learning performs some core, almost miraculous task, but it doesn't serve that task in a website or have unit tests of its own. Everything that comes with software, you also need for ML; at most, ML will replace a small piece of your otherwise very large software stack. Also, if you skip this step, you might entangle an IT project, which we might simply call software, with an ML project. If either one of them fails, the whole project fails, and organizationally it's then very tricky to say what really happened. You may find that management is disincentivized from future investment, because it's so hard to untangle and understand what's going on. We always encourage people to do the IT project first, show its own successes and milestones, and then see ML as something extra. What are the dangers of staying here too long? Well, the other members of your industry are collecting data and tuning their offerings and operations from these new insights. This is giving them an advantage and constructing a very positive feedback loop for them.

After digitization, we start to think about big data and analytics. Here we measure everything about internal operations and external users, and we want to make it easy to review, summarize, and deep dive into that data. This is a great opportunity to pause, reiterate your definitions of success, and then tune the software algorithms we described in the previous step. So why is this a good time to redefine success? From the initial individual contributor phase until now, we've been using standard practices to hypothesize what our users want and, wherever possible, applying data. But now that we're in the big data phase, we've automated the core process with digitization, which means we can extract an incredible amount of data. With this new data, we can reassess our original definition of success and determine whether we are actually serving our users correctly.

What are the dangers of skipping this step? Well, if you try to go from software right into ML without ever generating insights manually, you're going to run into some obstacles. First, you won't be able to train your ML algorithm, because your data isn't clean and organized.
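To make "clean and organized" concrete, here's a minimal sketch of the kind of manual review and summarizing this phase should make easy. The file name and column names are hypothetical, just stand-ins for whatever your own operations logs contain.

```python
# A minimal sketch of "review, summarize, and deep dive" on logged operations data.
# Assumes a hypothetical export, call_center_logs.csv, with columns:
#   handle_time_seconds (float), resolved (0/1), agent_id (str)
import pandas as pd
import matplotlib.pyplot as plt

logs = pd.read_csv("call_center_logs.csv")

# Basic cleanliness checks: if these look wrong, the data isn't ready for ML either.
print(logs.isna().mean())    # fraction of missing values per column
print(logs.describe())       # ranges, means, obvious outliers

# Summarize: one candidate definition of success, resolution rate per agent.
per_agent = logs.groupby("agent_id")["resolved"].mean().sort_values()
print(per_agent.tail(10))    # the agents with the best outcomes

# Deep dive: the kind of histogram discussed below.
logs["handle_time_seconds"].hist(bins=50)
plt.xlabel("Handle time (seconds)")
plt.ylabel("Number of calls")
plt.title("Distribution of call handle times")
plt.show()
```

If a simple summary like this is hard to produce, that's usually a sign that the data pipeline, not the model, is the next thing to invest in.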
If you can't make a histogram of your data, your algorithm can't effectively make that histogram either. And more or less, that's what your algorithm is doing: it's making many, many plots and performing regression on them. If you can't make that chart, neither can your algorithm. Second, if you skip this step, you can't build a measure of success, so it would be difficult to tell whether your ML algorithm is actually improving things.

And what happens if we stay here too long? It's less risky than the other steps. For example, Google Search stayed here for years. Many people think of Google Search as the pinnacle of machine learning and natural language processing algorithms. Maybe it is now, but it wasn't for many years; it was actually a hand-tuned algorithm. You can get far without handing everything over to ML. But of course, without ML you're limiting the complexity of the problems you can solve and the speed at which you can solve them.

Finally, let's talk about the last stage, the machine learning phase. Here we'll complete that feedback loop we talked about before: we're going to automate each of those blocks between measuring success and tuning the software algorithm. Eventually this will outpace humans' ability to handle the number of inputs and the corner cases involved in your real-world use case. Generally at Google, we expect about a 10% increase in key performance indicators in Google products, on top of human hand-tuning, just from ML's ability to handle each of those little details so well. And of course, we're going to escape the limitations of human cognition in solving our business problem. We're going to get faster answers and more nuanced treatment of details, just from one brain learning from billions of interactions every day. For example, Google Search can see how one user is searching for new terms and apply that to searches from another user, as it learns about the content from the world around it. In your organization, this will happen too, as you have one ML algorithm that's learning from many disparate interactions around it.
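As a rough illustration of what automating that feedback loop can look like, here's a hedged sketch of a periodic retraining cycle. The helpers it imports (`load_recent_interactions`, `train_model`, `evaluate`, `deploy`) are hypothetical placeholders for whatever your own pipeline provides, not a real API.

```python
# A sketch of the loop the ML phase automates: measure success, retune the
# model on fresh interactions, and only ship the result if it improves.
# All four helpers below are hypothetical placeholders for your own pipeline.
from my_pipeline import load_recent_interactions, train_model, evaluate, deploy

def retraining_cycle(current_metric: float) -> float:
    data = load_recent_interactions(days=7)   # new user interactions since the last cycle
    candidate = train_model(data)             # retune on the latest behavior
    new_metric = evaluate(candidate, data)    # the agreed-upon definition of success

    # Keep the human-defined bar: only replace the running system when the
    # candidate is measurably better, and record the decision for human review.
    if new_metric > current_metric:
        deploy(candidate)
        return new_metric
    return current_metric
```

The point isn't the specific code. It's that each block a person used to own, collecting data, measuring success, and retuning, becomes a step the machine runs on every cycle, with humans reviewing the results rather than producing them.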