[MUSIC] Welcome back. In this course, we're learning how to use TensorFlow to develop deep learning models, mostly on relatively small, academic data sets. But of course, deep learning has also had enormous success in industry applications. In this video, I'm joined by Doug Kelly, a data scientist from Google Cloud, to talk about how Google has industrialized deep learning. Doug, thanks for joining me today.
>> Thanks for having me.
>> Perhaps you'd like to start by telling learners a bit about your background.
>> Absolutely. My name is Doug Kelly and I work as a data scientist on Google Cloud, where I primarily work on decision support analytics projects, building machine learning systems on top of TensorFlow Extended to improve the resolution speed and quality of technical support cases for GCP customers. This has involved numerous deep learning projects, including predicting the resolution time of a case to power interventions, and predicting the product, feature, and issue tags of support cases as they come in. I've also combined deep learning with interpretability techniques like integrated gradients to decompose complex business metrics into key drivers, in order to prioritize operational improvements. Prior to Google, I oversaw data science content strategy at Coursera, so it's a real pleasure to be back here and joining you in front of the camera after spending so many years behind it. Before that, I worked in a number of positions in the finance and utility space as well, where I got to experiment with neural networks back in the day, including text clustering with doc2vec and RNNs for time series forecasting.
>> I think sometimes we have to remind ourselves that this current wave of deep learning research is still relatively new. Many people will look back to the AlexNet success in the ImageNet competition back in 2012 as sparking a surge of interest in deep learning. But of course, that really wasn't that long ago.
How has this rise in deep learning impacted your career?
>> I would actually credit my intuition for neural networks to auditing a machine learning course on Coursera back in 2014. I wrote my first neural network in Theano in graduate school, and to be honest, I didn't like neural networks at first. I found them incredibly hard to debug, tune, and understand. And as I was primarily working with structured enterprise data, with just about any other machine learning approach out there, like boosted trees in scikit-learn, I was achieving much better results out of the box. I was in the depths of despair during a class project when a classmate of mine introduced me to their ace in the hole, and that was Keras. From that point on I was Keras-first, and this was just about the time that TensorFlow was coming out, so before TensorFlow even. What Keras really did was open up a whole new world of working with text and sequence data, which abounds in the enterprise and which I had never previously worked with. For learners interested in learning more about the history of deep learning over the past ten years, I would highly recommend they check out the Heroes of Deep Learning series from deeplearning.ai, where they can hear about many of these developments from the researchers who pushed this revolution forward themselves. Many researchers would likely highlight events like AlexNet in 2012, and perhaps also BERT in 2018 for kicking off the transformer revolution in natural language processing. I would suggest that there's also a complementary timeline on the applied side. From my perspective, there was a first wave with libraries like Theano and scikit-learn, released in 2007 and 2010, respectively, and a second wave kicking off in 2015 with libraries like Chainer, Keras, and TensorFlow.
And perhaps now a third wave kicking off in 2017 with the release of PyTorch, and the convergence between these two frameworks, TensorFlow and PyTorch, over the past couple of years. On the education side, many learners were brought into the field starting with a first wave in 2012 with the rise of MOOCs, when Coursera and Udacity launched. There was perhaps a second wave kicking off in 2016, with fast.ai taking an innovative approach to teaching machine learning, and with Kaggle evolving from just a competition platform into a real education resource, reaching over 1 million learners as far back as 2017. From 2017 to 2019, there was also a focus on end-to-end machine learning frameworks, shifting the emphasis from research to production. So you have frameworks like TFX from Google, Michelangelo from Uber, and many other frameworks designed to accelerate the deployment of machine learning more broadly in industry.
>> So Google, of course, is well known as one of the leaders in AI research and products. What can you tell us about how machine learning is done at Google?
>> I believe Google is one of the few companies in the world that is close to achieving industrial-scale machine learning. What I mean by industrial-scale machine learning is developing standardized machine learning system blocks and deployment processes so that you reduce the marginal development time and cost to deploy machine learning solutions by just about any practitioner. So you have hundreds of practitioners around the world better able to integrate machine learning into their applications. What makes Google so unique is its corporate strategy and commitment to becoming an AI-first company. Alphabet's corporate structure draws inspiration from many previous innovative research institutions.
It combines researchers and practitioners closely together, working on a number of problems and taking risks across a wide range of different areas. So you have thousands of researchers in Google AI producing cutting-edge research in machine learning, along with complementary research in compute, networking, and hardware. To further accelerate transitioning research into products, there are thousands of software engineers at Google working closely with researchers to integrate this research into products, nine of which now reach over a billion users. In addition, they're also working on frameworks like TensorFlow and TFX to accelerate the deployment of machine learning across the company. Supporting this translation from research to products, you also have thousands of data scientists, UX researchers, and product and program managers working behind the scenes to integrate machine learning into new and existing products and to measure its impact. I would also love to give a shout-out to the community: there's a very vibrant community outside of Google, thousands of contributors and partners making contributions to Google's open source products like TensorFlow. So you have this incredible ecosystem of contributions flowing back and forth through the company as well. Ultimately, it's a fascinating ecosystem to be just a small part of, and an incredible driver of machine learning research and application globally.
>> And finally, I'd like to ask you, what are some of the applications where deep learning has had a big impact at Google?
>> Yeah, the best applications in my opinion have been those that really blend in behind the scenes, into products, and augment users. Whether learners know it or not, deep learning is now working behind the scenes to serve you more relevant search results, send you more relevant ads, and help rank videos for you on YouTube.
It helps understand and answer your questions as part of the Google Assistant, and in fact it helps billions of users compose emails every day. Since joining Google, one thing that has stood out to me is a shift in narrative. Even just a few years ago, there was this narrative that deep learning is only good on unstructured data like text and video, and that if you have structured data, you're better off sticking to more traditional approaches. I perceive that narrative is beginning to change. I've seen deep learning start to make inroads into many products and services that previously utilized more traditional machine learning approaches. For example, YouTube has started to incorporate DNNs directly into some of its mixed and hybrid systems for candidate recommendation in video serving. I also attended the Kaggle Days conference this year, where I was on hand to witness Google Cloud's AutoML solution finish a close second behind the top Kagglers in the world on tabular data. So in summary, what I would leave learners with is that I have certainly perceived deep learning moving beyond being just another tool in the ML practitioner's toolkit, into perhaps something more general: a different way for software engineers to build intelligent applications in the future.
>> Doug, it's been great to chat. Thank you for joining me today.
>> Thank you. [MUSIC]