Hello and welcome to this course in which we're talking about Python for network-level active defense. In this video, we're going to be talking again about burn-in. We introduced burn-in in the previous video, where we discussed how it's a technique for making a decoy account or system look more believable. By performing activity on the system, we can create system-level or network-level artifacts that help to essentially sell the decoy to an attacker.

In this video, we're going to talk about implementing burn-in using Python code, focusing at the network level. We're making the assumption here that an attacker has the ability to monitor an organization's network traffic, whether inside or outside of the network. We're creating fake network traffic from a certain system or user account to make it look like an actively used system or account on the network. In practice, this is code that we'd deploy on a decoy system within an organization's network where we want the attacker to spend and waste their time.

Here on the screen, we have the Python code that we're going to be using. Starting at the bottom of the screen, we see that we're going to be calling a function called browsing_session, which is defined up here. This browsing_session function is designed to emulate a user who might be browsing the Internet. We're going to use a file called sites.txt to store the URLs that we want the user to browse to. In this case, most of our URLs are pretty simple: Google, Facebook, Twitter, and YouTube. But we could tune this list further to make it look more plausible to an attacker, or to tailor the fake account to a particular user persona. For example, if we were pretending to be a developer, we'd have sites like Stack Overflow, GitHub, etc., listed here as well to emulate performing research or looking at code samples on GitHub.
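As a sketch, sites.txt would simply contain one URL per line; these exact entries are an assumption based on the sites named above, not necessarily the course's file:

```text
https://www.google.com
https://www.facebook.com
https://twitter.com
https://www.youtube.com
```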
From this list of sites, we're going to read the available URLs into a variable called sites, and we'll be accessing those sites randomly throughout our browsing session. In our browsing_session function here, we're going to emulate our user's browsing session. What we're going to do is have a random probability that the user is going to continue browsing, whether that's visiting the same site again, visiting a different site, and so on. In this case, we're saying that at any given point our click-through value is 0.5, so there's a 50 percent probability at each iteration that we're going to continue browsing the Internet. Obviously, we can tune this, so maybe we have intense user sessions that are longer-lasting, or maybe someone who just visits a single webpage periodically.

Inside of our loop here, we're going to use a get_url function to retrieve one of the URLs that are listed in our sites.txt file and stored in our sites variable. We're using random.randint to create an index between zero and the length of sites minus 1, which is the range of possible indices into our sites list. We'll grab whichever site is at that particular location in the list and then use .strip() to clean off any trailing whitespace that might still be there from the text file.

With that URL, we're going to call make_request, which takes advantage of the requests library. The requests library makes it very easy for us to perform HTTP GET requests; we can just say requests.get(url). Normally, this requests.get function will return the HTTP response. However, we don't really care about it in this particular case. We could modify this code so that we take the response, harvest some internal link from it, and then visit that internal link as the next URL. But we're not going to do that here; we're just ignoring the response entirely, which we can do by assigning it to an underscore.
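Based on the description above, the helper functions might look something like the following. This is a sketch, not the course's exact code; in particular, the load_sites name is my own addition:

```python
import random

SITES_FILE = "sites.txt"  # one URL per line

def load_sites(path=SITES_FILE):
    # Read every URL in the file into a list; each entry still carries
    # its trailing newline at this point.
    with open(path) as f:
        return f.readlines()

def get_url(sites):
    # random.randint(0, len(sites) - 1) covers every valid index into the list.
    index = random.randint(0, len(sites) - 1)
    # .strip() cleans off the trailing whitespace left over from the text file.
    return sites[index].strip()

def make_request(url):
    # The requests library makes HTTP GET requests trivial; imported here so
    # the rest of the sketch runs even without the third-party dependency.
    import requests
    # We ignore the response entirely by assigning it to an underscore.
    _ = requests.get(url)
```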
Then we'll return from make_request once the request is made. Once we've made a request, we're going to sleep for some random amount of time, based on a variable called sleep_time. This accounts for the fact that a user needs to actually spend some time on a particular webpage if they're actually reading it. In this case, we set sleep_time to one, but we could set it to something like five or ten minutes to emulate how long a user might spend on a page before visiting the next one. By keeping that duration random, a very small value looks like the user clicked on a page, realized it was the wrong thing, and clicked back to look at something else, versus a larger value that looks like they spent some time actually reading the page. Then we'll keep iterating until we hit a case where random.random() is no longer less than our click-through percentage.

The goal here is just to create network traffic that looks plausible for a user browsing the Internet. One of the advantages we have is that HTTPS traffic is encrypted, meaning that an attacker can only see a little bit of information about the sites we're visiting. For example, they might be able to tell that we're visiting Google servers, but they won't necessarily be able to tell the difference between the Google homepage and a particular set of search results. That's helpful because it means we can keep visiting even a small set of pages and still have a session that looks fairly believable, like an actual browsing session. In this particular case, by using requests.get we're mainly creating network-level artifacts: because we're not using a real browser, we're not going to be creating cookies or other artifacts that might show up in Chrome or Firefox. But we could easily modify this to call Firefox from Python if we wanted to consistently use Firefox in our requests and build up those system-level artifacts.
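Putting the loop together, a hedged sketch of browsing_session might look like this. The visit parameter is my own addition so the loop can be exercised without network access; the course's version presumably calls make_request directly:

```python
import random
import time

CLICK_THROUGH = 0.5  # chance of continuing to another page at each iteration
SLEEP_TIME = 1       # upper bound, in seconds, on time spent "reading" a page

def browsing_session(sites, visit):
    """Emulate one user browsing session over the given URL list."""
    pages_visited = 0
    while True:
        # Pick a random site the same way get_url does with random.randint,
        # stripping the trailing whitespace left over from the text file.
        url = sites[random.randint(0, len(sites) - 1)].strip()
        visit(url)
        pages_visited += 1
        # Dwell for a random slice of SLEEP_TIME: a tiny value reads like an
        # immediate back-click, a larger one like actually reading the page.
        time.sleep(random.random() * SLEEP_TIME)
        # Stop once the draw is no longer under the click-through probability.
        if random.random() >= CLICK_THROUGH:
            break
    return pages_visited
```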
Additionally, we can make other tweaks to this code that would potentially make the pretext more plausible. Maybe we add a weighting to each of these sites to control the probability that it gets selected. Or maybe a stickiness value that says: if I'm on Google, I should choose Google again with 75 percent probability and have a 25 percent probability of switching to a different site. That emulates clicking through Google search results versus getting bored and going to watch a YouTube video. But in general, the goal here is just to make a particular computer look used, with a reasonably plausible history of network traffic.

Let's see what this would look like if we were monitoring the network traffic for this account. I'll minimize this. On the left here, we're going to run the code. On the right, we have Wireshark, which, if you're not familiar, is a great tool for network traffic capture and analysis. I'm going to start a capture on the WiFi adapter here, which will allow us to start seeing traffic. Obviously, there's some traffic associated with the computer itself. But if I run python BurnIn.py, the traffic created now is associated with our particular session. I'll run it again, and more traffic will be produced. Over time, we can build up a network traffic profile that looks like a legitimate user account.

Currently, as we look at our code, we only call browsing_session once per run. However, we could easily modify the code to wrap it in a while True loop that runs a browsing session and then sleeps, say, for 60 seconds between sessions, to emulate someone who steps away from the computer and then returns to it. By keeping this running constantly, we can periodically create bursts of network traffic that emulate a legitimate user on the system. This is valuable for active defense because it helps sell the authenticity of a decoy computer or profile. Thank you.