The social costs of Big Data. One of the main concerns about Big Data usage centres on its predictive power: the use of Big Data to predict people's behaviour, desires and needs. On the one hand, prediction is used to improve product offerings. For example, we may be advised to buy a book that we didn't know existed, or to listen to a song that we had never heard before. Search engine providers are developing capabilities to offer information to users before they are even aware that they wanted it. In these cases, our preferences are analyzed in order to anticipate our desires and ultimately sell more products. But how far should this be taken? One famous case involves a shop that sent coupons for baby products to a pregnant teenager. By analyzing her purchase history, the shop predicted that she was pregnant before she had even told her own father; he found out about his daughter's pregnancy from the advertisements sent to her. This case raises a serious privacy concern: the increasing use of algorithms for prediction increases the risk that private information which was never willingly disclosed is nevertheless extracted. Another major concern is that Big Data analytics may be intentionally used to reduce a person's range of future actions. This is known as pre-emption, and it can have far-reaching consequences. For example, an employer could use employee data to ask: is this someone who will stay long term, or the type of person who likes to change jobs frequently? What kind of employee is this? If the analysis classifies me as someone who will soon leave for another firm, the employer might conclude that I should not receive the training and educational services offered to long-term employees. Pre-emption may also have huge implications for the way we judge individuals within society.
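The employee example can be made concrete with a small sketch. Everything here is hypothetical: the data, the ten-year tenure scale and the 0.7 threshold are invented for illustration, not taken from any real HR system. The point is how a crude score can drive a pre-emptive decision about a person.

```python
# Toy sketch (hypothetical data and thresholds): score an employee's
# likelihood of leaving early from past job durations, then pre-emptively
# gate access to training -- exactly the kind of decision discussed above.

def churn_score(past_job_durations_years):
    """Naive 'likely to leave' score in [0, 1]:
    shorter average past tenure -> higher score."""
    if not past_job_durations_years:
        return 0.5  # no history: undecided
    avg = sum(past_job_durations_years) / len(past_job_durations_years)
    return max(0.0, min(1.0, 1.0 - avg / 10.0))  # avg of 10+ years -> 0.0

def offer_training(past_job_durations_years, threshold=0.7):
    """Pre-emptive policy: withhold costly training above the threshold."""
    return churn_score(past_job_durations_years) < threshold

print(offer_training([8, 6, 9]))  # long tenures -> True
print(offer_training([1, 1, 2]))  # frequent job changes -> False
```

Note what the sketch makes visible: the second employee is denied a benefit not for anything they did, but for what the score predicts they might do.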
At present, our legal system imposes penalties or punishments on wrongdoers only after they have committed a crime. Big Data could shift this attention to preventing wrongs before they occur. Individuals who are classified as high-risk for a certain crime might, for example, find themselves barred from travel. So-called predictive policing is already being used to predict when and where crime is most likely to occur, and it may not be long before systems claim to predict who exactly might commit a crime. Prediction and pre-emption lead to a series of ethical and, in the end, very practical questions about how we can and should work with Big Data and analytics. For example: what kind of prediction is ethically acceptable? When is a prediction reliable or strong enough to justify consequences? What restrictions on individuals can be justified on the basis of predictions? Which information should be allowed as a basis for predictions? And, more broadly, do we as a society desire this kind of change? If everything is described algorithmically, the world would change in ways we cannot foresee. Do we really want our social life, our business life and our private life organized by algorithms, with more and more things known well in advance? A final thought to bear in mind is that predictions or pre-emptions can be based on data that a user never willingly disclosed. Consider an example: we crawl many websites from specific sources, and by linking specific pieces of information inside the texts we can gain additional information. The right term is implicit information: information that is not stated explicitly, but that we can find implicitly by linking documents from different sources together.
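The linking step described above can be sketched in a few lines. The sources, the shared handle and the records are all invented for illustration; a real crawler would join on much noisier signals, but the principle of deriving implicit information from a join is the same.

```python
# Minimal sketch (invented data): link documents crawled from two sources
# by a shared identifier to derive implicit information that neither
# source states on its own.

business_profiles = [  # e.g. crawled from a career portal
    {"handle": "jd_83", "employer": "Acme Corp", "role": "analyst"},
]
blog_posts = [  # e.g. crawled from personal blogs
    {"handle": "jd_83", "text": "Back from my third marathon this year!"},
]

def link_by_handle(source_a, source_b):
    """Join two crawled collections on a shared handle."""
    index = {rec["handle"]: rec for rec in source_a}
    return [{**index[rec["handle"]], **rec}
            for rec in source_b if rec["handle"] in index]

linked = link_by_handle(business_profiles, blog_posts)
# Neither source alone says "an analyst at Acme Corp who runs marathons";
# only the combination does -- that is the implicit information.
print(linked[0]["employer"], "-", linked[0]["text"])
```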
So, for example, take portals like LinkedIn, which cover the business domain, and personal websites such as blogs, which cover the private domain of a specific user. Even if the person's name does not appear in these texts, we can gain information about them: we can learn their behaviour and find specific patterns that describe them without ever knowing who they are. It is a kind of fingerprint that we can extract from large amounts of data. And what kind of information could this fingerprint reveal? Socio-demographic variables such as age and gender, the personality of the person, or their native language can all be inferred from the text. Even physical traits of a person can sometimes be inferred from text, for example whether someone is left- or right-handed, and many other things besides. In conclusion, Big Data and the power of analytics open many new opportunities but can also lead to so-called social costs. Two large concerns are: one, prediction, when people's preferences are predicted to an extent that puts privacy at risk; two, pre-emption, when people's future actions are restricted based on predictions about who they are and what they are capable of doing. The constantly growing quantity of data will make this type of prediction and pre-emption easier. By linking information from different sources, it is possible to infer a lot about a person without necessarily knowing who they are. All this raises the question of how working with Big Data can happen in an ethically acceptable way.
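The "fingerprint" idea above can be illustrated with a deliberately crude stylometric sketch: a profile of character trigram frequencies, compared with cosine similarity. Real author profiling uses far richer features (vocabulary, syntax, errors typical of a native language); the texts here are invented and the method is only a toy.

```python
# Hedged sketch: a crude textual "fingerprint" from character trigram
# frequencies, plus cosine similarity to compare two texts.
from collections import Counter
import math

def fingerprint(text, n=3):
    """Count overlapping character n-grams of the lowercased text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(fp1, fp2):
    """Cosine similarity between two n-gram frequency profiles."""
    dot = sum(fp1[g] * fp2[g] for g in set(fp1) & set(fp2))
    norm = (math.sqrt(sum(v * v for v in fp1.values()))
            * math.sqrt(sum(v * v for v in fp2.values())))
    return dot / norm if norm else 0.0

known = fingerprint("I would definitely recommend this approach, definitely.")
anonymous = fingerprint("I would definitely try it; definitely worth a look.")
unrelated = fingerprint("Quarterly revenue grew by four percent this year.")

# The anonymous text matches the known author's profile better than the
# unrelated one does -- a fingerprint match without knowing any name.
print(cosine(known, anonymous) > cosine(known, unrelated))
```

The design choice matters for the privacy argument: nothing in the profile is a name or an explicit identifier, yet it can still link texts back to one person.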