Module 0 | Introduction to Business Intelligence
Hi all:
Module 0 of this course introduces fundamental concepts of big data and business intelligence. I would highlight the paradigm shift in data science caused by big data as the main takeaway of this unit. This shift involves the "datafication" of the world (many more things are being tracked), richer and more dynamic information (the production of data and information is growing rapidly), and the "N=All" notion (the entire population of data is available, so sampling is not required). I think there may be cases where communities are not yet well represented in the data (so models may be biased), but this might change as IT becomes more pervasive in human life.
I found two articles, MIT's "Big Data Gets Personal" and MGI's "Ten IT-enabled business trends," to be especially interesting, not only for the huge possibilities that business intelligence, big data, and machine learning offer for human development, but also for the ethical questions these technologies pose.
In a comment on one of our blogs, I reflected on the potential of anticipatory systems and how they might affect fundamental rights. Thinking of how risk modeling is currently used in the insurance sector to determine premiums based on predictors such as age, gender, or marital status, I wondered whether anticipatory systems, in the absence of legal civil protections, might cause the segregation of communities based on predicted health or behavior. The possibility of anticipatory systems being used against the presumption of innocence does not seem so remote these days, in a world with so many different ways of understanding the roles of government and citizens. In the face of globalization, policy solutions should be developed at the international level to address these issues, though technology seems to be evolving faster than information technology law.
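To make the insurance analogy concrete, here is a toy sketch of the kind of demographic risk model I have in mind. Everything here is invented for illustration (the predictors, the surcharge multipliers, the base price); the point is only that two otherwise identical applicants can be quoted very different prices based purely on group membership.

```python
def quote_premium(age, is_married, base=500.0):
    """Toy premium model: price = base * risk multipliers.

    The multipliers below are hypothetical, not taken from any real insurer.
    """
    risk = 1.0
    if age < 25:
        risk *= 1.8   # assumed surcharge for young applicants
    if not is_married:
        risk *= 1.2   # assumed surcharge for unmarried applicants
    return base * risk

print(quote_premium(22, False))  # 1080.0 -- young and single: both surcharges apply
print(quote_premium(40, True))   # 500.0  -- base price only
```

The same mechanism, applied to predicted health or behavior rather than age or marital status, is what raises the civil-rights questions discussed above.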
Some of the most recent cases around this question have been summarized on Wired:
Sidney Fussell, An Algorithm That ‘Predicts’ Criminality Based on a Face Sparks a Furor, 06-24-2020
I also see a risk in the reinforcement patterns that recommender and anticipatory systems may encourage. To use terminology from heuristics and evolutionary computing, these systems seem to incentivize the exploitation of known behavior over the exploration of new knowledge (e.g., new experiences or perspectives on a question). The notion of augmented social reality also seems oriented toward exploiting (social/cultural) patterns that are already known, rather than helping to explore new ones. Classifying consumers based on a set of features might reinforce divided communities, as opposed to letting interaction and exchange of knowledge happen between different communities. While exploitation of knowledge enables the optimization of a solution, exploration allows the search for new solutions as well as the diversification of knowledge. I think one should not be hyper-developed at the cost of the other.
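The exploration/exploitation trade-off above can be sketched with a standard epsilon-greedy bandit, a minimal stand-in for a recommender choosing among content categories. The scenario is invented (three categories with made-up payout rates); the takeaway is that with epsilon = 0 (pure exploitation) the system can lock onto the first option that ever paid off, while a small exploration rate lets it keep discovering better alternatives.

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Explore a random option with probability epsilon, else exploit the best-known one."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))          # explore
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

# Hypothetical "true" engagement rates for three content categories.
true_rates = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]  # the system's running estimate per category
counts = [0, 0, 0]           # how often each category was recommended

random.seed(42)
for _ in range(1000):
    arm = epsilon_greedy(estimates, epsilon=0.1)
    reward = 1 if random.random() < true_rates[arm] else 0
    counts[arm] += 1
    # Incremental mean update of the estimate for the chosen category.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
```

With epsilon = 0, the loop above never revises its view once one category looks good; the 10% exploration budget is what keeps the other categories in play.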

Hi Fernando,
Great post! I agree with your point on reinforcement patterns. Anticipation systems could create self-fulfilling prophecies by rejecting the outcomes that don't fit into the parameters of the system. At the same time I feel like humans often tend to act this way as well. Somehow in the case of a "soulless computer" the ethical problems seem much more pronounced.
Hi Fernando,
I really enjoyed your post and I think you bring up some great points. I am also concerned about anticipatory systems and predictive policing, which I discussed in my blog post. I am happy that the Coalition for Critical Technology and other similar groups advocate for considering the social good when touting developments such as predicting criminality. I'm also pleased to see that many call physiognomy a pseudoscience.
I think that as the data gathered on individuals continues to grow, policies and laws need to be updated to govern the ways in which our data is used and to ensure consent. In this regard, I feel that Europe, with the GDPR, is much further ahead than the US. I think it is important that we remain vigilant and, as we develop new technologies, continue to ask what the cost will be.
Hi Fernando,
Thanks for the great insight. We share some ideas and concerns about anticipatory systems. I think a course such as this needs to present the good along with the bad. While a number of this week's readings focused on the benefits of such systems, there wasn't much discussion of their dangers. I think studying both is the only real way to make beneficial progress.
I enjoyed your analysis of how these systems could continue to contribute to divided communities. This ties loosely into the philosophical concept of the self-fulfilling prophecy.
When I did debate, we used this argument many times.