Data Ethics Research Project Presentation
The Final Paper (Six Pages minimum, Due May 5th) is a formal writing assignment, so please plan to draft and revise it, and to proofread for structure, grammar, and clarity. You should choose a topic that’s of particular interest to you: this may be one we’ve covered in class, or you’re free to choose something we haven’t discussed (as long as it pertains to issues raised by the collection and use of data in society).
The basic model for this paper is a persuasive or argumentative essay centered on an ethical problem or issue relevant to Data Ethics. Once you have a general topic in mind, you may begin to formulate a thesis: the main point or position that you’ll attempt to prove or support with the essay as a whole. The thesis should be clearly stated, narrowly focused, and as strong as you can reasonably support in a paper of this length.
- Weak thesis: easy to support but less interesting, and often vague.
  - Examples: “Ultimately, everybody has her own view on this matter,” or, “Utilitarians and Kantians would reach different conclusions about torture.”
- Strong thesis: harder to support (it requires arguments), but it makes a specific claim about the issues and theories involved.
  - Examples: “This case shows that the permissibility of torturing innocent individuals to gain useful information represents a weakness of Utilitarianism as a theory, and is better understood through Kantian principles,” or, “The cold disregard for the special value of personal relationships in deciding whether or not to break a law to aid a friend demonstrates an important similarity between Utilitarianism and Social Contract theory.”
The Research Project Presentation (Due May 1st) is meant to be a summary/presentation of your final paper topic, shared with the rest of the class via Moodle on a new forum suitably called “Research Project Presentations.” This forum differs slightly from the others in that it has a blog-type format, which allows more flexibility in the styles of posts. You may post a short written summary, or you may choose to augment it with images, links, video, or a PowerPoint: anything you think might help convey your topic and the conclusions you’ve reached (the thesis you defend in your paper). Each student should make a separate post to fulfill this requirement (click on “Add a new topic”). Please make it a summary; don’t simply attempt to upload your paper (as interesting as it may be!).
Some topics we discussed:
Utilitarianism, Kantian Deontology, Virtue Ethics, Social Contract Theory
Hacktivism:
Please read the excellent overview of hacktivism, “Is Hacktivism the New Civil Disobedience?” by Candice Delmas (Readings and Resources folder).
For a creative example of resistance, here are some artists protesting facial recognition surveillance in London:
https://www.bbc.com/news/av/uk-51913870/facial-recognition-artists-trying-to-fool-cameras
There is also a suggestion that we can shuffle our data profiles by planting false digital footprints.
Artificial Intelligence:
“Ethics of Artificial Intelligence,” Nature, Vol. 521, May 28, 2015. (Readings and Resources folder)
“Robots at War and at Home,” by Shannon Vallor (Readings and Resources folder)
“Artificial Intelligence: What’s Human Rights Got To Do With It?” Christiaan van Veen, Data & Society Research Institute, May 14, 2018.
“Why Asimov’s Three Laws of Robotics Can’t Protect Us,” George Dvorsky, Gizmodo, March 28, 2014.
https://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410
Further readings that might be of interest:
“The Real Cyborgs,” Arthur House, The Telegraph, Oct. 20, 2014. For those interested in “Transhumanism” or “Post-Humanism,” and the integration of tech with the human body:
https://s.telegraph.co.uk/graphics/projects/the-future-is-android/
“The AI Revolution: The Road to Superintelligence” and “The AI Revolution: Our Immortality or Extinction,” (Parts I & II) Tim Urban, Wait But Why, January 22 & 27, 2015.
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
“An AI Comes Out: Speculative fiction about the future of AI Systems,” Bex Hong Hurwitz and A.B. Ducao, Data & Society Research Institute, Aug. 30, 2019.
https://points.datasociety.net/an-ai-comes-out-d0233ba0ae4d
This week in Data Ethics: “V” is for Veracity. We’ll be doing some readings on the ways digital platforms impact the public sphere, paying special attention to false speech online (I prefer this term to “fake news” because it’s broader and hasn’t been as abused).
1. “Why Can’t We Agree on What’s True Anymore?” by William Davies.
https://www.theguardian.com/media/2019/sep/19/why-cant-we-agree-on-whats-true-anymore
2. “Media in the Age of Algorithms,” by Tim O’Reilly, for a more optimistic account of the ways algorithms can help correct for the prevalence of false speech.
https://wtfeconomy.com/media-in-the-age-of-algorithms-63e80b9b0a73
3. “How Russian Trolls Used Meme Warfare to Divide America,” by Nicholas Thompson and Issie Lapowsky:
https://www.wired.com/story/russia-ira-propaganda-senate-report/
4. “The Great Hack: the film that goes behind the scenes of the Facebook data scandal,” by Carole Cadwalladr. This article discusses the findings of the Netflix documentary “The Great Hack,” which Lilly mentioned in class.
Examples of other students’ presentations:
Example 1:
My paper will argue that it is legitimate to apply social contract principles to analyse the relationship of the FAANG companies to their customers and users. It is fair to analyse the FAANG companies under social contract theory because the government has not reined in the growth of their power by either regulating them or breaking them up; only recently have there been discussions in DC about whether to break up the FAANG companies to reduce their power (Lohr, 2019). When the principles of Social Contract Theory are applied to the FAANG companies, they fail in their obligations under the social contract.
I will address how they fail in these obligations: there is a power imbalance, a lack of consent, and no real ability for users and customers to opt out of the services FAANG companies provide. Social media and search engines offer a service for “free,” but at the sacrifice of the user’s privacy. Is this a social contract? It would be if there were a genuine ability to opt out; however, this is not realistic. To successfully opt out of all digital data collection, one could not participate in society (e.g., you can’t get a job or have a credit card). If not consenting means you get harmed, then this is not a social contract.
The FAANG companies also fail in their obligations under social contract theory because they fail to provide safety/security; this failure shows up in manipulation and in hacks. In the article “Privacy Under Surveillance Capitalism,” Silverman touches on how users and customers can be manipulated: “These variations in privacy may lead anyone—from advertisers to police officers—to manipulate people. In short, they know more than you. The process of automation on a vast scale leads to thoughts of what mass-scale coercion, enabled by this flow of data, might look like. Not all forms of suasion are equal” (Silverman, 157). Hacks can lead to stolen credit cards, stolen Social Security numbers, and identity theft.
The final failure of their obligations under social contract theory that I will touch on is the unequal treatment of “citizens.” Sophisticated or affluent users are better able to protect their data: “Privacy itself becomes a boutique good, affordable to those who know how to navigate this tangled landscape of best practices, firmware updates, threat assessments, cryptographic keybases, and virtual private networks” (Silverman, 160). And premium tiers receive favoured treatment.
Example 2:
My paper will look into two cases dealing with the YouTube algorithm and censorship.
The first pertains to YouTube Kids. YouTube Kids is a popular platform for kids to watch video clips of their favorite cartoons and TV shows. By design, it is supposed to contain only child-friendly content. However, some people have found ways to slip past the filters. Disguised as cartoons, inappropriate videos make their way onto the platform, often depicting popular cartoon characters, such as Spiderman or Elsa, in obscene or violent circumstances. This article describes examples of some of the videos being shown to children. These are not appropriate for children, but because YouTube Kids is overseen by an algorithm rather than humans, the creators are able to use tricks to get by: the videos are often tagged with words like “learn colors” and “education,” and are independently animated to avoid copyright restrictions, which allows them to slip right past the algorithm. YouTube has taken steps to censor the videos by changing its guidelines and adding extra parental controls.
The next case happens on the main platform, YouTube. YouTube has an autoplay feature, which plays the next recommended clip automatically once the current one is finished. The factors that go into choosing the next clip are referred to collectively as the “YouTube algorithm,” and they are the deciding factor in what becomes popular and what doesn’t. As many people have discovered, the algorithm tends to favor more shocking videos, because conspiratorial content is entertaining. This, however, leads people into rabbit holes: the algorithm recommends conspiracy videos, the person watches them, so YouTube recommends more radical ones, and so on. And perhaps because the titles of right-wing videos tend to be more shocking, those are the ones being recommended more often. In turn, YouTube has started removing videos it deems harmful, including many of these, since they often contain hate speech. However, the creators of these videos claim that they are just stating their opinions, and that by removing them YouTube is censoring their free speech, especially since most of the removed videos are specifically right-wing. This is where the conflict lies.
I will examine each of these cases through the lens of John Stuart Mill’s writings on free speech: the first is a more clear-cut case of when censorship should perhaps be used, while the second is more difficult to decipher. I will also take into account ideas from utilitarianism and Kantian deontology, especially in the first case, where the intentions and consequences are less muddy than in the second. In the end, I hope to answer the question of whether it is acceptable for YouTube to censor these videos at all and, if so, where the line can be drawn to determine which videos deserve to be removed.