This is a ‘take-home’ final exam. It is to be completed individually. The exam consists of selected questions drawn directly from the issues identified at the end of each of the Princeton AI Ethics Case Studies. It is formatted as six equally weighted questions (one from each case study). Each question has multiple parts, and each part should require one to two paragraphs of writing to answer adequately.

I expect that your answer to each question will be based on your reading of the referenced case. However, you may find it helpful to consult information external to the case in formulating your answer. Feel free to incorporate any such external information that you deem relevant, and, where applicable, to draw upon the discussions we had in the course (BSAN 407) during the semester.

As with many of the discussions we have had throughout the semester in BSAN 407, there is likely no clear right or wrong answer to many (perhaps all) of these questions. The exercise is for you to think about each question and formulate a perspective of your own that could contribute to the ongoing conversation surrounding the issues identified.

To reiterate the point with which we began the course: ethics is essentially a societal agreement about right and wrong, and such agreements are necessarily the outcome of broad conversation among the members of a society. Your task, then, is to make your own contribution to the conversations surrounding the issues identified in the Princeton AI Ethics Case Studies.

 

QUESTIONS:

1.) Princeton AI Ethics Case Study #1: Automated Healthcare App

· Issue: Transparency

Can and should Charlie’s ends and means be made transparent to individual users? In your response, consider both the narrower and the richer definitions of transparency.

 

2.) Princeton AI Ethics Case Study #2: Dynamic Sound Identification

· Issue: Neutrality

How should companies like Epimetheus decide which values to promote through their use (or non-use) of particular categorizations?

 

3.) Princeton AI Ethics Case Study #3: Optimizing Schools

· Issue: Autonomy

Should Hephaestats provide students with their risk profiles? Should students have a right of appeal? Should they be able to opt out of being assessed? Would it be possible to include them in decisions regarding the design and deployment of Hephaestats, and if so, how?

 

4.) Princeton AI Ethics Case Study #4: Law Enforcement Chatbots

· Issue: Research Ethics

If talking to a chatbot makes an individual more likely to commit a crime, does that individual bear full responsibility for the crime? What is the research team’s culpability, if any?

 

5.) Princeton AI Ethics Case Study #5: Hiring by Machine

· Issue: Fairness

Given the goal of selecting job applicants at Strategeion, to what extent can and should PARiS be programmed to reflect the values of fairness? How could this be operationalized technically?
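
As one illustrative starting point for the "operationalized technically" part of this question, consider the sketch below. It is not drawn from the case: the function names, the group labels, and the 0.8 threshold (borrowed from the EEOC "four-fifths" guideline) are all assumptions, used here only to show what a demographic-parity audit of a screening system like PARiS might look like.

```python
# Hypothetical sketch of a demographic-parity audit for a resume-screening
# system. All names and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(applicants):
    """applicants: iterable of (group_label, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in applicants:
        totals[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(applicants, threshold=0.8):
    """Return (passes, rates): every group's selection rate must be at
    least `threshold` times the most-selected group's rate."""
    rates = selection_rates(applicants)
    top = max(rates.values())
    return all(r >= threshold * top for r in rates.values()), rates

if __name__ == "__main__":
    sample = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]
    ok, rates = passes_four_fifths(sample)
    print(rates, "passes four-fifths:", ok)
```

Note that demographic parity is only one of several formal fairness criteria (others include equalized odds and calibration), and these criteria generally cannot all be satisfied at once; deciding which one PARiS should encode is itself part of the ethical question.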

 

6.) Princeton AI Ethics Case Study #6: Public Sector Data Analysis

· Issue: Inequality

How might a crime prevention algorithm be designed to minimize inegalitarian outputs based on biased data? If technical solutions are unavailable or insufficient, can you imagine public policy solutions that could mitigate unjust treatment of poor and minority neighborhoods?
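
If you want a concrete anchor for the design part of this question, the following sketch shows one hypothetical mitigation: deflating recorded crime counts by an estimated over-enforcement factor before allocating patrols, and capping any single neighborhood's share of enforcement attention. Every name and number here is an assumption for illustration; nothing comes from the case itself, and estimating the bias factors (e.g., from victimization surveys versus arrest records) is the genuinely hard part.

```python
# Hypothetical sketch: adjusting biased crime records before allocation.
# Bias factors and the cap are illustrative assumptions, not case facts.

def adjusted_counts(recorded, enforcement_bias):
    """recorded: {neighborhood: recorded incidents};
    enforcement_bias: {neighborhood: estimated over-observation factor},
    where 1.0 means recorded counts reflect true incidence."""
    return {n: recorded[n] / enforcement_bias.get(n, 1.0) for n in recorded}

def allocate_patrols(counts, total_patrols, cap_share=0.4):
    """Proportional allocation with a per-neighborhood cap so no single
    area absorbs a runaway share of patrols (any capped slack would be
    redistributed by policy, which this sketch leaves out)."""
    total = sum(counts.values())
    raw = {n: total_patrols * c / total for n, c in counts.items()}
    return {n: min(share, cap_share * total_patrols) for n, share in raw.items()}

if __name__ == "__main__":
    recorded = {"northside": 120, "downtown": 90, "lakeview": 30}
    bias = {"northside": 2.0}  # assumed: northside is patrolled twice as heavily
    print(allocate_patrols(adjusted_counts(recorded, bias), total_patrols=20))
```

The design choice worth discussing is the feedback loop: if patrol allocation follows recorded counts, and recorded counts rise where patrols go, then the bias adjustment and the cap are two different ways of interrupting that loop, and each has policy analogues (oversight boards, allocation floors and ceilings) when technical fixes fall short.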

 

 
