Thursday 1st March, 2012
10:40am to 11:20am
So much of the privacy discussion is about data collection and access, fears of a future dystopia, and the complexities of law. There seems to be a real vacuum around how societal norms should be mapped to the rapidly growing capabilities of big data. What’s difficult about some of these big-data use cases is that even the intended and approved uses of data can lead to decisions or actions that negatively affect specific individuals or groups. These effects can range from safety (by making a person more easily identifiable or locatable), to fairness (because the purpose of the application is some form of discrimination), to autonomy (by limiting individual choice or through subtle manipulation).
Regrettably, data professionals (e.g., scientists, engineers, designers, analysts) are left in a “don’t ask, don’t tell” privacy conundrum: no framework exists to assess the societal impact of their work. Such a framework would need to go beyond default “procedural protections” (e.g., the Fair Information Practice Principles) to “substantive protections” that evaluate possible product impact at design time and track actual impact as the product moves into the market.
This conversation will address, from academic and industrial perspectives, specific use cases in people search, background checks, online advertising, and voter targeting. Through these use cases, we’ll explore the feasibility of a “responsible innovation” framework that might guide data professionals.