Fully-funded (UK/EU) PhD Studentship in Argumentation and Social Choice for Bridging Online Communities

We have a fully-funded position (3.5-year studentship including fees and subsistence for UK/EU students) on "Using argumentation and social choice to bridge online communities with different value systems". The deadline for applications is 24th March 2017.
Project description

As observed in recent political debates, the user bases of global-scale social networks such as Facebook and Twitter tend to fragment into communities with incompatible views. This not only leads to an entrenchment of biased views and proliferation of inaccurate information, but also contributes to escalating latent conflicts between communities and their different value systems.

The project aims to use a combination of defeasible reasoning and social choice techniques to develop advanced AI methods that make such conflicts easier to detect and analyse, and that support reconciling conflicting views among different communities where this is possible. The envisaged contribution of the project is to extend traditional methods from computational argumentation systems with voting theory, detecting social support by applying argument mining techniques to real-world data gathered from social networks.

In traditional argumentation systems, if one party supports statement B because it assumes that A is true and that A implies B, an opponent trying to refute B can always come up with arguments such as "C is true and C implies not-B". However, whether or not this will be a successful attack depends on how much support the opponent can garner for the statement "C" or "C implies not-B".
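As a rough illustration of this idea, the sketch below models an attack between two arguments whose success depends on how many "votes" each would receive from the two camps. All names, vote counts, and the particular success rule (attacker needs at least as much total support as its target) are illustrative assumptions, not part of the actual project:

```python
def attack_succeeds(votes, attacker, target):
    """An attack succeeds only if the attacking argument garners at least
    as much support (summed over all camps) as the argument it attacks.
    This threshold rule is one possible assumption, chosen for illustration."""
    return sum(votes[attacker].values()) >= sum(votes[target].values())

# Hypothetical votes each camp would give to each argument.
votes = {
    "A->B":     {"camp1": 40, "camp2": 10},  # "A is true, and A implies B"
    "C->not-B": {"camp1": 5,  "camp2": 30},  # "C is true, and C implies not-B"
}

# The opponent attacks the pro-B argument with the counter-argument.
attacks = [("C->not-B", "A->B")]

successful = [(x, y) for (x, y) in attacks if attack_succeeds(votes, x, y)]
print(successful)  # -> [] : the attack fails (35 total votes vs 50)
```

A strategic attacker in the envisaged framework would therefore choose counter-arguments not only for their logical force but for the social support they can attract.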

The project will develop a formal and computational framework that will allow for selecting such attacks in a strategic way by detecting which arguments are (often implicitly) supported by which camp(s). This will be achieved not only by looking at the logical structure of the arguments, but also by considering how many “votes” each of these arguments would get from typical members of each camp. The project aims to contribute to more transparent and “conflict-defusing” social web applications, but also constitutes a novel combination of logic, games, and data/web science techniques that has not been explored much in the literature.

The project will be supervised by Michael Rovatsos, and the proposed work is closely related to the current EPSRC-funded UnBias project, which aims at understanding users’ views on algorithmic bias in data-driven applications. It will explore novel opportunities, arising from additional semantic processing, for creating more responsible and “fair” AI-based social computing systems.
Candidate profile

Strong candidates will have a very good first degree and/or Master's in computer science, AI, or a related discipline, and a solid background in general AI techniques. Desirable additional skills include knowledge representation, argumentation, social choice, multiagent systems, game-theoretic AI, text mining, information extraction, and natural language processing. Candidates should be equally comfortable with formal methods for modelling systems, algorithm design and implementation, and working with real-world data and Web systems.

How to apply

You should submit a standard Informatics PhD application, selecting "PhD Informatics: CISA: Automated Reasoning, Agents, Data Intensive Research, Knowledge Management - 3 Years (Full-time/Part-time)" as the PhD programme at this page. Your application should state the name of this project as the research topic and include a research proposal (up to 4 pages) that fleshes out the methodology and work plan you would want to pursue for the topic. Please e-mail Michael Rovatsos if you are interested in applying, in good time before the deadline (24th March 2017).