Visiting Peking Students
Information for summer research visitors in 2018.
Summer Research Visitor Programme - Peking University Computer Science to Edinburgh School of Informatics
We look forward to welcoming students from Peking University's Computer Science department to Edinburgh for an intensive research experience this summer.
Successful applicants will join one of our world-leading research groups for the summer.
The core programme will run for 8 weeks, from July 8th through August 30th. Accommodation will be arranged (see below) in University halls of residence. One of our PhD students from China will be available to help with settling in to Edinburgh on arrival, and for general help throughout the summer.
Before applying, the first step is to identify two or three projects that interest you and that fit with your background and experience. Possible project directors and topics (follow the links) are listed below.
Please do not contact the project director directly at this stage - send all queries to Dr Maria Wolters (WeChat: mariawolters), the programme coordinator.
|Malcolm Atkinson||Conceptual frameworks for data-powered collaboration||
Researchers need to collaborate by sharing data and methods across national, organisational, disciplinary, and experience boundaries. A knowledge base will help them do this, but they need to remain in control and not be distracted by technological details. Innovators exploring radically new approaches and professional practitioners repeatedly applying established working practices need to co-exist. The student will explore ontology engineering approaches in the context of the DARE H2020 project.
|Vaishak Belle||Deep probabilistic tractable generative models||
Although deep learning has had wide impact, we still lack tools to precisely understand the computational properties of, say, hidden layers, or deep learning's ability to represent arbitrary generative models. Deep probabilistic tractable generative models, such as sum-product networks, are an emerging paradigm for capturing complex joint distributions, and are essentially graphical models in disguise.
See, for example, the PDFs below on the application of such models to responsible decision making and to dealing with heterogeneous data.
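To give a feel for why sum-product networks support tractable inference, here is a minimal toy sketch (not code from the project): leaves are simple indicator distributions, internal nodes are weighted sums (mixtures) or products (factorisations), and any marginal or joint query is a single bottom-up pass.

```python
# Minimal sum-product network (SPN) over two binary variables X and Y.
# Illustrative only: real SPNs are learned from data and much larger.

class Leaf:
    """Univariate Bernoulli leaf distribution."""
    def __init__(self, var, p_true):
        self.var, self.p_true = var, p_true
    def value(self, assignment):
        return self.p_true if assignment[self.var] else 1 - self.p_true

class Product:
    """Product node: factorises over disjoint sets of variables."""
    def __init__(self, children):
        self.children = children
    def value(self, assignment):
        result = 1.0
        for child in self.children:
            result *= child.value(assignment)
        return result

class Sum:
    """Sum node: a mixture with weights summing to 1."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, node)
    def value(self, assignment):
        return sum(w * c.value(assignment) for w, c in self.weighted_children)

# Mixture of two fully factorised distributions over X and Y.
spn = Sum([
    (0.6, Product([Leaf("X", 0.9), Leaf("Y", 0.2)])),
    (0.4, Product([Leaf("X", 0.1), Leaf("Y", 0.7)])),
])

# Evaluating all four assignments shows the network defines a proper
# joint distribution: the probabilities sum to 1.
total = sum(spn.value({"X": x, "Y": y}) for x in (0, 1) for y in (0, 1))
print(round(total, 6))  # 1.0
```

Because every node's value is computed in one pass over the network, exact inference costs time linear in the network size, which is the key contrast with general graphical models.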
|Ethical and responsible decision making (with (deep) reinforcement learning)||
Responsible AI and ethical decision making are major concerns in AI. While definitions and ideas differ on how to capture these notions, in ongoing work we are looking at how to learn moral contexts.
We have been looking at how deep probabilistic models can be used for responsible decision making, perhaps in conjunction with reinforcement learning.
|Boris Grot||Dr. Grot’s lab conducts research at the intersection of computer architecture, operating systems, and networking, with a focus on data-intensive domains (e.g., datacenters, machine learning). His research spans the entire systems stack, from processor microarchitecture to operating systems to distributed in-memory computing. Example summer projects include (1) applying machine learning to improve processor architecture; and (2) characterising the performance of serverless computing (e.g., Apache OpenWhisk).|
|Heng Guo||Algorithms and complexity|
|Michael Herrmann||Evolutionary Algorithms||
Evolutionary algorithms have been studied for about 50 years, but it is only now that they are becoming truly interesting: computers are sufficiently powerful to scan large search spaces, and machine learning methods are available to acquire information about the problem structure in an application domain. In addition, the need for such approaches is growing, as they are expected to support the explainability of AI algorithms, to optimise program code, and to solve design problems. In order to realise this potential, it is necessary to achieve a better understanding of how these algorithms work. Dr Michael Herrmann is working on the analysis of evolutionary algorithms and their applications and implementation, in particular in the context of robotics.
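As a concrete example of the kind of algorithm whose behaviour this project would analyse, here is the textbook (1+1) evolutionary algorithm on the OneMax benchmark (maximise the number of 1-bits). The parameter choices are the standard ones, not specifics from Dr Herrmann's work.

```python
import random

def one_max(bits):
    """Fitness: number of 1-bits in the string."""
    return sum(bits)

def one_plus_one_ea(n=50, max_iters=20000, seed=0):
    """(1+1)-EA: keep one parent, mutate, accept if no worse."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        # Flip each bit independently with the standard rate 1/n.
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if one_max(child) >= one_max(parent):
            parent = child
        if one_max(parent) == n:  # optimum found
            break
    return parent

best = one_plus_one_ea()
print(one_max(best))  # typically reaches the optimum of 50
```

The theoretical expected runtime on OneMax is O(n log n) mutations, which is exactly the kind of result the analysis of evolutionary algorithms aims to establish for more realistic problems.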
|Hugh Leather||Deep learning for compilers and operating systems||
Deep learning can replace compiler and operating system heuristics, leading to faster, more energy-efficient programs. We use DNNs to generate human-like programs, to train further ML algorithms, to directly learn heuristics, to test compilers, and much more. This is a brand-new field with a huge amount of scope for exciting work to be done.
More generally, we welcome projects in any area relating to compilers, parallelism, operating systems, and runtime systems. We research a wide range of areas: anything associated with improving the performance and energy efficiency of computer systems.
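To make "learned heuristic" concrete, here is a toy sketch of the idea: a logistic model that decides whether to unroll a loop from simple static features. The features, training data, and model are invented for illustration; real work in this area uses deep networks over much richer program representations.

```python
import math

def predict(weights, features):
    """Probability that the optimisation (e.g., unrolling) pays off."""
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

def train(data, lr=0.1, epochs=200):
    """Plain gradient descent on the logistic loss."""
    weights = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for features, label in data:
            error = predict(weights, features) - label
            weights = [w - lr * error * f
                       for w, f in zip(weights, features)]
    return weights

# Features: (bias, trip_count/100, body_size/10).
# Label: 1 if unrolling the loop improved performance.
training_data = [
    ([1.0, 1.0, 0.2], 1),   # hot loop, small body: unroll
    ([1.0, 0.9, 0.1], 1),
    ([1.0, 0.1, 0.9], 0),   # cold loop, large body: don't
    ([1.0, 0.2, 1.0], 0),
]
w = train(training_data)
print(predict(w, [1.0, 0.95, 0.15]) > 0.5)  # True: unroll this loop
```

The appeal over hand-written heuristics is that the same pipeline retrains automatically when the hardware or compiler changes, instead of requiring an expert to re-tune thresholds by hand.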
|Dave Murray-Rust||IoT and Machine Learning||
IoT products are embedded with sensors that transmit live data about their use and environment. This data is already being used to enhance digital products and manufacturing processes, but its impact on the design of physical products is less well understood. A key challenge for designers is how to gather useful insights from data in order to accelerate the time-consuming and labour-intensive business of product research.
We are interested in how designers can collaborate with machine learning algorithms to make sense of data collected by smart connected objects. This means working with classifiers to label data according to different kinds of activity, using unsupervised machine learning to spot novel behaviours, programming natural language systems to talk with users and understand what they are doing, and developing techniques for co-creation of rich, labelled datasets.
If you are interested in any of those things, and want to work as part of a project that is already running and collecting data, we’d love to hear from you.
|Subramanian Ramamoorthy||Machine learning and robotics for surgical applications||
Laparoscopic (keyhole) surgery is ubiquitous in surgical theatres around the world. However, surgery of this type requires significant skill levels, excellent eye-hand coordination, and surgeons who are able to work in constrained environments. In addition, laparoscopic surgery introduces challenges for surgical teams and assistants: because there is limited visual feedback from inside the keyhole, they are unable to anticipate the surgeon's needs and respond appropriately. This project theme comprises two projects addressing challenges in surgery through the use of machine learning and robotics.
Predicting surgeon performance in training tasks
In order to build the requisite skill levels, training kits like the eoSurgical simulator have been developed.
This simulator captures performance measures from a surgeon as they solve a number of tasks, by tracking the tips of their tools.
This project seeks to predict surgeon skill levels using a camera viewing a trainee surgeon's hands. The student assigned to this project will be required to train a deep learning model to predict performance levels, given image sequences of a trainee surgeon's hands. Students will be required to collect data for model training, by solving training tasks in the eoSurgical simulator, and then build and train prediction models using this data.
Active viewpoint selection using a robot manipulator
Providing surgical assistants with a clear view of a surgery, without interfering with the surgeon, is a particularly difficult problem. This project will require a student to design an active viewpoint selection algorithm that controls a robot to position a camera so as to maximise information acquisition while avoiding interference.
Students on this project will be required to investigate and apply Bayesian optimisation and robot planning to solve the viewpoint selection problem, using a PR2 mobile robot to move a camera observing scenes in an eoSurgical simulator (see link above).
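To illustrate the "maximise information acquisition" objective, here is a deliberately simplified, discrete sketch: choose the camera pose that reveals the most currently unseen regions of the scene. The real project would use Bayesian optimisation over continuous camera poses on a PR2; the viewpoint names and visibility sets below are invented.

```python
def best_viewpoint(visible_from, seen):
    """Greedily pick the viewpoint whose view adds the most unseen regions."""
    def gain(view):
        return len(visible_from[view] - seen)
    return max(visible_from, key=gain)

# Which regions of the workspace each candidate camera pose can see.
visible_from = {
    "left":  {"A", "B"},
    "right": {"B", "C", "D"},
    "top":   {"A", "D", "E"},
}

seen = set()
order = []
for _ in range(2):
    view = best_viewpoint(visible_from, seen)
    order.append(view)
    seen |= visible_from[view]

print(order, sorted(seen))  # ['right', 'top'] ['A', 'B', 'C', 'D', 'E']
```

Bayesian optimisation generalises this greedy loop: instead of a known visibility table, a probabilistic surrogate model predicts the information gain of each candidate pose, and an acquisition function trades off exploring uncertain poses against exploiting promising ones.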
|Bonnie Webber||Semantic consistency and shallow discourse parsing for NLP||
The project involves assessing whether a discourse parser induced from a corpus performs better after the corpus has been made more consistent (i.e., after similar items have been annotated with similar labels).
In addition to over 13K new discourse relations being added to the ~40,600 relations in the Penn Discourse TreeBank v2 (PDTB-2), over 10K (approximately 25% of the original relations) were corrected, all with the goal of making the corpus more consistent.
At issue here is whether it can be shown that such an attempt to impose more consistency on the annotation has paid off in terms of less noise in the dataset and hence, a more accurate model that can be learned from the annotation.
More specifically, the goal of the project is to determine whether any benefit has accrued from applying semantic consistency checks before the corpus was released.
One way of demonstrating more consistency would be to use Ziheng Lin's end-to-end parser as a framework (code can be found via the link below), retrain it on a revised version of the PDTB-2, and then compare the performance of the retrained parser against Lin's original version. It would also be useful to try to characterize reasons for any differences in performance.
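The comparison itself is a standard labelling evaluation. The sketch below shows per-label F1 over gold discourse-relation labels; the label names and toy data are invented, and the actual project would score Lin's parser output on the original versus the revised PDTB-2.

```python
def f1_per_label(gold, predicted):
    """Per-label F1 for two aligned sequences of relation labels."""
    scores = {}
    for label in set(gold) | set(predicted):
        tp = sum(g == label and p == label for g, p in zip(gold, predicted))
        fp = sum(p == label and g != label for g, p in zip(gold, predicted))
        fn = sum(g == label and p != label for g, p in zip(gold, predicted))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[label] = (2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return scores

# Toy example: the same five relations scored against two parser runs.
gold      = ["Contrast", "Cause", "Cause", "Contrast", "Cause"]
original  = ["Contrast", "Cause", "Contrast", "Cause", "Cause"]
retrained = ["Contrast", "Cause", "Cause", "Contrast", "Cause"]

print(f1_per_label(gold, original)["Cause"])   # 0.666...
print(f1_per_label(gold, retrained)["Cause"])  # 1.0
```

Per-label scores matter here because the consistency corrections target specific relation senses, so any improvement should show up in the affected labels rather than only in overall accuracy.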
|Maria Wolters||Missing Data in eHealth||I am interested in why people sometimes fail to wear their FitBits, log into WeRun, or track their food, and in statistical models that can help algorithms reason about these missing-data processes.|
Participants will stay in four- or five-bedroom flats in a University building near the Informatics Forum. These flats are self-catered: you are responsible for your own meals. The cost of accommodation will be approximately £900 for nine weeks (July 5th through September 9th). Students should not attempt to book these rooms through the University; we will take care of that and let you know the exact cost.
The University charges bench fees of £800 for the programme, which gives you access to our world-class computing and research facilities.
How to let us know you are interested
In the first instance, send a short email giving a ranked list of at least two, preferably three, projects, to:
Please use "Peking/Edinburgh summer research visit" as the subject, and attach a short CV, being sure to list any courses/projects/employment relevant to your choices.