Abstract
We build a natural connection between the co-training learning problem and forecast elicitation without verification (related to peer prediction), and address them simultaneously using the same information-theoretic approach.
In co-training/multi-view learning, the goal is to aggregate two views of data into a prediction for a latent label. In this talk, I will show how to optimally combine the two views by reducing the problem to an optimization problem. This reduction gives a unified and rigorous approach to the general setting.
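To give a rough flavor of the information-theoretic objective behind such a reduction (a sketch only, not the exact optimization presented in the talk; the function name and toy data below are illustrative assumptions), the snippet computes the empirical mutual information between the discrete predictions produced from two views. A co-training-style method in this spirit would favor predictors that make this quantity large on unlabeled data.

```python
import numpy as np

def empirical_mutual_information(labels_a, labels_b, num_classes):
    """Empirical mutual information (in nats) between two discrete label sequences."""
    joint = np.zeros((num_classes, num_classes))
    np.add.at(joint, (labels_a, labels_b), 1)   # joint count table
    joint /= joint.sum()                        # joint distribution
    pa = joint.sum(axis=1, keepdims=True)       # marginal of view A
    pb = joint.sum(axis=0, keepdims=True)       # marginal of view B
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (pa * pb)[mask])).sum())

# Toy example: two "views" whose predictions are noisy copies of a latent label.
rng = np.random.default_rng(0)
latent = rng.integers(0, 2, size=1000)
view_a = np.where(rng.random(1000) < 0.9, latent, 1 - latent)
view_b = np.where(rng.random(1000) < 0.8, latent, 1 - latent)
print(empirical_mutual_information(view_a, view_b, num_classes=2))
```

On this toy data the mutual information is clearly positive, whereas two unrelated prediction sequences would score near zero; that contrast is what an objective of this kind exploits.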
In forecast elicitation without verification, we seek to design a mechanism that elicits high-quality forecasts from agents when the mechanism does not have access to the ground truth. Assuming the agents' information is independent conditional on the outcome, I will show how to design mechanisms in which truth-telling is a strict equilibrium, for both the single-task and multi-task settings. The multi-task mechanism additionally has the property that the truth-telling equilibrium pays at least as well as any other strategy profile, and strictly better than any "non-permutation" strategy profile.
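As a minimal sketch of a multi-task, mutual-information-style payment (an illustration of the general idea only, not the exact mechanism from the talk; all names and the toy setup below are assumptions), one can pay each agent the average empirical mutual information between their reports and each peer's reports across a batch of a priori similar tasks. Intuitively, under conditional independence, garbling one's signal can only reduce this quantity (data processing inequality), which is the intuition for why truthful reporting pays best.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y, k):
    """Empirical mutual information (nats) between two discrete report vectors."""
    joint = np.zeros((k, k))
    np.add.at(joint, (x, y), 1)
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (px * py)[mask])).sum())

def pay_agents(reports, k):
    """reports: dict agent -> array of reports over the same tasks.
    Each agent is paid the average mutual information with every other agent."""
    pay = {a: 0.0 for a in reports}
    for a, b in combinations(reports, 2):
        mi = mutual_information(reports[a], reports[b], k)
        pay[a] += mi
        pay[b] += mi
    return {a: p / (len(reports) - 1) for a, p in pay.items()}

# Toy example: three agents answer 500 binary tasks; agent "c" ignores its signal.
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=500)
reports = {
    "a": np.where(rng.random(500) < 0.90, truth, 1 - truth),  # truthful, noisy signal
    "b": np.where(rng.random(500) < 0.85, truth, 1 - truth),  # truthful, noisier signal
    "c": rng.integers(0, 2, size=500),                         # uninformative strategy
}
print(pay_agents(reports, k=2))
```

In this toy run the two truthful agents earn clearly more than the uninformative one, illustrating (but not proving) the "truth-telling pays best" property the abstract refers to.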
Furthermore, I will show how to apply the above method to the learning-from-crowds problem and present empirical results showing that it achieves new state-of-the-art performance in most learning-from-crowds settings and is among the first algorithms that are robust to various information structures among the crowd workers.
Time
2018-10-11 14:00 ~ 15:00
Speaker
Yuqing Kong, Peking University
Room
Room 602, School of Information Management & Engineering, Shanghai University of Finance & Economics