Meedan's Director of Research, Dr Scott A. Hale, examines how machine learning can help fact-checkers decide what to fact-check, perform fact-checks more efficiently, and disseminate fact-checks effectively.
Creating high-quality fact-checks is labour-intensive and time-consuming. Computational methods, workflow software, and machine learning can free fact-checkers to focus on the most important aspects of their work. Bots can handle interactions on tiplines, and machine learning can detect and match claims to help fact-checkers decide which content to prioritize. Similarly, algorithms that search existing fact-checks, locate relevant articles in knowledge bases, and evaluate existing content (stance detection) can assist in performing a fact-check itself.
Dr Hale will also discuss bigger-picture issues such as the differences between political and health fact-checking, user-interface design choices, social factors, and academic–practitioner collaboration.
This talk is hosted by the Institute for the Next Generation of Journalism and Media at Waseda University and FactCheck Initiative Japan. The talk will be in English with simultaneous translation into Japanese. Although the registration form is in Japanese, you will be able to choose which language you wish to listen in.
Dr Scott A. Hale is Director of Research at Meedan, a non-profit building digital tools for global journalism and translation. He sets strategy and oversees research on widening access to quality information online and seeks to foster greater academic–industry collaboration through chairing multistakeholder groups, developing and releasing real-world datasets, and connecting academic and industry organizations. Scott is also a member of the Credibility Coalition, an Associate Professor and Senior Research Fellow at the Oxford Internet Institute, University of Oxford, and a Fellow at the Alan Turing Institute.