• Interpreting Results of Machine Learning Models - Nave Frost


  • Abstract

    The interpretability of complex machine learning models is becoming a critical societal concern, as they are increasingly used in human-related decision-making processes such as resume filtering or loan applications. Individuals receiving an undesired classification are likely to call for an explanation – preferably one that specifies what they should do in order to alter that decision when they reapply in the future. Existing work focuses on a single ML model and a single point in time, whereas in practice both models and data evolve over time: an explanation for an application rejection in 2018 may be irrelevant in 2019, since in the meantime both the model and the applicant’s data may have changed. To this end, we propose a novel framework that provides users with insights and plans for changing their classification at particular future points in time. The solution combines state-of-the-art algorithms for explaining a single model, algorithms for predicting future models, and database-style querying of the obtained explanations.
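
    To make the idea of a (single-)model explanation concrete, here is a minimal illustrative sketch in Python: a brute-force counterfactual search over a toy loan-approval classifier. The feature names, thresholds, search grid, and cost function are all hypothetical assumptions for illustration, not the speaker's actual method; in particular, the sketch ignores the temporal aspect (evolving models and data) that the proposed framework addresses.

      # Illustrative sketch only: counterfactual explanation for a toy loan model.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      # Toy training data: [annual income in $1000s, credit score]; label 1 = approved.
      X = rng.uniform([20, 500], [150, 850], size=(200, 2))
      y = ((X[:, 0] > 60) & (X[:, 1] > 650)).astype(int)
      model = LogisticRegression(max_iter=1000).fit(X, y)

      applicant = np.array([45.0, 620.0])
      print("Current decision:", model.predict([applicant])[0])  # expected: 0 (rejected)

      # Search a small grid of candidate changes and keep the cheapest one
      # (normalized L1 cost) that flips the decision to "approved".
      best, best_cost = None, float("inf")
      for d_income in np.arange(0, 65, 5):
          for d_score in np.arange(0, 210, 10):
              candidate = applicant + np.array([d_income, d_score])
              if model.predict([candidate])[0] == 1:
                  cost = d_income / 130 + d_score / 350  # scale by feature ranges
                  if cost < best_cost:
                      best, best_cost = candidate, cost

      if best is not None:
          print("Suggested change (income, score):", best - applicant)
      else:
          print("No counterfactual found in the search grid")

    In the full framework described in the abstract, explanations of this kind would be computed with respect to predicted future models and then queried in a database-style manner, rather than against a single fixed model as in this sketch.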

  • Bio
    Nave Frost is a Ph.D. student in the Databases group at Tel Aviv University, advised by Professor Daniel Deutch. He received a Bachelor’s degree from Ben-Gurion University, summa cum laude, and a Master’s degree from Tel Aviv University, both in computer science. His research focuses on providing explanations for machine learning and data science applications, allowing users to assess their trustworthiness and detect errors. He is a recipient of the VLDB Best Paper Award (2017) and the SIGMOD Research Highlight Award (2018) for the work on Provenance for Natural Language Queries.