Mirah Shi
Email: mirahshi at seas.upenn.edu
Hello! I'm a third-year PhD student in computer science at the University of Pennsylvania,
where I'm fortunate to be advised by Michael Kearns
and Aaron Roth.
My main interests lie in theoretical machine learning, algorithmic game theory, and online learning.
I am generally interested in algorithmic problems related to reliable and responsible decision-making.
My research is supported by AWS AI.
Before coming to Penn, I received my bachelor's degree in math from Barnard College.
I pronounce my name "my-ra shee."
-
Sample Efficient Omniprediction and Downstream Swap Regret for Non-Linear Losses
Jiuyao Lu, Aaron Roth, Mirah Shi
[arXiv]
Preprint
Recent lines of work on omniprediction and downstream (swap) regret study how to design predictions that optimize many losses at once. This paper introduces a unified framework: we show how to minimize "decision swap regret," which generalizes both omniprediction and downstream swap regret. Our results extend and improve upon both lines of work.
-
Algorithmic Aspects of Strategic Trading
Michael Kearns, Mirah Shi
[arXiv]
Preprint
What happens when traders, each of whom wants to acquire some position in a stock, interact strategically? This paper investigates equilibrium computation in a trading game.
-
An Elementary Predictor Obtaining \(2\sqrt{T}+1\) Distance to Calibration
Eshwar Ram Arunachaleswaran, Natalie Collina, Aaron Roth, Mirah Shi
[arXiv]
[slides]
SODA 2025 (Also accepted to the NeurIPS 2024 Optimization for Machine Learning Workshop)
We give an extremely simple, efficient, deterministic online algorithm that achieves low distance to calibration, answering an open question of Qiao and Zheng (2024).
-
Forecasting for Swap Regret for All Downstream Agents
Aaron Roth, Mirah Shi
[arXiv]
[slides]
[poster]
EC 2024 (Also accepted to the ESIF Economics and AI+ML Meeting 2024)
How can we make forecasts that guarantee low swap regret to every downstream agent simultaneously? Calibration is one answer, but it suffers from poor convergence rates. Our techniques circumvent calibration to achieve low downstream swap regret at drastically improved rates.
-
Center-Embedding and Constituency in the Brain and a New Characterization of Context-Free Languages
Daniel Mitropolsky, Adiba Ejaz, Mirah Shi, Christos Papadimitriou, Mihalis Yannakakis
[arXiv]
Natural Logic Meets Machine Learning Workshop (NALOMA) 2022
We propose a biologically plausible implementation of a language parser that handles recursion (i.e., embedded sentences) and generates constituency representations.
Teaching
I've been a teaching assistant for the following courses at Penn:
- NETS 4120 Algorithmic Game Theory (Spring 2024)
- CIS 6250 Theory of Machine Learning (Fall 2022)
and at Columbia:
- COMS 3261 Computer Science Theory (Spring 2021)
Other
- I co-organize the Theory Seminar at Penn. Feel free to reach out if you'd like to give a talk!
- I spent the summer of 2021 doing research at Pacific Northwest National Laboratory, hosted by Sinan Aksoy.