Ever wondered how to interpret your machine learning models? We explain Shapley Values, a powerful, model-agnostic interpretability technique that can explain the predictions of any model. We walk through a simple code example showing how they work, then explain the theory behind them.
AssemblyAI (Sponsor) https://www.assemblyai.com/research/u...
AI Coffee Break Merch! https://aicoffeebreak.creatorspring....
Thanks to our Patrons who support us in Tier 2, 3, 4:
Dres. Trost GbR, Siltax, Vignesh Valliappan, Michael, Sunny Dhiana, Andy Ma
Outline:
00:00 Interpretability in AI
01:02 AssemblyAI (Sponsor)
02:23 Simple example
03:51 Code example: SHAP
05:17 Shapley Values explained
07:59 Shortcomings of Shapley Values
Demo for SHAP on LLaMA 2 LLM: https://drive.google.com/drive/folder...
Keep in mind that you need sufficient compute resources to run LLaMA 2. If not, try the smaller “gpt2” model in the code instead. You can find simple examples here: https://shap.readthedocs.io/en/latest/ (see, e.g., “Text examples”)
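To make the idea concrete, here is a from-scratch sketch of exact Shapley values — not the SHAP library itself, just the underlying computation. The toy pricing model, feature names, and numbers are invented for illustration:

```python
from itertools import combinations
from math import factorial

# Toy pricing "model" (invented for illustration): a base price plus
# per-feature effects, including an interaction between "park" and "cat".
def model(features):
    price = 300_000
    if features["park"]:
        price += 10_000
    if features["cat"]:
        price -= 5_000
    if features["park"] and features["cat"]:
        price -= 2_000  # interaction effect between the two features
    if features["floor2"]:
        price += 3_000
    return price

def shapley_values(model, instance, baseline):
    """Exact Shapley values: each feature's weighted average marginal
    contribution over all coalitions of the other features."""
    names = list(instance)
    n = len(names)

    def v(coalition):
        # Value of a coalition: features in it take the instance's
        # values; absent features fall back to the baseline.
        x = {f: (instance[f] if f in coalition else baseline[f]) for f in names}
        return model(x)

    phi = {}
    for i in names:
        others = [f for f in names if f != i]
        total = 0.0
        for size in range(n):  # coalition sizes 0 .. n-1
            for subset in combinations(others, size):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (v(set(subset) | {i}) - v(set(subset)))
        phi[i] = total
    return phi

instance = {"park": True, "cat": True, "floor2": True}
baseline = {"park": False, "cat": False, "floor2": False}
print(shapley_values(model, instance, baseline))
```

The values sum to model(instance) − model(baseline) = 6000 (the efficiency property), and the −2000 interaction is split equally between "park" and "cat". This exact computation is exponential in the number of features, which is why the SHAP library relies on approximations for real models.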
“Interpretable Machine Learning” by C. Molnar: https://christophm.github.io/interpre...
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Optionally, pay us a coffee to help with our Coffee Bean production! ☕
Patreon: / aicoffeebreak
Kofi: https://kofi.com/aicoffeebreak
Join this channel to get access to perks:
/ @aicoffeebreak
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Links:
AICoffeeBreakQuiz: / aicoffeebreak
Twitter: / aicoffeebreak
Reddit: / aicoffeebreak
YouTube: / aicoffeebreak
#AICoffeeBreak #MsCoffeeBean #MachineLearning #AI #research
Video editing: Nils Trost