Yavuz Faruk Bakman

PhD Student in Computer Science

University of Southern California

Biography

Welcome to my website! I’m a third-year PhD student in Computer Science at the University of Southern California, advised by Professor Salman Avestimehr. Before starting my PhD, I worked at Hyperbee.ai as a Research Engineer, making neural networks more efficient through compression and quantization. I received my Bachelor’s degree in Computer Science from Bilkent University, where I also researched machine learning security, specifically Trojan attacks, with Professor Tudor Dumitras.

Currently, I’m interested in making Large Language Models (LLMs) more trustworthy and accurate. I’m exploring uncertainty estimation for LLMs, how LLMs learn and store factual knowledge, and how to reliably inject new knowledge into them. I have also been doing research on Continual Learning, Self-Supervised Contrastive Learning, and Federated Learning, with publications at top ML conferences including ICLR, ACL, and ECCV.

Outside of my research, I love playing video games, especially the Soulsborne series:

“Facing a challenge? Keep Calm and Git Gud.”

Interests
  • Trustworthy LLMs
  • LLM Interpretability
  • Factuality of LLMs
  • Continual Learning
  • Federated Learning
  • Contrastive Learning
Education
  • PhD in Computer Science, Present

    University of Southern California, CA, US

  • BSc in Computer Science, 2022

    Bilkent University, Turkey

Recent News

  • 01-07-2024: Our paper “CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning” was accepted to ECCV 2024! See you in Milano!
  • 28-06-2024: “Do LLMs Recognize me, When I is not me: Assessment of LLMs Understanding of Turkish Indexical Pronouns in Indexical Shift Contexts” was accepted to the ACL 2024 Turkic Languages Workshop.
  • 16-06-2024: New paper “Do Not Design, Learn: A Trainable Scoring Function for Uncertainty Estimation in Generative LLMs,” in collaboration with Amazon AI, posted to arXiv!
  • 16-05-2024: Our paper “MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs,” co-authored with Amazon AI, has been accepted to ACL 2024.
  • 01-06-2024: Thrilled to announce that my first-author paper, “Federated Orthogonal Training: Mitigating Global Catastrophic Forgetting in Continual Federated Learning,” has been accepted at ICLR 2024. Looking forward to presenting our findings in Vienna!
  • 10-12-2023: I gave a talk at Amazon-USC Center on Secure and Trusted Machine Learning about Advancing Continual & Federated Learning with Self & Mixed Supervision.
  • 02-14-2023: Excited to share that my first paper, “Federated Alternate Training,” has been accepted at ISBI 2023.
  • 08-24-2022: Began my journey towards a PhD in Computer Science at the University of Southern California (USC).
  • 06-15-2022: Graduated with the highest honors from Bilkent University, majoring in Computer Science.
  • 05-15-2022: Made the decision to join USC for my PhD studies under the guidance of Salman Avestimehr.
  • 04-14-2022: Honored to have received PhD offers from several prestigious institutions: Princeton, Cornell, USC, UCSB, UCSD, Wisconsin-Madison, and Northeastern.

Experience

Research Assistant
University of Southern California
September 2022 – Present, California
Currently working on trustworthy LLMs and on keeping their factual knowledge up to date.

Research Engineer
Hyperbee.ai
June 2019 – September 2022, California
Worked on accelerating and compressing neural networks for various computer vision tasks, focusing mostly on quantization.

Research Intern
University of Maryland
June 2021 – December 2021, Maryland
Competed in the TrojAI competition on the UMD–UC Berkeley team supervised by Tudor Dumitras. Developed novel methods using loss surfaces, layer similarity, and trojan transferability to detect backdoored models.

Recent Publications

(2024). CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning. ECCV 2024.

(2024). Do Not Design, Learn: A Trainable Scoring Function for Uncertainty Estimation in Generative LLMs. preprint.

(2024). MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs. ACL 2024.

(2024). Federated Orthogonal Training: Mitigating Global Catastrophic Forgetting in Continual Federated Learning. ICLR 2024.

Contact