Satvik Golechha

Independent Researcher | AGI Safety • Interpretability • Reinforcement Learning
📍 UC Berkeley (CHAI), MATS Program
📧 [email protected] | 🌐 7vik.io | Scholar | LinkedIn | GitHub


On a quest to understand intelligence and ensure that advanced AGI is safe and beneficial.

🧠 About Me

I'm an independent AI safety researcher currently working with:

  • CHAI, UC Berkeley – Optimal exploration and long-horizon planning in RL.
  • Adrià Garriga-Alonso (FAR AI) – Studying deceptive behavior in frontier AI systems at the MATS Program.
  • Nandi Schoots (Oxford) – Hierarchical representations and modular training for interpretability.

Previously:

  • Microsoft Research – Worked with Neeraj Kayal on representation learning theory, and with Amit Sharma and Amit Deshpande on ICL robustness in LLMs.
  • Wadhwani AI – Formulated AI problems in public health and built robust, interpretable ML systems for large-scale deployments in India.
  • Mentored a SPAR 2025 project on zero-knowledge auditing of undesired model behaviors.

🔬 Research Interests

  • AI Alignment & Safety
  • Interpretability & Feature Geometry
  • Long-horizon RL & Planning
  • Representation Learning & Theory

📰 Selected Publications

*Equal contribution; full list at Google Scholar.


🧪 Notable Projects

  • AmongUs – Agentic deception sandbox
  • nice-icl – ICL optimization tools
  • grokking – Measuring grokking dynamics
  • byoeb – Healthcare LLM deployment platform


βœ‰οΈ Get in Touch

  • Email: [email protected]
  • Website: 7vik.io
  • LinkedIn: @7vik
  • Open to collaborations in interpretability, alignment, deception audits, and theoretical ML.

Pinned Repositories

  1. mats_2024 – All the code for MATS in Summer 2024. (Jupyter Notebook)
  2. microsoft/nice-icl – NICE: Normalized Invariance to Choice of Example. (Python)
  3. bitsacm/Slack-Stock-DAG – A list of cool resources for Silica.
  4. erplag-cc – Compiler for the custom language 'ERPLAG', written in C. (C)
  5. AmongUs – Open-weight LLM agents play the game "Among Us" to study how models learn and express lying and deception in the game. (Jupyter Notebook)