Hello, I'm

Garvish Bhutani

Low Power AI Architecture Intern @ Qualcomm

Aspiring Robotics Engineer
View Resume

About Me

I'm a 3rd-year Engineering Science student at the University of Toronto, majoring in Robotics with a Minor in Machine Intelligence (CGPA 3.61). I build systems that sit at the intersection of hardware and software — from autonomous navigation stacks for robots to AI hardware architecture on Snapdragon chips.

Currently interning at Qualcomm, I work on low-power AI hardware integration, validation flows, and performance profiling for next-generation Snapdragon platforms. Outside of work, I lead drone and rover projects with the Robotics for Space Exploration (RSX) design team.

I'm passionate about robotics, embedded AI, and building things that move and think in the real world.

Education

University of Toronto

Engineering Science — Robotics Major, Machine Intelligence Minor

CGPA 3.61 · Dean's Honour List · Sep 2022 – May 2027

Awards

Best Abstract Overall — UofT UnERD Conference (140 candidates)
Dean's Undergraduate Summer Research Fellowship — $8,000
Ian and Shirley Rowe Award for Innovation & Community Impact — $4,000

Robotics & Autonomy

ROS · Gazebo · ArduPilot · SLAM · Path Planning · OpenCV · Open3D

Programming

Python · C/C++ · TypeScript · MATLAB · Verilog

AI / ML

PyTorch · Deep Learning · NumPy · SciPy · OpenAI API

Hardware

FPGA · Nvidia Jetson · LiDAR · Flight Controllers · IMU/GPS · Arduino · ESP-32

Tools & Infra

Docker · Ubuntu/Linux · Git · Google Cloud · CAD (Fusion, SolidWorks)

Experience

Low Power AI Hardware Architecture Intern

Qualcomm · Toronto, ON

Industry · May 2025 – Present
  • Performed system integration for audio, sensor, and AI hardware on Snapdragon chips across mobile, laptop, wearable, and automotive platforms.
  • Developed validation and graphing flows to detect defects in the Network on Chip (NoC) bus interface prior to silicon design and implementation.
  • Ran bandwidth/latency tests and worked on NoC design for bus protocols including AMBA AXI.
  • Streamlined performance analysis flows for the DSP and NPU using Transaction Level Modeling (TLM).
  • Tested and profiled hardware prototypes to prove feasibility and collect performance data for cutting-edge features.
AMBA AXI · NoC · DSP · NPU · TLM · Snapdragon · Python

Drone Lead — Heavy Duty Drone Survey Mission

Robotics for Space Exploration (RSX), UofT · Toronto, ON

Leadership · July 2025 – Present
  • Developed a 17-inch quadcopter capable of carrying a 1.1 kg payload with up to 20 minutes of flight time in windy conditions.
  • Tuned PID controllers and validated ArduPilot-based position and altitude hold for stable, reliable flight performance.
  • Led the design and implementation of a gripper system capable of picking up objects weighing up to 1 kg.
  • Managed full-system integration, assembly, and iterative design improvements to enhance reliability and maintainability.
ArduPilot · PID Control · Flight Controllers · CAD · Fusion 360

Robotics Researcher — Sidewalk Navigation

Robot Vision and Learning (RVL) Lab, UofT · Toronto, ON

Research · May 2024 – Jan 2025
  • Presented at the 2025 IEEE ICRA Workshop on Field Robotics.
  • Aimed to create a fully autonomous navigation stack that safely interacts with pedestrians on sidewalks.
  • Integrated a model predictive control local planner (SICNav) with Google Cartographer for localization, an A* global planner, and a pedestrian detection (PiFeNet) and tracking system using a Clearpath Jackal robot and Ouster LiDAR.
  • Developed a novel implementation of Cartographer with a 3D mapping model and 2D motion model, achieving a localization error of 14 cm from ground truth.
ROS · C++ · Python · Ouster LiDAR · Open3D · Ubuntu 20.04 · Google Cartographer

Software and Autonomy Lead — Mars Rover Navigation

Robotics for Space Exploration (RSX), UofT · Toronto, ON

Leadership · Sept 2022 – July 2025
  • Developed autonomous navigation to traverse to waypoints for the University Rover Challenge on outdoor Mars-like terrain using Python and C++ in ROS.
  • Implemented in-house obstacle avoidance algorithms and off-the-shelf ROS packages with LiDAR, stereo cameras, IMU, and GPS.
  • Rewrote the rover's manual controls to be more intuitive while giving the driver greater flexibility.
  • Set up the communications system enabling rover control at a range greater than 1 km.
  • Developed workshops to train recruits in Git fundamentals, dual-booting Linux alongside Windows, and robotics basics.
ROS · Python · C++ · Gazebo · Docker · Ubuntu · OpenCV · Velodyne LiDAR

Projects

A selection of research, engineering, and software projects spanning robotics, AI/ML, and hardware design.

Robotics · AI/ML

Autonomous Sidewalk Navigation

Integrated a model predictive control local planner (SICNav) with Google Cartographer, an A* global planner, and PiFeNet pedestrian detection on a Clearpath Jackal robot with Ouster LiDAR. Achieved 14 cm localization error from ground truth. Presented at IEEE ICRA 2025 Workshop on Field Robotics.

  • 14 cm localization error from ground truth
  • Presented at IEEE ICRA 2025
  • Novel 3D mapping + 2D motion model for Cartographer
ROS · C++ · Python · Ouster LiDAR · Google Cartographer +2
Robotics · Software

Mars Rover Autonomous Navigation

Built autonomous waypoint navigation for the URC using ROS, Python, and C++ with Velodyne LiDAR, ZED/RealSense cameras, IMU, and GPS. Implemented obstacle avoidance, communications over 1 km, a Gazebo simulation environment, and light beacon detection with OpenCV.

  • 1+ km communication range
  • Full Gazebo simulation environment
  • IR beacon detection with OpenCV
ROS · Python · C++ · Gazebo · Velodyne LiDAR +2
Robotics · Hardware

Heavy Duty Survey Drone

Designed, built, and tuned a heavy-lift quadcopter for survey missions. Implemented ArduPilot-based position and altitude hold with tuned PID controllers. Designed a gripper system capable of picking up 1 kg objects. Led full-system integration and iterative hardware improvements.

  • 1.1 kg payload capacity
  • 20-minute flight time in wind
  • 1 kg gripper system
ArduPilot · PID Control · Flight Controller · CAD · Fusion 360
AI/ML

Alzheimer's Detection via EEG

Trained a convolutional neural network in PyTorch on electroencephalogram (EEG) data to classify Alzheimer's patients. Evaluated over 50 model architectures to select the final model. Achieved 100% recall, 77% precision, and 87% F1 score on unseen test data.

  • 100% recall on test data
  • 87% F1 score
  • 50+ model architectures evaluated
PyTorch · Python · Deep Learning · EEG · NumPy +1
Hardware · Software · AI/ML

MediLens: Biometric AR Goggles

Built AR goggles on a sunglasses frame with noise-cancelling earbuds and a heart-rate sensor. Uses AssemblyAI for real-time transcription and translation rendered as on-screen captions. Designed for individuals with ASD or deafness to reduce sensory overload while maintaining communication. Integrates Google Cloud, the OpenAI API, and Meta AI.

  • Real-time speech-to-caption
  • Heart rate biometric integration
  • Designed for accessibility
Python · C++ · ESP-32 C3 · Arduino · OpenAI API +3
Software

Food2Fuel: Campus Organic Waste System

Top 10 finish among 42 teams. Proposed on-site anaerobic digesters to convert campus organic waste into biogas and electricity. Over its lifetime, the solution would divert 89,400 tonnes of organic matter from landfills and cut CO₂ emissions by 35,800 tonnes by 2050 at roughly $100/tonne.

  • Top 10 of 42 teams
  • 89,400 tonnes of organic waste diverted
  • 35,800 tonnes CO₂ reduction
Systems Design · Sustainability · Anaerobic Digestion

Resume

A snapshot of my education, experience, and skills. Click below to view or download the full PDF.

3.61 CGPA · 2+ Years Research · 1 IEEE Publication · $12K+ Awards Won

Get in Touch

Open to internships, research collaborations, and full-time opportunities in robotics, AI hardware, and autonomy. Let's connect.

Direct Contact

Location

Toronto, ON

Social Links

Response Time

I typically respond within 24–48 hours. For urgent matters, email directly.