Hi! I am Abhishek.
A roboticist
Who I Am
Know My Story
I am Abhishek Kathpal, CTO and Co-Founder at Inception Robotics. With a Master of Engineering in Robotics from the University of Maryland, College Park, I have honed my skills in robotics, particularly in computer vision and motion planning, areas that are essential for robots to navigate safely and effectively in unknown environments.
My professional journey has taken me through various roles, from Computer Vision Intern at Airgility, where I built perception algorithms for autonomous flight packages using deep learning and deployed them on embedded platforms, to Software Development Engineer II at Amazon Web Services.
At Inception Robotics, I play a pivotal role in securing startup capital through government grants and have developed a safe and socially compliant navigation software stack for indoor public spaces. I am currently spearheading the design and development of a specialized cloud platform for robotic applications.
Expertise
My Expertise
Computer Vision
Deep Learning
Robot Operating System
Motion Planning
Python
OpenCV
TensorFlow
MATLAB
C++
Embedded Systems
Education
University of Maryland, College Park
Master of Engineering, Robotics
GPA – 3.84
Jan 2018 – Dec 2019
Udacity
Robotics Software Engineer Nanodegree
2018 – 2019
National Institute of Technology, Kurukshetra
Bachelor of Technology, ECE
CGPA – 7.92/10.0
July 2012 – July 2016
En route
My Journey
CTO & Co-Founder, Inception Robotics
Jul 2022 – Current
Developed a navigation software stack prioritizing safe and socially compliant behaviors for indoor public spaces, using deep reinforcement learning and classical approaches. Spearheading the design and development of a specialized cloud platform for robotic applications.
Software Development Engineer II, Amazon Web Services
March 2020 – July 2022
Contributed to automation efforts that significantly reduced the time required for new region builds in AWS CloudFormation.
Computer Vision Intern, Airgility
July 2016 – March 2017
Developed perception algorithms for an autonomous drone flight package using deep learning and deployed them on an embedded platform.
My Projects
Computer Vision
Buildings Built in Minutes: An SfM Approach
This project focuses on estimating three-dimensional structure from two-dimensional image sequences related to each other by camera motion (orientation and translation), a problem usually referred to as Structure from Motion (SfM). Several algorithms exist for this; here, 3D structures are recreated from a given dataset of 2D images using traditional SfM approaches.
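To give a flavor of the traditional pipeline, here is a minimal two-view sketch in OpenCV (feature matching, essential-matrix estimation, pose recovery, triangulation). It assumes a calibrated camera with intrinsics `K` and omits the multi-view registration and bundle adjustment the full project requires.

```python
import cv2
import numpy as np

def two_view_sfm(img1, img2, K):
    """Reconstruct sparse 3D structure from two calibrated views."""
    # Detect and match SIFT features between the two images.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # Estimate the essential matrix with RANSAC, then recover (R, t).
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    # Triangulate correspondences and convert to Euclidean coordinates.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 point cloud
```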
FaceSwap
This project implements a face-swapping algorithm using both traditional and deep learning approaches. The pipeline consists of facial landmark detection, inverse warping, blending, and motion filtering. Facial landmarks are detected with the dlib library; Delaunay triangulation and thin-plate splines are used for inverse warping. In the deep learning approach, a 3D face mesh is used to detect feature points more accurately.
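A condensed sketch of the traditional path is below: dlib landmarks, a warp, and Poisson blending. The global similarity warp stands in for the project's per-triangle Delaunay / thin-plate-spline warp, and the landmark model path is the standard dlib download, not project code.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# 68-point landmark model, downloaded separately from dlib's model zoo.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(img):
    """Return the 68 facial landmarks of the first detected face."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    face = detector(gray)[0]
    shape = predictor(gray, face)
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)

def swap_faces(src, dst):
    pts_src, pts_dst = landmarks(src), landmarks(dst)
    # Global similarity warp as a stand-in for the per-triangle
    # Delaunay / thin-plate-spline warp used in the project.
    M, _ = cv2.estimateAffinePartial2D(pts_src, pts_dst)
    warped = cv2.warpAffine(src, M, (dst.shape[1], dst.shape[0]))
    # Mask the destination face region and Poisson-blend the warp in.
    mask = np.zeros(dst.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(pts_dst.astype(np.int32)), 255)
    center = tuple(int(v) for v in np.mean(pts_dst, axis=0))
    return cv2.seamlessClone(warped, dst, mask, center, cv2.NORMAL_CLONE)
```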
Follow Me
In this project, a deep neural network was trained to identify and track a target in simulation. Such "follow me" applications are key to many fields of robotics, and the same techniques can be extended to scenarios like advanced cruise control in autonomous vehicles or human-robot collaboration in industry.
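The network is a fully convolutional encoder-decoder that segments each pixel as background, other people, or the target. The Keras sketch below is illustrative of that structure; the layer counts, filter sizes, and input resolution are assumptions, not the project's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def fcn(input_shape=(160, 160, 3), num_classes=3):
    inp = layers.Input(shape=input_shape)
    # Encoder: strided separable convolutions downsample the image.
    x1 = layers.SeparableConv2D(32, 3, strides=2, padding="same",
                                activation="relu")(inp)
    x2 = layers.SeparableConv2D(64, 3, strides=2, padding="same",
                                activation="relu")(x1)
    # 1x1 convolution preserves spatial layout (no flatten layer).
    x = layers.Conv2D(128, 1, activation="relu")(x2)
    # Decoder: upsample and fuse a skip connection from the encoder.
    x = layers.UpSampling2D(2)(x)
    x = layers.Concatenate()([x, x1])
    x = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)
    # Per-pixel softmax over {background, other people, target}.
    out = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return Model(inp, out)
```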
3D Perception
This project focuses on implementing a perception pipeline for tabletop object segmentation and detection using the Point Cloud Library and ROS. The pipeline consists of removing noise from RGB-D data, filtering and extracting a region of interest, extracting features, and classifying objects with an SVM. The RGB-D data is collected from the PR2 robot's camera.
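A sketch of the feature-extraction and classification stages is shown below; the upstream PCL steps (statistical outlier removal, voxel-grid downsampling, RANSAC plane segmentation, Euclidean clustering) are elided, and the histogram bin counts are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def color_histogram(rgb, bins=32):
    """Concatenated per-channel color histograms, normalized."""
    hists = [np.histogram(rgb[:, c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    feat = np.concatenate(hists).astype(float)
    return feat / max(feat.sum(), 1e-9)

def normal_histogram(normals, bins=32):
    """Histograms of surface-normal components in [-1, 1]."""
    hists = [np.histogram(normals[:, c], bins=bins, range=(-1, 1))[0]
             for c in range(3)]
    feat = np.concatenate(hists).astype(float)
    return feat / max(feat.sum(), 1e-9)

# A linear SVM is trained offline on labeled cluster features, then
# used online to label each Euclidean cluster from the PR2's camera:
# clf = SVC(kernel="linear").fit(train_features, train_labels)
# label = clf.predict([np.hstack([color_histogram(cluster_rgb),
#                                 normal_histogram(cluster_normals)])])
```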
Motion Planning & Navigation
Agile Robotics for Industrial Automation
The basic goal of the ARIAC competition is to build kits using a UR5 manipulator and fulfill orders. The first step is to break the overall task into sub-tasks; the provided qualifiers were perfect stepping stones for this. The highest-level task, fulfilling any announced order, was broken down into sub-tasks: starting the competition, parsing the order, processing the order, and ending the competition.
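That sub-task breakdown maps naturally onto a small state machine. The plain-Python sketch below shows the control flow only; the states, the order format, and the pick-and-place step are illustrative stand-ins for the actual ROS service calls and MoveIt planning in the competition code.

```python
from enum import Enum, auto

class Phase(Enum):
    START = auto()
    PARSE_ORDER = auto()
    PROCESS_ORDER = auto()
    END = auto()

def run_competition(orders):
    """Loop over the sub-tasks: start, parse, process, end."""
    phase, order = Phase.START, None
    while phase is not Phase.END:
        if phase is Phase.START:
            # In ARIAC this is a ROS service call that starts the clock.
            phase = Phase.PARSE_ORDER
        elif phase is Phase.PARSE_ORDER:
            order = orders.pop(0) if orders else None
            phase = Phase.PROCESS_ORDER if order else Phase.END
        elif phase is Phase.PROCESS_ORDER:
            for part in order["parts"]:
                # Plan and execute a UR5 pick-and-place for each part,
                # then submit the completed kit for scoring.
                pass
            phase = Phase.PARSE_ORDER

run_competition([{"parts": ["gear_part", "piston_rod_part"]}])
```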
MapMyWorld
In this project, the task is to implement a graph-SLAM-based approach to create a 2D occupancy grid and a 3D octomap of a given kitchen-and-dining environment, as well as a custom Gazebo environment. The task is accomplished by a two-wheeled robot with an RGB-D camera and a Hokuyo LiDAR sensor. Real-Time Appearance-Based Mapping (RTAB-Map) is implemented using the rtabmap_ros package. The final 3D octomaps for both environments are generated by teleoperating the robot via keyboard, and real-time applications of such maps are discussed. A custom ROS package was written to complete the task.
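For illustration, a minimal keyboard-teleop node of the kind used to drive the robot while RTAB-Map builds the map might look like the sketch below; the `/cmd_vel` topic name, key bindings, and speeds are assumptions, not the project's actual teleop code.

```python
import rospy
from geometry_msgs.msg import Twist

# (linear x, angular z) velocity commands per key.
BINDINGS = {"w": (0.2, 0.0), "s": (-0.2, 0.0),
            "a": (0.0, 0.5), "d": (0.0, -0.5)}

def teleop():
    rospy.init_node("keyboard_teleop")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    while not rospy.is_shutdown():
        key = input("wasd> ").strip()[:1]
        lin, ang = BINDINGS.get(key, (0.0, 0.0))  # stop on unknown key
        msg = Twist()
        msg.linear.x, msg.angular.z = lin, ang
        pub.publish(msg)

if __name__ == "__main__":
    teleop()
```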
Autonomous Frontier Based Exploration
Exploration is the process of selecting target points that yield the biggest contribution to a given gain function in an initially unknown environment. Frontier-based exploration is the most common approach, where frontiers are regions on the boundary between open space and unexplored space. In this project, an autonomous frontier-based exploration strategy, the Wavefront Frontier Detector (WFD), is described and implemented in a Gazebo simulation environment as well as on a TurtleBot.
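The core of WFD is a breadth-first search over the known map. A simplified sketch of frontier-cell detection on a 2D occupancy grid follows; the full algorithm also groups frontier cells into contiguous frontiers with a second BFS, which is elided here. The grid convention (-1 unknown, 0 free, 100 occupied) follows ROS `nav_msgs/OccupancyGrid`.

```python
from collections import deque
import numpy as np

def find_frontier_cells(grid, start):
    """BFS from the robot's cell over free space; a free cell that
    borders an unknown cell is a frontier cell."""
    h, w = grid.shape
    seen, frontier = {start}, []
    q = deque([start])
    while q:
        r, c = q.popleft()
        neighbors = [(r + dr, c + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)
                     and 0 <= r + dr < h and 0 <= c + dc < w]
        if any(grid[n] == -1 for n in neighbors):
            frontier.append((r, c))  # free cell touching unknown space
        for n in neighbors:
            if n not in seen and grid[n] == 0:  # expand through free space
                seen.add(n)
                q.append(n)
    return frontier
```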
Deep RL Arm Manipulation
This project is based on the NVIDIA open-source project "jetson-reinforcement" developed by Dustin Franklin. The goal is to create a DQN agent and define reward functions that teach a robotic arm to carry out two primary objectives (a reward-shaping sketch follows the list):
- Have any part of the robot arm touch the object of interest, with at least 90% accuracy.
- Have only the gripper base of the robot arm touch the object, with at least 80% accuracy.
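The reward function combines terminal rewards with an interim term based on a smoothed distance delta, so the agent is rewarded for steadily approaching the object. The sketch below follows the pattern used in jetson-reinforcement, but the constants and function signature here are illustrative assumptions.

```python
REWARD_WIN, REWARD_LOSS = 10.0, -10.0
ALPHA = 0.4  # smoothing factor for the moving average of progress

def shaped_reward(dist, prev_dist, avg_delta, touched, hit_ground, timed_out):
    """Return (reward, updated avg_delta) for one control step."""
    if touched:                    # arm (or gripper) contacted the object
        return REWARD_WIN, avg_delta
    if hit_ground or timed_out:    # penalize crashing or stalling
        return REWARD_LOSS, avg_delta
    # Interim reward: exponential moving average of the distance delta.
    avg_delta = avg_delta * ALPHA + (prev_dist - dist) * (1.0 - ALPHA)
    return avg_delta, avg_delta
```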
Curious?
Contact me:
kathpal.abhishek@gmail.com
Let's Connect
To join hands in building a better world!