Orlando Habet

Robot Operating System Engineer  |  Embedded Linux Engineer
National Taiwan Ocean University  |  MS in Computer Science and Engineering

  • Qualifications: 3 years of ROS development experience

  [email protected] 

  +886 975221693

  Taichung, Taiwan 

  Holder of a Taiwan legal residence certificate; no work permit is required 


Programming Languages

  • C
  • C++
  • Python
  • XML
  • HTML
  • PHP


Tools

  • RVIZ
  • Git
  • rqt_topic
  • rqt_image_view
  • smach_viewer
  • rqt_tf_tree
  • rqt_reconfigure
  • RealSense Viewer
  • Gazebo
  • Visual Studio Code

Development Environment

  • Robot Operating System (ROS)
  • Linux OS
  • Embedded Linux
  • Jetson Development Boards
  • Arduino Development Boards


Languages

  • English — Native
  • Chinese — Intermediate
    (Daily and work communication)

Project Management

  • Scrum
  • Team Communication

Work Experience


Industrial Technology Research Institute

December 2018 - September 2021 (about 3 years)

ITRI is a world-leading applied technology research institute with more than 6,000 outstanding employees. Its mission is to drive industrial development, create economic value, and enhance social well-being through technology R&D. 

Upon entering ITRI, I was assigned to the Sensing Information Technology laboratory to work in robotics.

Simulation Testing •  2018/12 - 2019/03 

Since I had no prior experience with ROS, my first challenge was becoming familiar with it. I started with the basics: installing ROS, creating and preparing simulation environments, loading pre-built robot models, running SLAM, and issuing navigation goals while the robot built a map of its environment. After three months I was able to work independently. I provided multiple simulation environments, based on real-life home blueprints, for testing SLAM and exploration algorithms and for tuning navigation, along with multiple launch files, stored in ROS packages, for running SLAM and navigation tests.
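The launch files tied these pieces together: a Gazebo world, a spawned robot model, and a SLAM node. A minimal sketch of such a file, with illustrative package, world, and model names (not the originals):

```xml
<launch>
  <!-- Start Gazebo with a world built from a real home blueprint
       (home_sim and apartment.world are illustrative names) -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="$(find home_sim)/worlds/apartment.world"/>
  </include>

  <!-- Spawn a pre-built robot model from its URDF description -->
  <node pkg="gazebo_ros" type="spawn_model" name="spawn_robot"
        args="-urdf -param robot_description -model turtlebot"/>

  <!-- Run GMapping SLAM on the simulated laser scans -->
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
    <param name="base_frame" value="base_footprint"/>
    <param name="odom_frame" value="odom"/>
  </node>
</launch>
```

Swapping the world argument or the SLAM node let the same skeleton serve different test scenarios.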

  • Skill summary
    1. Python
    2. XML
    3. Installing Ubuntu and ROS
    4. Installation of readily available ROS packages
    5. Simulation environment preparation and creation in Gazebo
    6. Running pre-built robot models such as Turtlebot 2 and 3
    7. Installing and running SLAM and exploration algorithms

Robot Application •  2019/07 - 2021/09 

My team leader introduced me to SMACH, a Python library for building complex robot behavior as state machines, and assigned me to maintain our system's SMACH source code. Although the code was quite lengthy, I was able to understand it and make adjustments whenever an issue arose. Once I began working on the robot application, I also cooperated with the testing team on System Integration Testing (SIT): I discussed with them what the expected outcome of each test case should be, and taught them how to use debug tools to log the information needed to track down root causes. Whenever a bug was found, my leader trusted me to resolve it, and in most cases I was able to locate the root cause and fix it.
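SMACH structures behavior as states that each return a named outcome, with a transition table deciding which state runs next. Since SMACH itself ships with ROS, the sketch below mirrors that pattern in plain Python; the state names, outcomes, and transitions are illustrative, not the robot's actual behavior:

```python
class State:
    """Minimal stand-in for smach.State: execute() returns a named outcome."""
    def execute(self, userdata):
        raise NotImplementedError

class Patrol(State):
    """Drive a patrol route; the real state would watch detector topics."""
    def execute(self, userdata):
        userdata['log'].append('PATROL')
        return 'person_found' if userdata.get('person_seen') else 'done'

class Greet(State):
    """Approach and greet the person that was found."""
    def execute(self, userdata):
        userdata['log'].append('GREET')
        return 'done'

class StateMachine:
    """Run states until a terminal label is reached, following a transition table."""
    def __init__(self, transitions, terminal):
        self.transitions = transitions   # {(state name, outcome): next state name}
        self.terminal = terminal
        self.states = {}

    def add(self, name, state):
        self.states[name] = state

    def execute(self, start, userdata):
        name = start
        while True:
            outcome = self.states[name].execute(userdata)
            nxt = self.transitions.get((name, outcome))
            if nxt is None or nxt in self.terminal:
                return nxt or outcome
            name = nxt

# Wire up a two-state behavior: patrol, and greet anyone found.
sm = StateMachine({('PATROL', 'person_found'): 'GREET',
                   ('PATROL', 'done'): 'FINISHED',
                   ('GREET', 'done'): 'FINISHED'},
                  terminal={'FINISHED'})
sm.add('PATROL', Patrol())
sm.add('GREET', Greet())
```

Keeping the transition table separate from the states is what made the real SMACH code adjustable: fixing a behavior bug usually meant changing one transition or one state, not the whole machine.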

I later took charge of integrating other team members' modules, such as fall detection, voice detection and sleep analysis, into the robot's system. I had to determine the appropriate times to start and stop these modules so that system resources were not wasted.

At the time, our robot did not have a stable odometry source and we were using GMapping as its SLAM algorithm. Our leader was very interested in trying Google's Cartographer, which is known to work through scan matching even without a stable odometry source. He mentioned his intent to test it himself, but I noticed he was busy with other matters, so I offered my assistance, and he had enough confidence in me to hand the task over. I installed, ran and integrated Cartographer into our robot system, which improved and stabilized the robot's localization and took some load off my leader.

Eventually, we decided to upgrade our robot, since newer and better hardware was being released and newer versions of ROS were becoming available. My leader assigned me to test our software on the newer Jetson development boards running the newest version of ROS, and to upgrade the robot based on the results. The goal was to increase overall system performance and make room for more programs, since the newer boards offered more CPU and GPU capacity. Along the way I became familiar with NVIDIA JetPack SDK installation, sensor calibration, and assembling and disassembling the robot's hardware components. I also got plenty of practice writing installation documents, which saved considerable time on future installations; through documentation, my knowledge became the team's knowledge, improving our overall work efficiency.

There were cases where some of our senior engineers needed help with ROS-related matters, and my leader would assign me to assist them: determining the best solutions to problems, finding the root causes of bugs, and testing their code on the robot application to make sure it worked properly. By this point, anyone who wanted to borrow the robot for development or testing would come to me first to make arrangements.

Eventually, after much testing, I recognized the shortcomings of Google's Cartographer and that it could not meet our standards for localization. I did a quick survey to find an algorithm better suited to our requirements, one that relied on image feature points rather than laser scans to determine location. I suggested it to my leader, and after receiving his green light I took the necessary steps for installation and integration; we saw an immediate improvement in localization and its stability.

Over the years, some of my team members were transferred to different labs, and my team leader assigned me to carry on their work. It was satisfying to know he was confident I could handle and maintain source code written by others, and whenever a new requirement arose I was able to adjust the code to meet it.

My team leader and I would also design algorithms together. Once we had a clear picture of what an algorithm should look like, he would assign me to implement it; I did the coding and testing on my own before presenting the result to him.

  • Skill summary
    1. Python
    2. XML
    3. Working with SMACH (state machine)
    4. Working with Visual Studio Code
    5. Version control (GitLab)
    6. Cooperation with testing team
    7. Assist team members to resolve issues
    8. Installation of readily available ROS packages
    9. Hardware and software installation and update
    10. Installation documentation for future reference and team knowledge
    11. Debugging and resolving issues (using debug tools such as RVIZ, rqt_topic, rqt_image_view, smach_viewer, rqt_tf_tree, rqt_reconfigure)
    12. Algorithm design and implementation


Precision Machinery Research and Development Center

April 2022 - Present 

Precision Machinery Research and Development Center (PMC) is a Taiwan-based company created in 1993 by the government and the Association of Machinery Industry (TAMI). PMC has teams of engineers from all fields related to machinery, including mechanical engineering, electrical engineering, control engineering, and information technologies. PMC conducts research and development in association with universities, and not only transfers newly developed technologies to Taiwanese machinery companies, but also helps train their engineers. This approach allows the Taiwanese machinery industry to be a leader in performance and quality. 

Upon entering PMC, I was placed in the robot development department to build automated guided vehicles from the ground up.

Automated Guided Vehicle (AGV) •  2022/04 - Present 

The AGV systems previously built in my department have no ROS support, so my job is to redesign and implement the system on ROS. Since the hardware components were not available to me at the start, the first thing I did was prepare the ROS packages and test them in simulation. I decided to simulate a TurtleBot 3 (differential drive), with GMapping as the SLAM algorithm, GMCL as the localization algorithm, and the TEB local planner for navigation.

For the AGV to run on ROS, I must make sure that the sensors and other hardware are supported. For the LiDAR, I tested the ROS driver available as a Debian package and found no issues; since the AGV requires one LiDAR at the front and one at the rear, I also tested connecting two LiDARs at once, which went smoothly. Next, I needed to send commands to the drivers that control the AGV's motors. I cleaned up the original motor control code, keeping only what was necessary, and wrote a ROS wrapper for the controller, so that Twist messages published on ROS are received by the drivers and move the motors. I then needed the encoder data from the drivers, which I obtained by connecting the encoder pins to an Arduino board: every time a motor moves, the Arduino receives a signal, and a program I uploaded counts these signals and publishes them as ticks on a ROS topic. The ticks are needed to compute odometry, a local estimate of the AGV's pose, so I wrote a node that subscribes to the encoder ticks and uses them to calculate and publish the odometry.
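The heart of that odometry node is standard differential-drive dead reckoning from the left and right tick counts. A minimal sketch of the update step outside of ROS (the encoder resolution and wheel dimensions are illustrative, not the AGV's actual parameters):

```python
import math

TICKS_PER_REV = 1024   # encoder ticks per wheel revolution (illustrative)
WHEEL_RADIUS = 0.075   # wheel radius in meters (illustrative)
WHEEL_BASE = 0.40      # distance between the wheels in meters (illustrative)

def ticks_to_distance(ticks):
    """Convert encoder ticks to linear wheel travel in meters."""
    return 2 * math.pi * WHEEL_RADIUS * ticks / TICKS_PER_REV

def update_pose(x, y, th, left_ticks, right_ticks):
    """One dead-reckoning step for a differential-drive base."""
    d_left = ticks_to_distance(left_ticks)
    d_right = ticks_to_distance(right_ticks)
    d_center = (d_left + d_right) / 2.0        # forward travel of the base center
    d_theta = (d_right - d_left) / WHEEL_BASE  # change in heading
    # Integrate along the average heading of the step
    x += d_center * math.cos(th + d_theta / 2.0)
    y += d_center * math.sin(th + d_theta / 2.0)
    th += d_theta
    return x, y, th
```

In the actual node this runs in the tick-topic callback, and the resulting (x, y, th) is stamped and published as a nav_msgs/Odometry message together with the odom-to-base TF transform.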

  • Skill summary
    1. Python
    2. C++
    3. ROS simulation
    4. ROS programming
    5. Arduino development board
    6. SLAM
    7. Navigation

Projects - PECOLA: Personal Companion Robot For Older People Living Alone 

(12/2018 ~ 09/2021)

PECOLA is a robot that is able to identify and analyze the physical and mental states of seniors. The robot employs ambient intelligence technology in caring for its elderly companions, making sure they are in good health and spirits. It also compiles and sends information to the senior’s family members to help spark topics for discussion and bolster communication between them. PECOLA uses image recognition to record changes in food portion and carry out diet analysis to understand the individual’s food intake. PECOLA also uses WiFi signals in detecting one’s breathing rate during sleep. What’s more, it utilizes deep learning technology to detect fall incidents. Once a fall is detected, the robot immediately calls the individual’s family members to initiate a video session for home safety.

- ITRI PECOLA Senior Citizen Companion Robot : shorturl.at/bcmLS  

- 工研院 PECOLA 樂齡陪伴機器人 : shorturl.at/hzF12 

- Personal Companion Robot for Older People Living Alone (PECOLA) : shorturl.at/mDEM7

Consumer Electronics Show (CES)

In 2019 our robot received a CES Innovation Award, and we exhibited at CES 2020 at the Las Vegas Convention Center.

- CES 2020 Innovation Award : shorturl.at/uJNW4

- ITRI Wins Two CES 2020 Innovation Awards : shorturl.at/lzMNO

- CES 2020: ITRI’s Featured Innovations in Digital Health : shorturl.at/vOQ48

- ITRI Exhibits Artificial Intelligence (AI) & Robotics and Digital Health Technology Innovations at CES 2020 : shorturl.at/zORUV

Robot Highlights


Sensors

  • Depth camera
  • Microphone
  • 2D LiDAR


Services

  • Patrol service
  • Photographer service
  • Sleep analysis
  • Diet analysis


Mapping and localization

  • 3D mapping
  • Visual localization


Navigation

  • 2D navigation
  • Obstacle avoidance

Fall detection

  • Artificial intelligence
  • Server API call

Emergency help detection

  • Voice recognition
  • Server API call

SLAM and Navigation

PECOLA uses an RGB-D camera to map its environment along with the features in it. The SLAM approach is based on an incremental appearance-based loop-closure detector, which uses a bag-of-words approach to determine how likely it is that a new image comes from a previously visited location rather than a new one. 
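The bag-of-words idea reduces each image to a histogram of visual-word counts, so loop-closure likelihood becomes a histogram comparison. A toy sketch of that comparison (real appearance-based detectors additionally apply tf-idf weighting and a Bayesian filter over the hypotheses; the location names and word IDs here are made up):

```python
from collections import Counter
import math

def bow_histogram(visual_words):
    """Count occurrences of each visual-word ID extracted from an image."""
    return Counter(visual_words)

def cosine_similarity(h1, h2):
    """Cosine similarity of two word-count histograms: 0 = unrelated, 1 = identical."""
    dot = sum(h1[w] * h2[w] for w in h1.keys() & h2.keys())
    norm = math.sqrt(sum(c * c for c in h1.values())) * \
           math.sqrt(sum(c * c for c in h2.values()))
    return dot / norm if norm else 0.0

def best_loop_closure(new_image_words, memory):
    """Return (location_id, score) of the most similar previously seen image."""
    h = bow_histogram(new_image_words)
    scored = [(loc, cosine_similarity(h, past)) for loc, past in memory.items()]
    return max(scored, key=lambda s: s[1])
```

A high score against a stored location suggests the robot has returned there, triggering a loop closure that corrects the accumulated drift in the map.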

PECOLA's navigation system relies on its 2D LiDAR to detect obstacles in its vicinity. The robot's path is formulated from the obstacle locations and the map created during SLAM, and the trajectory is frequently updated as new obstacle information arrives from the sensors, optimized with respect to execution time and separation from obstacles. 


National Taiwan Ocean University

Master's Degree in Computer Science and Engineering  •  2015 - 2018

GPA : 3.4/4

Thesis : Design and Implementation of a Roll-Call Assistant System
Designed and implemented a quick, convenient and reliable method of taking the attendance of students in the classroom with the use of their mobile devices.

National Taiwan Ocean University

Bachelor's Degree in Computer Science and Engineering  •  2011 - 2015

GPA : 3.5/4
