BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//6.6.4.4//EN
TZID:Asia/Jerusalem
X-WR-TIMEZONE:Asia/Jerusalem
BEGIN:VEVENT
UID:0-452@aerospace.technion.ac.il
DTSTART;TZID=Asia/Jerusalem:20161214T163000
DTEND;TZID=Asia/Jerusalem:20161214T173000
DTSTAMP:20230603T190428Z
URL:https://aerospace.technion.ac.il/events/vision-based-dynamic-target-tr
 ajectory-and-ego-motion-estimation-using-incremental-light-bundle-adjustme
 nt/
SUMMARY:Vision-based Dynamic Target Trajectory and Ego-motion Estimation Us
 ing Incremental Light Bundle Adjustment
DESCRIPTION:Lecturer:Michael Chojnacki\n Faculty:Technion Autonomous S
 ystems Program (TASP)\n Institute:Technion – Israel Institute of Techn
 ology\n Location:Classroom 165\, ground floor\, Library\, Aerospace En
 g.\n Zoom: \n Abstract: \n Details: \n We investigate autonomous navig
 ation and target tracking in unknown or uncertain environments\, which
  have become core capabilities in numerous robotics applications. In t
 he absence of an external source of information (e.g. GPS)\, the robot
  has to infer its own state and the target's trajectory based on its o
 wn sensor observations. For the vision-based case\, the corresponding
  maximum a posteriori (MAP) estimation is typically obtained in a proc
 ess known as bundle adjustment (BA) in computer vision\, or simultaneo
 us localization and mapping (SLAM) in robotics. Both cases involve opt
 imization over camera ego-motion (e.g. poses) and all the observed 3D
  features\, including the dynamic object\, even when the environment i
 tself is of little or no interest. Furthermore\, the optimization is p
 erformed incrementally as new surrounding features are observed and th
 us becomes computationally expensive as more data is added to the prob
 lem.\nIn this work\, we propose an efficient method for estimating a m
 onocular camera's ego-motion along with a dynamic target's trajectory
  and velocity using incremental light bundle adjustment (iLBA). The iL
 BA method allows for ego-motion calculation using two-view and three-v
 iew constraints\, eliminating the need for expensive 3D point reconstr
 uction. Given data association and assuming no localization signal\, w
 e add to the iLBA optimization framework a single 3D point representin
 g the target each time it is in view. Given a target motion model (e.g
 . constant velocity)\, we can then calculate the target's trajectory a
 nd velocity along with the camera's ego-motion using graphical models
  and incremental inference techniques. The target is therefore the onl
 y 3D reconstructed point in the process. We study accuracy and computa
 tional costs by comparing our method to standard BA\, using synthetic
  and real-imagery datasets collected at the Autonomous Navigation and
  Perception Lab at the Technion.
CATEGORIES:Seminars
LOCATION:Classroom 165\, ground floor\, Library\, Aerospace Eng.
END:VEVENT
BEGIN:VTIMEZONE
TZID:Asia/Jerusalem
X-LIC-LOCATION:Asia/Jerusalem
BEGIN:STANDARD
DTSTART:20161030T010000
TZOFFSETFROM:+0300
TZOFFSETTO:+0200
TZNAME:IST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR