  • Learning, Recognition, and Surveillance @ ICG, Publication tr_koestinger10
    Martin Koestinger, Peter M. Roth and Horst Bischof. Title: Planar Trademark and Logo Retrieval. Institution: Graz University of Technology, Inst. f. Computer Graphics and Vision. Number: ICG-TR …

    Original URL path: http://lrs.icg.tugraz.at/getBibEntry.php?id=tr_koestinger10 (2016-02-14)


  • Learning, Recognition, and Surveillance @ ICG, Publication tr_pmroth09
    Peter M. Roth, Christian Leistner and Horst Bischof. Title: Learning Person Detectors from Multiple Cameras. Institution: Graz University of Technology, Inst. f. Computer Graphics and Vision. Number: ICG …

    Original URL path: http://lrs.icg.tugraz.at/getBibEntry.php?id=tr_pmroth09 (2016-02-14)

  • Learning, Recognition, and Surveillance @ ICG, Publication tr_koestinger09
    … and Paul Wohlhart, Peter M. Roth and Horst Bischof. Title: KIRAS MDL State of the Art Report. Institution: Graz University of Technology, Inst. f. Computer Graphics and Vision. …

    Original URL path: http://lrs.icg.tugraz.at/getBibEntry.php?id=tr_koestinger09 (2016-02-14)

  • Learning, Recognition, and Surveillance @ ICG, Publication tr_hirzer08
    … Title: Marker Detection for Augmented Reality Applications. Institution: Graz University of Technology, Inst. f. Computer Graphics and Vision. Number: …

    Original URL path: http://lrs.icg.tugraz.at/getBibEntry.php?id=tr_hirzer08 (2016-02-14)

  • Learning, Recognition, and Surveillance @ ICG, Publication tr_pmroth08
    Peter M. Roth and Martin Winter. Title: Survey of Appearance-based Methods for Object Recognition. Institution: Graz University of Technology, Inst. f. Computer Graphics and Vision. Number: ICG-TR …

    Original URL path: http://lrs.icg.tugraz.at/getBibEntry.php?id=tr_pmroth08 (2016-02-14)

  • Project MOBI-Trick - MOBile TRaffic ChecKer @ ICG
    … camera is already available, and stereo information increases detection accuracy. Each time the system moves, it needs to adapt to the changing situation; this requires adaptive calibration and online learning. Mobile systems often run from batteries, and there is not much space to include intricate cooling systems, so the system must be designed to be very energy efficient. New approaches for dynamic power management will be explored in the …

    Original URL path: http://lrs.icg.tugraz.at/mobitrick/index.php (2016-02-14)

  • Learning, Recognition, and Surveillance @ ICG
    … person 0002 of view A corresponds to person 0002 of view B, and so on. The remaining persons in each camera view (i.e., persons 0201 to 0385 in view A and persons 0201 to 0749 in view B) complete the gallery set of the corresponding view. Hence a typical evaluation consists of searching the first 200 persons of one camera view among all persons of the other view. This means that there are two possible evaluation procedures: either the probe set is drawn from view A and the gallery set from view B (A to B, used in [1]), or vice versa (B to A). See the following figures for details.

    Figure: Camera setup. Persons of view A and B; the first 200 persons appear in both views, while the remaining persons in each view complete the gallery set of that view.

    Evaluation procedure A to B: probe set = the first 200 persons of view A; gallery set = all 749 persons of view B.
    Evaluation procedure B to A: probe set = the first 200 persons of view B; gallery set = all 385 persons of view A.

    Single Shot / Multi Shot
    We provide two versions of the dataset: one representing the single-shot scenario and one representing the multi-shot scenario. The multi-shot version contains multiple images per person, at least five per camera view; the exact number depends on a person's walking path and speed, as well as occlusions. The single-shot version contains just one randomly selected image per person trajectory, i.e., one image from view A and one image from view B.

    Figure: Extraction of multiple images per person.

    Results
    To allow a comparison to other methods, we provide results in the form of Cumulative Matching Characteristic (CMC) curves on the multi-shot version of …

    Original URL path: http://lrs.icg.tugraz.at/datasets/prid/index.php (2016-02-14)
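    The A-to-B evaluation procedure described above can be sketched in a few lines. Everything here is illustrative: the random vectors stand in for whatever appearance descriptors a re-identification method produces, and `cmc_a_to_b` is a hypothetical helper, not code shipped with the dataset.

    ```python
    # Sketch of the PRID evaluation protocol (A to B direction), assuming
    # one descriptor per person. Identities are aligned so that probe i's
    # true match is gallery index i for the first 200 persons.
    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder descriptors: view A has 385 persons, view B has 749.
    feats_a = rng.normal(size=(385, 64))
    feats_b = rng.normal(size=(749, 64))

    def cmc_a_to_b(probe, gallery, n_shared=200):
        """Cumulative Matching Characteristic curve.

        probe: descriptors of the first n_shared persons of view A
        gallery: descriptors of all persons of view B
        """
        ranks = []
        for i in range(n_shared):
            d = np.linalg.norm(gallery - probe[i], axis=1)   # distance to every gallery item
            rank = int(np.argsort(d).tolist().index(i))      # rank of the true match (0-based)
            ranks.append(rank)
        ranks = np.asarray(ranks)
        # cmc[k] = fraction of probes whose true match appears in the top k+1
        return np.array([(ranks <= k).mean() for k in range(len(gallery))])

    cmc = cmc_a_to_b(feats_a[:200], feats_b)
    print(cmc[0])   # rank-1 rate; cmc[-1] is always 1.0 at full gallery size
    ```

    The B-to-A direction is the same call with the roles swapped: `cmc_a_to_b(feats_b[:200], feats_a)`, giving a 385-point curve.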

  • Learning, Recognition, and Surveillance @ ICG
    … we had a last-minute change that was not backported to Matlab. The problem is, we didn't find any consistent convention of how to define rectangular bounding boxes for out-of-plane rotated faces, let alone how to compute them from landmark annotations, so we defined the following. We calculate the pose and scale of the face from the fit of the mean 3D face model to the annotated points. In the 3D model there is a point specified right between the eyes. We project this point into the image and take it as an anchor point for the bounding box, such that the eyes stay at approximately the same point in the cropped face. Initially we just placed the rectangle such that the anchor point is horizontally centered and at one third from the top. The problem with this definition is that it doesn't work for in-plane rotation, so in the C binary we now incorporate a shift of the bounding box center along the negative up-vector, which gives a better fit. If you want a consistent experience, I personally propose to use the C binary to calculate the rectangles, save them in the database, and from then on just take them from there. Anyway, our approach is just one way to go. We wanted to experiment with different versions of bounding box annotations, keeping different things fixed; for instance, you could also try to center on the tip of the nose, or keep the eyes at fixed locations, in order to find out which convention gives the best crops as training data for different detection algorithms.

    Some of the JPEG files contain data errors.
    Some of the original image files are damaged. This is not limited to the archives which are online; it seems that it happened at the time when we downloaded the images, or they were even corrupted before. So consider it a real-world feature of the database.

    Some of the images have feature coordinates with the values (-1, -1). What is the significance of these values?
    (-1, -1) is a convention of our facedbsql library meaning that the landmark is not set or has been deleted. Just ignore these.

    Some of the images have feature coordinates with the values x = 0 or y = 0. What is the significance of these values?
    The 0 values for the x or y coordinates of landmark points are correctly annotated; those points are located at the image boundary.

    Some of the coordinates have digits after the decimal point, others don't. Is this something I should worry about?
    This is nothing to worry about. The reason is that the data was annotated by our interns without zoom functionality in the GUI. Zoom functionality was introduced later for high-resolution images that show rather small faces with respect to the image size (not the resolution); therefore only some values are decimal-valued annotations that we corrected later.

    Some coordinates are negative …

    Original URL path: http://lrs.icg.tugraz.at/research/aflw/faq.php (2016-02-14)
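    The bounding-box convention in the FAQ above can be sketched as follows. The function names, the roll-angle parameterization, and the shift factor are assumptions for illustration (the factor 1/6 simply reproduces the one-third-from-the-top rule at roll 0); the exact values live in the C binary the FAQ mentions. The snippet also encodes the (-1, -1) landmark convention.

    ```python
    # Sketch (not the actual AFLW code) of a square face crop placed from
    # the between-the-eyes anchor point, with an in-plane-rotation shift
    # of the box center along the negative up-vector of the face.
    import math

    def face_rect(anchor_x, anchor_y, size, roll_rad=0.0, shift=1.0 / 6.0):
        """Return (left, top, width, height) of a square face crop.

        For an upright face (roll_rad = 0) the anchor ends up horizontally
        centered and one third from the top of the box; for rotated faces
        the center is shifted along the negative up-vector instead.
        `shift` is an assumed value, expressed as a fraction of `size`.
        """
        up_x = math.sin(roll_rad)    # face up-vector in image coordinates
        up_y = -math.cos(roll_rad)   # (the image y axis points downwards)
        cx = anchor_x - up_x * size * shift   # center = anchor moved along
        cy = anchor_y - up_y * size * shift   # the negative up-vector
        return (cx - size / 2.0, cy - size / 2.0, size, size)

    def is_annotated(x, y):
        """(-1, -1) marks an unset or deleted landmark. A coordinate of 0
        is a valid point on the image boundary and must NOT be discarded."""
        return (x, y) != (-1, -1)

    left, top, w, h = face_rect(100.0, 100.0, 60.0)
    # With roll 0 the anchor sits horizontally centered and one third
    # from the top of the returned box.
    ```

    Centering on the tip of the nose, as the FAQ suggests trying, would only change which projected 3D point is passed in as the anchor.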