I haven't tried this yet, but I think it would work for me.
http://agbs.kyb.tuebingen.mpg.de/km/bb/showthread.php?tid=2062
Friday, November 9, 2012
Saturday, November 3, 2012
materials from Weibo
1. The Beauty of Mathematics, extra: the plain yet magical Bayesian method
2. The End of Theory: The Data Deluge Makes the Scientific Method Obsolete
3. Similarity measures in machine learning
4. Classic algorithms in plain words, part 7: heaps and heapsort
http://blog.csdn.net/morewindows/article/details/6709644
5. Writing algorithms step by step: heapsort
http://blog.csdn.net/feixiaoxing/article/details/6846664
6. segmentation fault in linux
http://vdisk.weibo.com/s/hp3KT
7. The Diffie-Hellman key exchange algorithm: principles and a program demo
http://blog.csdn.net/jcwkyl/article/details/3554067
8. The internal implementation of Mathematica
http://learn.tsinghua.edu.cn:8080/2006012033/homepage/CA/M4-internal.html
9. The scientific value of big data research
http://vdisk.weibo.com/s/hlfiY
10. MIT Challenge talk
http://www.scotthyoung.com/blog/mit-challenge/
11. Hacker Factor: "Looks Like It"
http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html
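The Diffie-Hellman link in item 7 boils down to a short exchange. Here is a toy sketch with textbook-sized numbers (p = 23, g = 5; the private keys are arbitrary picks, and real deployments use primes thousands of bits long):

```shell
# modpow base exp mod -- square-and-multiply with shell arithmetic
modpow() {
  b=$1; e=$2; m=$3; r=1
  while [ "$e" -gt 0 ]; do
    [ $((e % 2)) -eq 1 ] && r=$((r * b % m))
    b=$((b * b % m)); e=$((e / 2))
  done
  echo "$r"
}
p=23; g=5                   # public parameters
a=6; bkey=15                # private keys of the two parties
A=$(modpow $g $a $p)        # first party publishes A = g^a mod p
B=$(modpow $g $bkey $p)     # second party publishes B = g^b mod p
# both sides now derive the same secret without ever sending it
echo "shared secret: $(modpow $B $a $p) = $(modpow $A $bkey $p)"
```

Each side only ever sees the other's public value, yet B^a = A^b = g^(ab) mod p on both ends.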
Wednesday, October 31, 2012
Thursday, October 11, 2012
set up auto login to the remote server
Here is a link that tells you how to set this up:
http://ubuntu-tutorials.com/2007/02/05/unattended-ssh-login-public-key-ssh-authorization-ssh-automatic-login/
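The tutorial's steps amount to the following sketch (the key path is the default, and user@server is a placeholder; neither is taken from the link):

```shell
# 1) generate an RSA keypair with an empty passphrase (skip if one exists)
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa" -q
# 2) install the public key on the server (run once, interactively):
#   ssh-copy-id user@server
# 3) afterwards this should log in without a password prompt:
#   ssh user@server
```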
Monday, September 3, 2012
set up printers in guest machine
I will put here all the problems I encountered when setting up a guest machine, either Win7 or Ubuntu.
1. set up printers
http://download.parallels.com/desktop/v6/docs/en/Parallels_Desktop_Users_Guide/23529.htm
Thursday, August 30, 2012
OpenCV: all versions and install instructions
This link contains all versions of OpenCV.
http://sourceforge.net/projects/opencvlibrary/files/
This link discusses the installation of opencv 2.3.1.
http://www.ozbotz.org/opencv-installation-2.3.1/ (this is for 32-bit. for 64-bit refer to the third link)
http://thebitbangtheory.wordpress.com/2011/10/23/how-to-install-opencv-2-3-1-in-ubuntu-11-10-oneiric-ocelot-with-python-support/
http://www.ozbotz.org/opencv-installation/
Thursday, July 19, 2012
Sunday, June 10, 2012
Make 32-bit Applications Work on a 64-bit Operating System
This is what I encountered when trying to run Prof. David Lowe's SIFT detector on 64-bit Ubuntu. The SIFT binary is compiled for a 32-bit machine, so you need to do the following to make his code run for you.
It is possible to install and use 32-bit software on a 64-bit computer in different ways:
- Installation of 32-bit compatibility libraries (ia32-libs or Multiarch support)
- A 32-bit chroot
- Full virtualization through KVM or VirtualBox
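Before choosing one of these options, it helps to confirm the binary really is 32-bit. A small sketch that reads the ELF header directly (the fifth byte of an ELF file is 1 for 32-bit, 2 for 64-bit), so it works even without the `file` utility:

```shell
# print the ELF class of an executable
elf_class() {
  case "$(od -An -tu1 -j4 -N1 "$1" | tr -d ' ')" in
    1) echo "32-bit" ;;
    2) echo "64-bit" ;;
    *) echo "not an ELF binary" ;;
  esac
}
elf_class /bin/ls      # the system's own ls matches the OS word size
# elf_class ./sift     # Lowe's sift binary would report 32-bit
```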
Wednesday, June 6, 2012
Run C executable code in MATLAB m file
Use system() to run it, but you will usually get the following error:
/MATLAB/R2011a/sys/os/glnxa64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found (required by /usr/lib/libppl_c.so.4)
so you need to do the following things to fix it.
cd $MATLAB/sys/os/glnx86
mkdir old
mv libstdc++.* libg2c.* libgcc_s* old
After that, I had to add /usr/lib to the environment variables:
export LD_LIBRARY_PATH=/usr/lib32:/usr/lib:$LD_LIBRARY_PATH
See this link for reference:
http://www.mathworks.se/matlabcentral/answers/8079-how-to-get-working-matlab-coder-in-linux
This link is much clearer:
http://judsonsnotes.com/notes/index.php?option=com_content&view=article&id=659%3ainstalling-matlab-in-ubuntu-1110&catid=37%3atech-notes&Itemid=59
Friday, June 1, 2012
Friday, May 25, 2012
kinect calibration
I found this source good for beginners.
http://www.informatik.uni-freiburg.de/~engelhar/calibration.html
Although the code is written under Xcode, you only need to make slight changes for it to work under other OSes.
Matt's codes:
http://graphics.stanford.edu/~mdfisher/Kinect.html
ROS technical documents:
http://www.ros.org/wiki/kinect_calibration/technical
http://www.ros.org/wiki/kinect_node/Calibration
Nicolas codes:
http://nicolas.burrus.name/index.php/Research/KinectCalibration#tocLink0
Programming tilts:
https://groups.google.com/forum/?fromgroups#!topic/openkinect/AxNRhG_TPHg
Wednesday, May 16, 2012
Install JAVA JDK7 on Ubuntu
How to install from an archive binary file:
http://askubuntu.com/questions/55848/how-do-i-install-oracle-java-jdk-7
Instructions on the official website:
http://docs.oracle.com/javase/7/docs/webnotes/install/linux/linux-jdk.html#general
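The archive-based install in those links reduces to extracting the tarball and registering the java binary. A sketch (the archive name, version, and destination are assumptions; substitute the file you actually downloaded):

```shell
# unpack a JDK tarball into a destination directory
install_jdk() {
  archive="$1"; dest="$2"
  mkdir -p "$dest" && tar -xzf "$archive" -C "$dest"
}
# usage (as root), then register the new java with update-alternatives:
#   install_jdk jdk-7u4-linux-x64.tar.gz /usr/lib/jvm
#   update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_04/bin/java 1
```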
Saturday, April 21, 2012
compile GICP implemented by Alex
1. add the ANN library to your system
2. add the Boost C++ library to your system
3. make changes to the Makefile to make sure the headers and libraries are linked correctly
Some problems I encountered:
1) error while loading shared libraries: libboost_program_options.so.1.49.0: cannot open shared object file: No such file or directory
solution: http://codelikezell.com/how-to-modify-the-gnu-linkers-default-search-path/
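Besides patching the linker's default search path as in that link, there are two quick ways to make the loader find a Boost built into /usr/local (the path is an example for a from-source build, not from the post):

```shell
# per-shell fix: prepend the library directory to the loader's search path
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
# system-wide fix: register the directory and refresh the cache (needs root):
#   echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/local-boost.conf
#   sudo ldconfig
```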
Wednesday, March 7, 2012
Video Talk on Unsupervised learning by Andrew Ng
Input->low-level features->learning algorithm
Feature learning via sparse coding
ML application:
Activity recognition (Hollywood 2 benchmark);
Sparse coding on audio;
sparse DBN for audio;
phoneme classification (TIMIT benchmark);
http://cs249a.stanford.edu
http://www.youtube.com/watch?v=ZmNOAtZIgIk
Thursday, March 1, 2012
The Computer Vision Industry, arranged by Prof. David Lowe
http://www.cs.ubc.ca/~lowe/vision.html
http://www.evolution.com/
This link is very useful for learning about the computer vision industry.
I should check this website frequently to learn what is going on in this industry, and think about what I can do in it.
Other interesting things:
http://www.michaelbach.de/ot/index.html
Friday, February 24, 2012
Talk with Prof. Jana about SIFT features
SIFT features have two parts: location and scale. (Figure out the details about them later).
Now the idea is to replace the scale part when combining with the depth data: how to decide on a scale selection mechanism consistent with the depth data.
Take a reference to the NYU paper presented by Reza, which takes three different scales.
See the SIFT tutorial on vlfeat.org. Think about how to customize SIFT: location, scale, and orientation.
About the surfel:
Know how to get the surfel; do not go into the rendering part.
GICP algorithm developed by Alex:
search how to run it on the MATLAB platform.
run a shell script from MATLAB.
write data to png file...NOT SOLVED
Wednesday, February 22, 2012
Kinect Sensor Programming
Good resources online:
http://graphics.stanford.edu/~mdfisher/Kinect.html
Notes:
Consists of:
The Kinect is an attachment for the Xbox 360 that combines four microphones, a standard RGB camera, a depth camera, and a motorized tilt.
Raw Sensor Value:
The raw sensor values returned by the Kinect's depth sensor are not directly proportional to the depth. Instead, they scale with the inverse of the depth. People have done relatively accurate studies to determine these coefficients with high accuracy, see the ROS kinect_node page and work by Nicolas Burrus. Both of these are excellent resources for working with the depth and color cameras.
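For reference, one commonly quoted inverse fit of this kind (the coefficients below are the approximate values circulated in the ROS/Burrus calibration work; treat them as a sketch, not ground truth for your particular device):

```shell
# convert a raw 11-bit Kinect disparity value to depth in meters:
# depth = 1 / (raw * -0.0030711016 + 3.3309495161)
raw=600
awk -v r="$raw" 'BEGIN { printf "%.3f\n", 1 / (r * -0.0030711016 + 3.3309495161) }'
# a raw value of 600 comes out to roughly 0.67 m
```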
Calibration:
The Kinect reads infrared and color data with different cameras. This makes it very challenging to determine the color of a given depth pixel or the depth of a given color pixel. Doing this requires knowing the intrinsic parameters of both cameras (they are similar, but not identical) and knowing the extrinsic mapping between them. Doing this correctly involves working with things like OpenCV, which can be kind of slow for realtime applications and can be a frustrating library to interact with in general. This calibration is also challenging because the depth camera can't see simple checkerboard patterns - it needs a checkerboard pattern that also contains regular and calibrated depth disparities. Fortunately Nicolas Burrus posted the calibration for his Kinect color and depth cameras here, which gave pretty good results for my Kinect, although there was still some noticeable misalignment.
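The intrinsics/extrinsics pipeline described above is the standard two-camera mapping. In symbols (the K are the intrinsic matrices, R and t the extrinsic rotation and translation between the cameras):

```latex
P_{3D} = Z(u,v)\, K_{depth}^{-1}
\begin{pmatrix} u \\ v \\ 1 \end{pmatrix},
\qquad
\begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix}
\sim K_{rgb}\,\bigl( R\, P_{3D} + t \bigr)
```

Here (u, v) is a depth pixel with depth Z(u, v), and (u', v') is its location in the color image; this is the mapping the checkerboard calibration has to recover.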
Tuesday, February 21, 2012
Install Openkinect on Ubuntu
1. Follow the link: http://acberg.com/kinect/
2. After step 1, almost everything has been done. But we are not so lucky. If you want to run the test.m written by Alex, some changes must be made.
First, Alex's code is based on the old version of libfreenect (0.0.1), not the one you just installed (1.0.0), so you need to install the old version even though it has some bugs. Otherwise, you would have to modify Alex's code, which would be a big job. The old version can be downloaded from here:
https://launchpad.net/ubuntu/+source/libfreenect/1:0.0.1+20101211+2-1
Also, you need to make some changes to Alex's test.m to mex the MATLAB function.
mex -I/usr/local/include/libfreenect ...
-I/usr/local/include /usr/local/lib/libfreenect.a ...
/usr/lib/libusb-1.0.a ...
kinect_mex.cc -lpthread -lrt
if you encounter the following error:
/usr/local/lib/libusb-1.0.a(libusb_1_0_la-io.o): In function `libusb_try_lock_events':/home/tpair/libusb-1.0.2/libusb/io.c:1254: undefined reference to `pthread_mutex_trylock'
/usr/local/lib/libusb-1.0.a(libusb_1_0_la-linux_usbfs.o): In function `op_clock_gettime':os/linux_usbfs.c:2011: undefined reference to `clock_gettime'
:os/linux_usbfs.c:2013: undefined reference to `clock_gettime'
solution: Try using the option "-lpthread" to get the pthread stuff, and "-lrt" to get clock_gettime().
Second, if you encounter an error like "Failed to claim the usb interface", you need to run the following command in the terminal.
$>rmmod gspca_kinect
You need to do this whenever you restart your system.
Third, you also need to do the following:(http://openkinect.org/wiki/Getting_Started)
Also make a file with rules for the Linux device manager:
sudo nano /etc/udev/rules.d/51-kinect.rules
Copy and paste:
# ATTR{product}=="Xbox NUI Motor"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"
# ATTR{product}=="Xbox NUI Audio"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"
# ATTR{product}=="Xbox NUI Camera"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"
Be sure to log out and back in.
Be sure the created file's name begins with a number larger than those of the existing rules files and not conflicting with others.
Friday, February 10, 2012
vision links and codes
http://www.cs.washington.edu/education/courses/cse590ss/01wi/
RANSAC Implementation
http://www.csd.uwo.ca/~iali24/Code.html
http://www.mrpt.org/RANSAC_C++_examples
http://isiswiki.georgetown.edu/zivy/#software
http://vision.ece.ucsb.edu/~zuliani/Research/RANSAC/RANSAC.shtml
http://vision.ece.ucsb.edu/~zuliani/Research/RANSAC/docs/RANSAC4Dummies.pdf
http://www.csse.uwa.edu.au/~pk/research/matlabfns/#match
Wednesday, February 8, 2012
Tuesday, February 7, 2012
Notes for CS682_Lecture 2
Aperture(光圈):
In optics, an aperture is a hole or an opening through which light travels. More specifically, the aperture of an optical system is the opening that determines the cone angle of a bundle of rays that come to a focus in the image plane.
Diffraction:
Diffraction refers to various phenomena which occur when a wave encounters an obstacle. Italian scientist Francesco Maria Grimaldi coined the word "diffraction" and was the first to record accurate observations of the phenomenon in 1665.[2][3] In classical physics, the diffraction phenomenon is described as the apparent bending of waves around small obstacles and the spreading out of waves past small openings.
Thin Lens Formula:
1/D' + 1/D = 1/f
A point satisfying the thin lens formula is in focus.
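A quick worked example of the formula with assumed numbers (f = 50 mm, object at D = 1000 mm):

```latex
\frac{1}{D'} = \frac{1}{f} - \frac{1}{D}
             = \frac{1}{50} - \frac{1}{1000}
             = \frac{19}{1000}
\quad\Rightarrow\quad
D' = \frac{1000}{19} \approx 52.6\ \text{mm}
```

So the image plane must sit about 52.6 mm behind the lens for that point to be in focus; points at other depths form their sharp image at a different D'.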
Retina(视网膜):
The vertebrate retina (from Latin rēte, meaning "net") is a light-sensitive tissue lining the inner surface of the eye. The optics of the eye create an image of the visual world on the retina, which serves much the same function as the film in a camera.
Chromatic Aberration(色差):
In optics, chromatic aberration (CA, also called achromatism or chromatic distortion) is a type of distortion in which there is a failure of a lens to focus all colors to the same convergence point. It occurs because lenses have a different refractive index for different wavelengths of light (the dispersion of the lens). The refractive index decreases with increasing wavelength.
Spherical Aberration:
Spherical aberration is an optical effect observed in an optical device (lens, mirror, etc.) that occurs due to the increased refraction of light rays when they strike a lens or a reflection of light rays when they strike a mirror near its edge, in comparison with those that strike nearer the centre. It signifies a deviation of the device from the norm, i.e., it results in an imperfection of the produced image.
Sunday, February 5, 2012
Saturday, February 4, 2012
OpenGL Setup
How to use OpenGL:
http://cacs.usc.edu/education/cs596/OGL_Setup.pdf
How-to: Successfully Install Kinect on Windows (OpenNI and NITE)
http://www.codeproject.com/Articles/148251/How-to-Successfully-Install-Kinect-on-Windows-Open
http://kheresy.wordpress.com/2010/12/25/use_kinect_on_windows/ (in Chinese)
Be careful when you compile OpenCV with CMake, especially the last one. The directory is not the binary directory within OpenNI, but rather the directory within PrimeSensor:
set(OPENNI_LIB_DESCR "Path to the directory of OpenNI libraries" CACHE INTERNAL "Description" )
set(OPENNI_INCLUDE_DESCR "Path to the directory of OpenNI includes" CACHE INTERNAL "Description" )
set(OPENNI_PRIME_SENSOR_MODULE_BIN_DESCR "Path to the directory of PrimeSensor Module binaries" CACHE INTERNAL "Description" )
Friday, February 3, 2012
RGBD Data set
1. RGB-D Object Dataset
http://www.cs.washington.edu/rgbd-dataset/software.html
2. B3DO: Berkeley 3-D Object Dataset
http://kinectdata.com/
3. Kinect with ground truth from Vicon (ETH)
http://www.asl.ethz.ch/research/datasets
4.
OpenNI
Introducing OpenNI
The OpenNI organization is an industry-led, not-for-profit organization formed to certify and promote the compatibility and interoperability of Natural Interaction (NI) devices, applications and middleware.
As a first step towards this goal, the organization has made available an open source framework – the OpenNI framework – which provides an application programming interface (API) for writing applications utilizing natural interaction. This API covers communication with both low level devices (e.g. vision and audio sensors), as well as high-level middleware solutions (e.g. for visual tracking using computer vision).
On this website you will be able to download the open source OpenNI framework, and get information on third-party hardware and middleware solutions which are OpenNI compliant.
More? http://www.openni.org/
PCL - Point Cloud Library
What is PCL?
The Point Cloud Library (or PCL) is a large scale, open project [1] for 3D point cloud processing. The PCL framework contains numerous state-of-the-art algorithms including filtering, feature estimation, surface reconstruction, registration, model fitting and segmentation. These algorithms can be used, for example, to filter outliers from noisy data, stitch 3D point clouds together, segment relevant parts of a scene, extract keypoints and compute descriptors to recognize objects in the world based on their geometric appearance, and create surfaces from point clouds and visualize them -- to name a few. PCL is released under the terms of the BSD license and is open source software. It is free for commercial and research use.
PCL is cross-platform, and has been successfully compiled and deployed on Linux, MacOS, Windows, and Android. To simplify development, PCL is split into a series of smaller code libraries, that can be compiled separately. This modularity is important for distributing PCL on platforms with reduced computational or size constraints (for more information about each module see the documentation page). Another way to think about PCL is as a graph of code libraries, similar to the Boost set of C++ libraries.
Want to learn more? Follow this link:
http://pointclouds.org/about.html
The PCD (Point Cloud Data) file format
This document describes the PCD (Point Cloud Data) file format, and the way it is used inside the Point Cloud Library (PCL).
Why a new file format?
The PCD file format is not meant to reinvent the wheel, but rather to complement existing file formats that for one reason or another did not/do not support some of the extensions that PCL brings to n-D point cloud processing. PCD is not the first file type to support 3D point cloud data. The computer graphics and computational geometry communities in particular have created numerous formats to describe arbitrary polygons and point clouds acquired using laser scanners. Some of these formats include:
- PLY - a polygon file format, developed at Stanford University by Turk et al
- STL - a file format native to the stereolithography CAD software created by 3D Systems
- OBJ - a geometry definition file format first developed by Wavefront Technologies
- X3D - the ISO standard XML-based file format for representing 3D computer graphics data
- and many others
PCD versions
PCD file formats might have different revision numbers, prior to the release of Point Cloud Library (PCL) version 1.0. These are numbered with PCD_Vx (e.g., PCD_V5, PCD_V6, PCD_V7, etc.) and represent version numbers 0.x for the PCD file. The official entry point for the PCD file format in PCL, however, should be version 0.7 (PCD_V7).
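For concreteness, here is a minimal ASCII .pcd file in the V7 layout; the three points and the plain x/y/z fields are an invented example, not from the PCL docs:

```
# .PCD v.7 - Point Cloud Data file format
VERSION .7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 3
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 3
DATA ascii
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
```

The header declares the per-point fields and their binary sizes; WIDTH x HEIGHT must equal POINTS (HEIGHT > 1 marks an organized, image-like cloud).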
Want to learn more, click here:
Wednesday, February 1, 2012
SIFT Features
The website talking about Affine Covariant Features:
http://www.robots.ox.ac.uk/~vgg/research/affine/index.html#publications
SiftGPU: A GPU Implementation of Scale Invariant Feature Transform (SIFT):
http://cs.unc.edu/~ccwu/siftgpu/
Indoor Scene Segmentation using a Structured Light Sensor
http://cs.nyu.edu/~silberman/papers/indoor_seg_struct_light.pdf
Pre-processing:
1) remove holes in the depth image <- filter the image by the cross-bilateral filter of pairs
2) rotate the RGB image, depth map, and labels to eliminate any pitch and roll <- 3-axis accelerometer provided by Kinect
There is some offset between the RGB image and the depth image provided by Kinect, so calibration is carried out to obtain precise spatial alignment between the depth and RGB images.
One method: using a set of checkerboard images in conjunction with the calibration tool of Burrus. This also provided the homography between the two cameras.
In this paper, the class transition penalty is very simple, which is as follows:
In some applications the penalty can be learned from a training set, and different penalties should be assigned to different class transitions.
Tuesday, January 31, 2012
OpenCV Installation
1. Install OpenCV with Code:Blocks
1) Follow the instructions here: http://opencv.willowgarage.com/wiki/InstallGuide
If you download OpenCV 2.3.1, extract the files to a folder. This folder already contains compiled files for MinGW, VS9, and VS10, so you should not need to compile again with CMake. (But these files could not be used successfully, so you need to compile again with CMake after all.)
2) configure settings for Code::Blocks
http://opencv.willowgarage.com/wiki/MinGW
After you finish compiling with CMake, do not delete the cache; otherwise you cannot run mingw32-make in the terminal successfully.
When I finished all the steps above, I encountered some problems when compiling. (All these problems went away when I compiled it myself.)
1) libgcc_s_dw2-1.dll and libstdc++-6.dll are missing.
So, I downloaded these two files online and put them in the compiler directory of Code::Blocks, that is, D:\Program Files\CodeBlocks\MinGW\bin (the path on my computer).
2) the program always crashed when calling the cvNamedWindow function
2. Install OpenCV with Devcpp
3. Install OpenCV with Microsoft Visual Studio 2008
The steps are similar to those mentioned above.
The bug that frustrated me for a long time: you must put the test image in the correct location.
If you have problems, refer to the link below:
http://blog.csdn.net/moc062066/article/details/6676117
Monday, January 30, 2012
Meet with Prof. Jana
1. The problem I will focus on: CHANGE DETECTION.
2. Assignments:
1) Read the paper: Toward object discovery and modeling via 3-D scene comparison
http://www.cs.washington.edu/homes/xren/publication/herbst_icra11_scene_differencing.pdf
2) Data set for the paper above
http://www.cs.washington.edu/ai/Mobile_Robotics//projects/object-discovery/
3) Machine learning class
Search for the YouTube lectures, look at the assignments in the syllabus, and try to do them yourself.
http://cs229.stanford.edu/schedule.html
Link to Video Lecture: http://academicearth.org/courses/machine-learning
- Introduction (1 class) Basic concepts.
- Supervised learning. (7 classes) Supervised learning setup. LMS.
Logistic regression. Perceptron. Exponential family.
Generative learning algorithms. Gaussian discriminant analysis. Naive Bayes.
Support vector machines. (Understand it as a whole, the inputs and outputs, not every detail at this stage)
Model selection and feature selection.
Ensemble methods: Bagging, boosting.
Evaluating and debugging learning algorithms.
- Learning theory. (3 classes) Bias/variance tradeoff. Union and Chernoff/Hoeffding bounds.
VC dimension. Worst case (online) learning.
Practical advice on how to use learning algorithms.
- Unsupervised learning. (5 classes) Clustering. K-means.
EM. Mixture of Gaussians.
Factor analysis.
PCA (Principal components analysis).
ICA (Independent components analysis).
- Reinforcement learning and control. (4 classes) MDPs. Bellman equations.
Value iteration and policy iteration.
Linear quadratic regulation (LQR). LQG.
Q-learning. Value function approximation.
Policy search. Reinforce. POMDPs.
4) Geometric Context from Single Image - software - look at the code for generating labels of regions.