Featured

[Paper published] Check out our new paper on image classification, with great results using only small sets of training data

Do you have great ideas for using machine learning but are held back by not having enough training (image) data? Check out our newly accepted KDD workshop paper for a novel solution.

A new KDD 2019 MLG (the 15th International Workshop on Mining and Learning with Graphs) workshop paper on computer vision and image analysis, led by Liping, has been accepted:

Image classification using topological features automatically extracted from graph representation of images

A PDF of the paper can be found at the Workshop website or HERE.

BibTeX Entry:

@inproceedings{mlg2019_7,
title={Image classification using topological features automatically extracted from graph representation of images},
author={Yang, Liping and Oyen, Diane and Wohlberg, Brendt},
booktitle={Proceedings of the 15th International Workshop on Mining and Learning with Graphs (MLG)},
year={2019}
}

 

Check out this page for more of Liping’s publications.

 

Featured

[Paper published] Check out our new (deep) machine learning paper for flood detection

New machine/deep learning paper led by Liping: Analysis of remote sensing imagery for disaster assessment using deep learning: a case study of flooding event

A full-text view-only version of the paper can be found via the link: https://rdcu.be/bpUvx.

 

Check out this page for more of Liping’s publications.

 

Featured

[Paper published] Check out our new machine/deep learning paper

New machine/deep learning paper led by Liping: Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review https://t.co/kSF3O71tbD

Through the synthesis of multiple rapidly developing research areas, this systematic review is relevant to multiple research domains, including but not limited to GIScience, computer science, data science, information science, visual analytics, information visualization, image analysis, and computational linguistics, as well as any domain that needs to leverage machine learning and deep learning.

Check out this page for more of Liping’s publications.

 

Featured

Install Keras with GPU TensorFlow as backend on Ubuntu 16.04

This post introduces how to install Keras with TensorFlow as its backend on Ubuntu Server 16.04 LTS with CUDA 8 and an NVIDIA TITAN X (Pascal) GPU; it should also work on Ubuntu Desktop 16.04 LTS.

We gratefully acknowledge the support of NVIDIA Corporation, which awarded one Titan X Pascal GPU used for our machine learning and deep learning research.

Keras is a great choice for learning machine learning and deep learning. It has an easy syntax and can use Google TensorFlow, Microsoft CNTK, or Theano as its backend. Keras is essentially a wrapper around more complex numerical computation engines such as TensorFlow and Theano.

Keras abstracts away much of the complexity of building a deep neural network, leaving us with a simple, pleasant, easy-to-use interface to rapidly build, test, and deploy deep learning architectures.
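
For example, here is a minimal sketch (not from the original post; the data and layer sizes are made up for illustration) of a small fully connected network, showing how little code a working Keras model needs:

# A minimal sketch: a tiny fully connected network on toy random data.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.rand(100, 8)                # 100 samples, 8 features (toy data)
y = np.random.randint(2, size=(100, 1))   # binary labels

model = Sequential()
model.add(Dense(16, activation='relu', input_dim=8))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=5, batch_size=10)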

TensorFlow is extremely flexible, allowing you to deploy network computation to multiple CPUs, GPUs, servers, or even mobile systems without having to change a single line of code.

This makes TensorFlow an excellent choice for training distributed deep learning networks in an architecture agnostic way.
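
As a quick illustration, here is a sketch using the TensorFlow 1.x API (the version this post installs; the example itself is not from the original post) that pins operations to a specific device:

# Sketch (TensorFlow 1.x API): pin ops to a device without changing model code.
import tensorflow as tf

with tf.device('/gpu:0'):   # change to '/cpu:0' on a CPU-only install
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    c = a + b

# allow_soft_placement falls back to an available device if '/gpu:0' is absent
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
print(sess.run(c))  # [ 4.  6.]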

Now let’s start installing Keras with TensorFlow as its backend.

1: Set up a Python virtual environment

Check my post for more details on how to set up a Python virtual environment and why it is better to install Python libraries in one.

  • Install pip and Virtualenv for Python 2 and Python 3:
$ sudo apt-get update
$ sudo apt-get install openjdk-8-jdk git python-dev python3-dev python-numpy python3-numpy build-essential python-pip python3-pip python-virtualenv swig python-wheel libcurl3-dev
  • Create a Virtualenv environment in your home directory for Python 2 and Python 3:
#for python 2
virtualenv --system-site-packages -p python ~/keras-tf-venv

# for python 3 
virtualenv --system-site-packages -p python3 ~/keras-tf-venv3

(Note: To delete a virtual environment, just delete its folder. In our case, that would be rm -rf ~/keras-tf-venv or rm -rf ~/keras-tf-venv3.)

2: Update & Install NVIDIA Drivers (skip this if you do not need the GPU version of TensorFlow)

Check another post I wrote (steps 1-4 in that post) for detailed instructions on how to update and install the NVIDIA driver, CUDA 8.0, and cuDNN, as required for the GPU version of TensorFlow.

Note: If you have an old version of the NVIDIA driver installed, use the following steps to remove it before installing the new driver.

Step 1: Remove the older NVIDIA driver
sudo apt-get purge nvidia*

Step 2: Reboot the system

Test whether it is removed:

$ sudo nvidia-smi
sudo: nvidia-smi: command not found  # this means the old driver was uninstalled

(Note: If you have an older version of CUDA and cuDNN installed, see this post for uninstallation instructions: How to uninstall CUDA Toolkit and cuDNN under Linux? (02/16/2017) (pdf). The CUDA installer itself also prints uninstall instructions when you install; see the Install CUDA 8.0 output below.)

(Note: I tried installing the latest NVIDIA driver, the latest CUDA, and the latest cuDNN (i.e., v6.0), but TensorFlow did not work with them. After some testing, I found that the combination of NVIDIA driver 375.82, cuda_8.0.61_375.26_linux.run, and cudnn-8.0-linux-x64-v5.1.tgz works.)

Install CUDA 8.0 (sample installer output below):

Toolkit: Installed in /usr/local/cuda-8.0
Samples: Installed in /home/liping, but missing recommended libraries

Please make sure that
– PATH includes /usr/local/cuda-8.0/bin
– LD_LIBRARY_PATH includes /usr/local/cuda-8.0/lib64, or, add /usr/local/cuda-8.0/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-8.0/bin

Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-8.0/doc/pdf for detailed information on setting up CUDA.

***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 361.00 is required for CUDA 8.0 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
sudo <CudaInstaller>.run -silent -driver

Logfile is /tmp/cuda_install_3813.log
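
After the installer finishes, make sure the two paths mentioned in its output are set in your environment. A minimal sketch of the lines to append to ~/.bashrc (paths taken from the installer output above):

$ echo 'export PATH=/usr/local/cuda-8.0/bin:$PATH' >> ~/.bashrc
$ echo 'export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
$ source ~/.bashrc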

 

3: Install TensorFlow

Before installing TensorFlow and Keras, be sure to activate your python virtual environment first.

# for python 2
$ source ~/keras-tf-venv/bin/activate  # If using bash
(keras-tf-venv)$  # Your prompt should change

# for python 3
$ source ~/keras-tf-venv3/bin/activate  # If using bash
(keras-tf-venv3)$  # Your prompt should change

(keras-tf-venv)$ pip install --upgrade tensorflow       # Python 2.7; CPU support (no GPU support)
(keras-tf-venv3)$ pip3 install --upgrade tensorflow     # Python 3.n; CPU support (no GPU support)
(keras-tf-venv)$ pip install --upgrade tensorflow-gpu   # Python 2.7; GPU support
(keras-tf-venv3)$ pip3 install --upgrade tensorflow-gpu # Python 3.n; GPU support

Note: If the commands for installing TensorFlow given above failed (typically because you invoked a pip version lower than 8.1), install TensorFlow in the active virtualenv environment by issuing a command of the following format:

 (keras-tf-venv)$ pip install --upgrade TF_PYTHON_URL   # Python 2.7
 (keras-tf-venv3)$ pip3 install --upgrade TF_PYTHON_URL  # Python 3.N

where TF_PYTHON_URL identifies the URL of the TensorFlow Python package. The appropriate value of TF_PYTHON_URL depends on the operating system, Python version, and GPU support. Find the appropriate value for your system here. For example, if you are installing TensorFlow for Linux, Python 2.7, and CPU-only support, issue the command below to install TensorFlow in the active virtualenv environment. (See below for examples; note that you should check here for the latest version for your system.)

#for python 2.7 -- CPU only
(keras-tf-venv)$ pip install --upgrade \
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp27-none-linux_x86_64.whl

#for python 2.7 -- GPU support
(keras-tf-venv)$ pip install --upgrade \
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.1.0-cp27-none-linux_x86_64.whl

# for python 3.5 -- CPU only
(keras-tf-venv3)$ pip3 install --upgrade \
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp35-cp35m-linux_x86_64.whl

# for python 3.5 -- GPU support
(keras-tf-venv3)$ pip3 install --upgrade \ 
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.1.0-cp35-cp35m-linux_x86_64.whl
  • Validate your TensorFlow installation. (I installed the GPU version of TensorFlow, so if you installed the CPU-only version, your output will be slightly different.)

#For Python 2.7

(keras-tf-venv) :~$ python
Python 2.7.12 (default, Nov 19 2016, 06:48:10) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-08-01 14:28:31.257054: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 14:28:31.257090: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 14:28:31.257103: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 14:28:31.257114: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 14:28:31.257128: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 14:28:32.253475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties: 
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:03:00.0
Total memory: 11.90GiB
Free memory: 11.75GiB
2017-08-01 14:28:32.253512: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 
2017-08-01 14:28:32.253519: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y 
2017-08-01 14:28:32.253533: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:03:00.0)
>>> print(sess.run(hello))
Hello, TensorFlow!
>>> exit()
(keras-tf-venv) :~$ 

#for python 3

(keras-tf-venv3) :~$ python3
Python 3.5.2 (default, Nov 17 2016, 17:05:23) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-08-01 13:54:30.458376: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 13:54:30.458413: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 13:54:30.458425: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 13:54:30.458436: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 13:54:30.458448: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 13:54:31.420661: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties: 
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:03:00.0
Total memory: 11.90GiB
Free memory: 11.75GiB
2017-08-01 13:54:31.420692: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 
2017-08-01 13:54:31.420699: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y 
2017-08-01 13:54:31.420712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:03:00.0)
>>> print(sess.run(hello))
b'Hello, TensorFlow!'
>>> exit() 
(keras-tf-venv3) :~$

If you see the output below at the end of the session, your TensorFlow was installed correctly.

Hello, TensorFlow!

4: Install Keras

(Note: Be sure that you have activated your Python virtual environment before you install Keras.)

Installing Keras is even easier than installing TensorFlow.

First, let’s install a few dependencies:

#for python 2
$ pip install numpy scipy
$ pip install scikit-learn
$ pip install pillow
$ pip install h5py

#for python 3
$ pip3 install numpy scipy
$ pip3 install scikit-learn
$ pip3 install pillow
$ pip3 install h5py
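
With the dependencies in place, install Keras itself (still inside the activated virtual environment):

#for python 2
$ pip install keras

#for python 3
$ pip3 install keras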

5: Verify that your keras.json file is configured correctly

Let’s now check the contents of our keras.json configuration file. You can find this file at ~/.keras/keras.json.

Use nano to open and edit the file:

$ nano ~/.keras/keras.json

The default values should be something like this:

{
 "epsilon": 1e-07,
 "backend": "tensorflow",
 "floatx": "float32",
 "image_data_format": "channels_last"
}

Can’t find your keras.json file?

On most systems the keras.json file (and associated subdirectories) will not be created until you open up a Python shell and directly import the keras package itself.

If you find that the ~/.keras/keras.json file does not exist on your system, simply open up a shell, activate your Python virtual environment (if you are using one), and then import Keras:

#for python 2
$ python
>>> import keras
>>> quit()

#for python 3
$ python3
>>> import keras
>>> quit()

From there, you should see that your keras.json file now exists on your local disk.

If you see any errors when importing keras, go back to the top of step 5 and ensure your keras.json configuration file has been properly updated.

6: Test the Keras + TensorFlow installation

To verify that Keras + TensorFlow have been installed, simply activate the keras-tf-venv (or keras-tf-venv3) environment using the source command as above, open up a Python shell, and import keras:

Specifically, you should see the text Using TensorFlow backend displayed when importing Keras, which confirms that Keras has been installed with the TensorFlow backend.
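
For example, a quick check looks like this (a sketch; your prompt and versions may differ):

(keras-tf-venv3) :~$ python3
>>> import keras
Using TensorFlow backend.
>>> quit()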

Note: each time you want to use Keras, you need to activate the virtual environment into which it was installed; when you are done, deactivate the environment.

# for python 2
(keras-tf-venv)$ deactivate
$  # Your prompt should change back

#for python 3
(keras-tf-venv3)$ deactivate
$  # Your prompt should change back

Note: To delete a virtual environment, just delete its folder. (In this case, it would be rm -rf ~/keras-tf-venv or rm -rf ~/keras-tf-venv3.)

References:

Installing Keras with TensorFlow backend (November 14, 2016, in Deep Learning, Libraries, Tutorials)

Installing keras makes tensorflow can’t find GPU

Installing Nvidia, Cuda, CuDNN, TensorFlow and Keras

https://www.tensorflow.org/install/install_linux

Keras as a simplified interface to TensorFlow: tutorial

I: Calling Keras layers on TensorFlow tensors

II: Using Keras models with TensorFlow

III: Multi-GPU and distributed training

IV: Exporting a model with TensorFlow-serving

Featured

Install Node.js on Ubuntu 16.04 LTS

This post provides instructions on how to install Node.js on Ubuntu 16.04 LTS. See this post for Node.js resources. (Node.js official GitHub repo.)

npm (Node Package Manager) is the default package manager for the Node.js JavaScript runtime environment. The npm registry hosts thousands of free Node packages, and npm is generally installed on your computer when you install Node.js.

There are several ways to install Node.js on Ubuntu:

  • Method #1 (our choice in this tutorial): Install Node.js with Node Version Manager (NVM) to manage multiple active Node.js versions

Using nvm, we can install multiple, self-contained versions of Node.js, which will allow us to control our environment, get access to the newest versions of Node.js, and also keep previous releases that our applications may depend on. (nvm is analogous to Virtualenv in Python, if you are familiar with it, which lets pip install multiple versions of the same Python library into separate “virtual folders”.)

This is the method we will cover later in this tutorial.

  • Method #2: Install the bundled distro-stable version of Node.js (version 4.2.6) – it is very simple to install, just one or two commands.

Ubuntu 16.04 contains a version of Node.js in its default repositories that can be used to easily provide a consistent experience across multiple systems. At the time of writing, the version in the repositories is version 4.2.6. This will not be the latest version, but it should be quite stable, and should be sufficient for quick experimentation with the language.

This tutorial uses the Node Version Manager (nvm) method because it is much more flexible.

See below for the step-by-step instructions. (Check out the reading list below if you need the install instructions for the other methods listed above.)

Step 0 (before we get started): Remove old Node packages to avoid conflicts

Open a terminal (Ctrl + Alt + T), and type the following command:

$ dpkg --get-selections | grep node

# If it says install in the right column, Node is on your system:
#ax25-node                                       install

#node                                            install
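
If either package shows as install, remove it before proceeding. A sketch, using the package names from the sample output above (adjust to whatever dpkg reports on your system):

$ sudo apt-get remove --purge node ax25-node
$ sudo apt-get autoremove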

Step 1: Install prerequisite packages

We’ll need to get the software packages from our Ubuntu repositories that will allow us to build source packages. The nvm script will leverage these tools to build the necessary components.

First, we need to make sure we have a C++ compiler. Open a terminal window (Ctrl + Alt + T) and install the build-essential and libssl-dev packages. By default, Ubuntu does not come with these tools, but they can be installed with the following commands.

$ sudo apt-get update

$ sudo apt-get install build-essential libssl-dev

Step 2: Install nvm

Once the prerequisite packages are installed, we can download the nvm (Node Version Manager) installation script using cURL. (Note: to get the link to the latest version of the install script, scroll down to “Install script” on the nvm GitHub page.)

$ curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh -o install_nvm.sh

Inspect the installation script with nano:

$ nano install_nvm.sh

# Note that we DO NOT need to change anything in the opened nano text editor window; we are only reviewing what the script will do.
# Use Ctrl+X to close the file.

Run the script with bash (run the following command in your terminal):

$ bash install_nvm.sh

The script installs the software into a subdirectory of our home directory at ~/.nvm. It also adds the necessary lines to our ~/.profile file so that nvm is available in future sessions.

To have access to the nvm functionality, we need to source the ~/.profile file so that our current session knows about the changes:

$ source ~/.profile

Now that we have nvm installed, we can install isolated Node.js versions.
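
You can quickly confirm that nvm is available in the current session:

$ nvm --version
# 0.33.2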

Step 3: Install Node.js

The following command will tell us which versions of Node.js are available for us to install:

$ nvm ls-remote
Output
...
    v6.10.3 (Latest LTS: Boron)
 v7.0.0
 v7.1.0
 v7.2.0
 v7.2.1
 v7.3.0
 v7.4.0
 v7.5.0
 v7.6.0
 v7.7.0
 v7.7.1
 v7.7.2
 v7.7.3
 v7.7.4
 v7.8.0
 v7.9.0
 v7.10.0

The newest version at the time of writing is v7.10.0. We can install it with the following command:

$ nvm install 7.10.0

By default, nvm will switch to use the most recently installed version. We can explicitly tell nvm to use the version we just installed with the following command:

$ nvm use 7.10.0

When we install Node.js using nvm, the executable is called node (NOT nodejs, which you may see in other tutorials). We can check the currently used Node and npm versions with the following commands:

$ node -v  
# OR 
$ node --version

# Output
# v7.10.0


$ npm -v
# OR 
$ npm --version

# output
# 4.2.0

Step 4: Use nvm to manage different installed Node.js versions

If you have multiple Node.js versions installed, you can list them with the following command:

$ nvm ls

To set a default Node.js version to be used in any new shell, use the nvm alias default command:

$ nvm alias default 7.10.0

# This version will be automatically selected when a new session spawns. You can also reference it by the alias like this:

$ nvm use default

To learn more about the options available with nvm, run the following command in your terminal:

$ nvm help

Step 5: Use npm to install Node.js modules

Each version of Node.js will keep track of its own packages and has npm available to manage these.

We can use npm to install packages into a Node.js project’s ./node_modules directory in the usual way. For example, for the express module:

$ npm install express

If you’d like to install it globally (i.e., making it available to other projects using the same Node.js version), you can add the -g flag:

$ npm install -g express

This will install the package in:

~/.nvm/node_version/lib/node_modules/package_name

Note that installing globally will allow us to run the commands from the command line, but we will have to link the package within a project in order to use it in that project:

$ npm link express
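
For example, in a hypothetical project directory (myapp is an assumed name), link the globally installed express and load it:

$ mkdir myapp && cd myapp
$ npm link express
$ node -e 'require("express"); console.log("express loaded")'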

 

References and further reading list:

This tutorial covers two methods:

Method #1: Install the bundled distro-specific Node.js version 4.2.6

Method #2: Install the latest version of Node.js (6.x or 7.x)

This post is very good — it covers the following ways to install Node.js:

Installing using the Official Repository
Installing using the Github Source Code Clone
Installing using Node Version Manager (NVM)

It covers how to install multiple Node.js versions with NVM, and also covers how to remove Node.js.

This tutorial covers:

  • How To Install the Distro-Stable Version for Ubuntu
  • How To Install Using a PPA
  • How To Install Using NVM

There are quite a few ways to get up and running with Node.js on your Ubuntu 16.04 server. Your circumstances will dictate which of the above methods is best for you. While the packaged version in Ubuntu’s repository is the easiest, the nvm method is definitely more flexible.

This is a pretty good tutorial as well. It covers four ways to install Node.js on Ubuntu, and the author recommends Option 1: Node Version Manager (nvm).


DIRA workshop at CVPR 2020 will take place on June 14!

If you are attending CVPR 2020, please consider attending the DIRA workshop that Dr. Liping Yang is the primary organizer of: a full-day workshop scheduled for June 14 (next Sunday)!

We have seven fantastic keynotes given by top computer vision and machine learning researchers from the USA and UK (including MIT, Stanford, Georgia Tech, IBM, Facebook, the University of Pittsburgh, and the University of Edinburgh). The detailed schedule can be found HERE.

[Job opening] PhD and Master positions in GIScience and GeoAI

Dr. Liping Yang is offering two funded, full-time Graduate Assistantships (one Ph.D. and one Master’s) in Geospatial Data Science and Geospatial Artificial Intelligence (GeoAI). Particular topics of interest include:

    • Novel methods for analyzing structured and unstructured geospatial data (such as text and images),
    • Information and image retrieval for geographic and historical data,
    • Exploratory search user interfaces powered by computer vision and machine learning,
    • Artificial intelligence for geographic knowledge discovery,
    • Spatial representation and reasoning.

Enthusiastic candidates with interests related to these topics are highly encouraged to apply; working experience with at least one programming language (Python, Java, C/C++, JavaScript, etc.) is a prerequisite for the positions, but the most important quality is a desire to do creative, original research at the intersection of GIScience, computer science, and mathematics.

Students working with Dr. Liping Yang will have great opportunities for summer internships and/or graduate assistantships at Los Alamos National Laboratory (LANL); after graduation (with the proper qualifications), they can be recommended for positions at LANL.

To apply, and for more Ph.D. and M.S. positions in GIScience and geography available in the Department of Geography and Environmental Studies at the University of New Mexico, please check out the PDF here.

Note the application deadlines for fall admissions:

MS:  February 1 (check here)

PhD:  January 15 (check here)

We look forward to reviewing your applications!

Liping Yang

Assistant Professor of Geographic Information Science 

Department of Geography and Environmental Studies

University of New Mexico

 

email: lipingyang@unm.edu

web: http://www.lipingyang.org/

research blog: http://deeplearning.lipingyang.org/

[Paper published] Novel representation and method for effective zigzag noise denoising

Annoyed by persistent zigzag noise that cannot be removed even after trying many existing denoising methods and techniques?

Check out our newly published ICCV 2019 SGRL paper (SGRL page on CVF) [a PDF of the paper can be found at the CVF website or HERE] for a novel image representation and method, along with algorithms built upon that representation, for effectively denoising the zigzag noise introduced by digitizing processes such as scanning. This type of noise is very common in scanned documents, as well as in some images, such as roads with worn road markings.


Check out this page for more of Liping’s publications.

 

[Job opening] Outstanding postdoc position in computer vision and machine learning

I recently received an exciting research grant in computer vision and machine learning. We have an outstanding postdoctoral research associate position. Check the link below for how to apply. We are looking forward to your application.

See HERE on LinkedIn or HERE at the LANL website (PDF here if it is not retrievable).


[Paper published] Check out our new computer vision and image analysis paper for skeleton extraction

A new CVPR 2019 workshop paper on computer vision and image analysis, led by Liping, has been published:

A Novel Algorithm for Skeleton Extraction From Images Using Topological Graph Analysis. 

A PDF of the paper can be found HERE. (Check HERE if it is not retrievable from http://openaccess.thecvf.com.) [Acceptance rate: 10/32 = 31.25%]

Yang, L. and Worboys, M. Generation of navigation graphs for indoor space. International Journal of Geographical Information Science, 29(10): 1737-1756, 2015. [Click here (PDF) to download a draft of this paper]

 

Check out this page for more of Liping’s publications.