How to Deploy a Django Application on RHEL 7 (May 23, 2017)
How To Set Up Django with Postgres, Nginx, and Gunicorn on CentOS 7 (March 18, 2015)
This post provides tutorials on how to deploy a Django application on a server running Ubuntu.
Nginx will face the outside world. It will serve media files (images, CSS, etc.) directly from the file system. However, it can’t talk directly to Django applications; it needs something that will run the application, feed it requests from the web, and return responses.
That's Gunicorn's job. Gunicorn will create a Unix socket, and serve responses to Nginx via the WSGI protocol – the socket passes data in both directions:
The outside world <-> Nginx <-> The socket <-> Gunicorn
All of this will live inside a virtualenv. Ever wondered why virtualenv is so useful when you develop Python applications? Keep reading and you will understand.
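To make the pieces concrete, here is a minimal sketch of creating a virtualenv and pointing Gunicorn at a Unix socket; myvenv, myproject, and the socket path are placeholder names, not values from any specific tutorial:

$ virtualenv myvenv                  # create an isolated Python environment
$ source myvenv/bin/activate         # activate it
$ pip install django gunicorn        # install Django and Gunicorn inside the virtualenv
$ gunicorn --bind unix:/tmp/myproject.sock myproject.wsgi:application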
References
I assume you have a server available on which you have root privileges. I am using a server running Debian 7, so everything here should also work on an Ubuntu server or another Debian-based distribution. If you're using an RPM-based distro (such as CentOS), you will need to replace the aptitude commands with their yum counterparts, and if you're using FreeBSD you can install the components from ports.
Is it possible to run Nginx and Apache at the same time on the same machine?
The answer is YES.
This post provides instructions on how to configure Apache and Nginx to work together on the same machine running Ubuntu.
Nginx and Apache are great and powerful web servers. However, they both have drawbacks: Apache uses up server memory, while Nginx (best used for static files) requires the help of php-fpm to process dynamic content.
Nginx is an excellent lightweight web server designed to serve high traffic, while Apache is another popular web server, serving more than half of all active websites in the world. The two can be combined to significant effect, with Nginx serving static content on the front end while Apache processes the back end. So let's look into how to configure Nginx to work with Apache side by side.
Apache and Nginx can definitely run simultaneously. The default configs will not allow them to start at the same time, because both will try to listen on the same port and the same IP. However, you can easily change the ports, the IPs, or both. There are various ways to run them: either one behind the other (usually Apache behind Nginx, since Nginx is the first entry point in the chain because it is faster for static resources, and Apache is then only invoked for dynamic rendering/processing) or simply side by side.
Set different ports for each server. That means you can leave port 80 for Nginx and assign Apache a different port.
Install and configure Nginx, which will serve as the front end of your site.
$ sudo apt-get install nginx
Once it is installed, configure the virtual host that will run on the front end. A few changes are required in the configuration file.
Open up the nginx configuration file
$ sudo nano /etc/nginx/sites-available/example
For example, you can tell Apache to listen on 127.0.0.1:8080 and instruct Nginx to reverse-proxy traffic to Apache while still serving static content itself.
Edit the values below (server names, log path, and document root) according to your server info.
server {
    listen 80;                      # Nginx faces the outside world on port 80
    server_name some.name another.dname;

    access_log /var/log/nginx/something-access.log;

    location / {
        # pass dynamic requests on to Apache on port 8080
        proxy_pass http://localhost:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # serve static files directly from disk
    location ~* ^.+\.(jpg|js|jpeg|png)$ {
        root /some/where/on/your/disks;
    }

    # put your static hosting config here.
}
Activate the virtual host.
$ sudo ln -s /etc/nginx/sites-available/example /etc/nginx/sites-enabled/example
Delete the default Nginx server block.
$ sudo rm /etc/nginx/sites-enabled/default
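Before moving on, it is a good idea to validate the new configuration and reload Nginx so the virtual host takes effect (standard commands on Debian/Ubuntu):

$ sudo nginx -t               # check the configuration for syntax errors
$ sudo service nginx reload   # reload Nginx with the new virtual host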
Install the back end, which is Apache.
$ sudo apt-get install apache2
Right after installation, Apache starts running on port 80 if Nginx is not running. Let's make Apache listen on a different port so that the two can work together.
Open Apache's ports.conf file using the command below:
$ sudo nano /etc/apache2/ports.conf
Look for the following line:

Listen 80

and change it to:

Listen 127.0.0.1:8080
Save and Exit.
Next, edit the default virtual host file in Apache.
The <VirtualHost> directive in this file is set to serve sites only on port 80.
$ sudo nano /etc/apache2/sites-available/000-default.conf
Look for the following line:

<VirtualHost *:80>

then change it to:

<VirtualHost 127.0.0.1:8080>
Save the file and reload Apache.
$ sudo service apache2 reload
Verify that Apache is now listening on 8080.
$ sudo netstat -tlpn
The output is shown below, with apache2 listening on 127.0.0.1:8080.

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1086/sshd
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      4678/apache2
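As an extra sanity check, you can send a request through each layer with curl. The first request below should be answered by Nginx on port 80; the second hits Apache directly on port 8080 (run it on the server itself, since Apache is bound to 127.0.0.1):

$ curl -I http://localhost/          # through Nginx
$ curl -I http://127.0.0.1:8080/     # directly to Apache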
Nginx is configured and running as the front web server on port 80, while Apache is configured to run on the back end on port 8080. Nginx proxies dynamic requests through to Apache while still serving static content itself.
The most important takeaway from this simple configuration is that Apache and Nginx can and do work together. A problem arises only when they both listen on the same port; by giving them different ports to listen on, the conflict is resolved.
To dynamically monitor NVIDIA GPU usage, here I introduce two methods:
method 1: use nvidia-smi
in your terminal, issue the following command:
$ watch -n 1 nvidia-smi
It will continually update the GPU usage info every second (change the 1 to 2, or to whatever interval in seconds you want the usage info to be refreshed at).
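If you prefer a plain scrolling log to the full-screen watch display, nvidia-smi also has a query mode that prints selected fields at a fixed interval (here every 1 second):

$ nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used,memory.total --format=csv -l 1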
method 2: use the open source monitoring program glances with its GPU monitoring plugin
in your terminal, issue the following command to install glances with its GPU monitoring plugin
$ sudo pip install 'glances[gpu]'   # quotes keep the shell from interpreting the brackets
to launch it, in your terminal, issue the following command:
$ sudo glances
Then you should see your GPU usage, among other things. glances also monitors the CPU, disk I/O, disk space, network, and a few other things.
For more commonly used Linux commands, check my other posts here and here.
This post provides some resources about Microsoft Azure Batch AI.
Introduction:
Batch AI is a managed service that enables data scientists and AI researchers to train AI and other machine learning models on clusters of Azure virtual machines, including VMs with GPU support. You describe the requirements of your job, where to find the inputs and store the outputs, and Batch AI handles the rest.
Hands-on tutorials:
Videos:
This post will walk you through how to run a Jupyter notebook script from the terminal with tmux (check here for my post about tmux usage).
When you are running Jupyter on a remote server or on cluster/cloud resources, there are situations where you would like Jupyter to keep running on the remote machine without termination after you shut down the laptop or desktop you used to access it. tmux will help with this.
In this post, we cover how to let your Jupyter notebook continue running on a remote server, without termination, via tmux.
Step 1: connect to your remote server with port forwarding
If you are not familiar with port forwarding, check Step 5-2 in my post here about setting up Jupyter notebook for how to access your remote server with port forwarding.
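For reference, the port-forwarding connection looks roughly like this, assuming Jupyter will use its default port 8888 (username and your-remote-server are placeholders):

$ ssh -L 8888:localhost:8888 username@your-remote-server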
Step 2: install tmux
check here for my post about tmux installation and usage
Step 3: install runipy python package
Check here for runipy installation and usage.
Step 4: in your terminal, type the following command; it will put you into a tmux window
$ tmux
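Two tmux basics you will need below: detach from a session (leaving it running) by pressing Ctrl-b and then d, and reattach to it later with:

$ tmux attach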
Step 5: Start jupyter notebook within your tmux session with the following command
$ jupyter notebook --no-browser
The --no-browser option prevents Jupyter from automatically opening a browser window.
Let this terminal stay running.
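If you want the notebook to use a specific port, e.g., to match the port you forwarded in Step 1, you can pin it explicitly (8888 is just the common default):

$ jupyter notebook --no-browser --port=8888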
Step 6: from your laptop, ssh to your remote server (does not need port forwarding this time)
Step 7: cd to the directory where the Jupyter notebook script you would like to run from the terminal is located
If you do not know what cd means and does, check my post for a list of commonly used Linux commands.
Step 8: use the following command to run your ipynb script (this will save the output of each cell back to the notebook file)
$ runipy -o MyNotebook.ipynb
To save the notebook output as a new notebook, run:
$ runipy MyNotebook.ipynb OutputNotebook.ipynb
If your ipynb script itself runs without any errors, it should now be running on the server.
Step 9: Things to pay attention to:
If you run the runipy command in a plain SSH session (outside tmux), closing that terminal will terminate the running ipynb, so run it inside a tmux session as well. Once it is running inside tmux, you can put your laptop to sleep or even shut it down; the tmux session will keep the ipynb running on your remote server and save the output into the notebook file.
References:
http://forums.fast.ai/t/ipython-notebook-on-a-remote-server-with-tmux/10044/2
This post provides instructions on how to check whether a Jupyter server is running from command line and kill if needed.
Normally, you can kill a Jupyter server from the same terminal window where you launched your Jupyter notebook by hitting CTRL + C and then typing y, to shut down the kernels of your Jupyter notebook.
But there are situations where you want to know whether a Jupyter notebook is still running on your remote server when the notebook was started from another machine (e.g., your office desktop) and you are now working from your laptop at home and want to check on it.
After you log in to the server where your Jupyter notebook was installed and is running, you can use the following command to list running notebook servers.
$ jupyter notebook list
You will see a list of running notebook servers in the terminal, if there are any.
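The output looks something like the following (the ports, tokens, and paths here are only placeholders):

Currently running servers:
http://localhost:8888/?token=<some-token> :: /home/username/notebooks
http://localhost:8889/?token=<some-token> :: /home/username/projects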
You can use the following command to stop a specific notebook server (identified by the port it is running on) that you would like to shut down.
$ jupyter notebook stop 8888
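Note that the stop subcommand only exists in relatively recent notebook releases. If it is not available in your version, a common fallback is to find the process listening on the port and kill it by its PID (shown in the lsof output):

$ lsof -n -i :8888 | grep LISTEN
$ kill <PID>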
P.S.:
Each server should start on a new port. jupyter notebook list works by reading a set of data files: each notebook server you run writes a file when it starts up and attempts to remove it when it shuts down. If you see different servers listed on the same port, that means some of them exited without successfully removing the file they created (an unexpected shutdown of the notebook, for example, would cause this to happen).
References:
https://github.com/jupyter/notebook/issues/1950
https://github.com/jupyter/notebook/issues/2844
Normally people run Jupyter notebooks via the browser, but in some situations we need to run one from the terminal, for example, when running the script takes a long time.
This post introduces how to run a Jupyter notebook script from the terminal.
Solution I:
runipy can do this. runipy will run all cells in a notebook; if an error occurs, the process will stop.
$ pip3 install runipy   # for Python 3.x
$ pip install runipy    # for Python 2.x
$ runipy MyNotebook.ipynb                        # run all cells, printing progress to the terminal
$ runipy -o MyNotebook.ipynb                     # run and save the output back into the same notebook
$ runipy MyNotebook.ipynb OutputNotebook.ipynb   # run and save the output as a new notebook
$ runipy MyNotebook.ipynb --html report.html     # run and also generate an HTML report
Solution II:
Recent versions of Jupyter come with the nbconvert command-line tool for notebook conversion, which allows us to do this without any extra packages.
Just go to your terminal and type:
$ jupyter nbconvert --to notebook --execute mynotebook.ipynb --output mynotebook.ipynb
This will open the notebook, execute it, capture the new output, and save the result in mynotebook.ipynb (because of the --output flag; without it, nbconvert would write to mynotebook.nbconvert.ipynb by default). By default, nbconvert will abort conversion if any exception occurs during execution of a cell. If you specify --allow-errors (in addition to the --execute flag), then conversion will continue and the output from any exception will be included in the cell output.
If you meet this error:

raise exception("Cell execution timed out")

you can increase the cell execution timeout (here, 180 seconds) and allow errors:
$ jupyter nbconvert --to notebook --execute --allow-errors --ExecutePreprocessor.timeout=180 mynotebook.ipynb
You can use the --inplace flag as well:
$ jupyter nbconvert --to notebook --execute --inplace mynotebook.ipynb
Check here for more (updated) usage of the nbconvert Jupyter command-line tool.
References:
https://pypi.python.org/pypi/runipy
http://nbconvert.readthedocs.io/en/latest/usage.html#convert-notebook
Can I run Jupyter notebook cells in commandline?
New machine/deep learning paper led by Liping: Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review https://t.co/kSF3O71tbD
Through the synthesis of multiple rapidly developing research areas, this systematic review is relevant to multiple research domains, including but not limited to GIScience, computer science, data science, information science, visual analytics, information visualization, image analysis, and computational linguistics, as well as any domain that needs to leverage machine learning and deep learning.
Check out this page for more of Liping's publications.
This post provides a piece of Python code to sort files, folders, and the combination of files and folders in a given directory. It works for Python 3.x. (It should work for Python 2.x if you change the print statements to Python 2.x syntax.)
It returns the oldest and newest file(s), folder(s), or files and folders combined in a given directory, sorted by modified time.
import os

# change this to the parent directory of the files you would like to sort
path = 'parent_directory_name'

# note: the original check (isdir and not exists) could never be true;
# a single isdir check covers both "missing" and "not a directory"
if not os.path.isdir(path):
    print("the directory does not exist")
else:
    os.chdir(path)
    # the files variable contains all files and folders under the path directory,
    # sorted by modified time (oldest to newest)
    files = sorted(os.listdir(os.getcwd()), key=os.path.getmtime)
    if len(files) == 0:
        print("there are no regular files or folders in the given directory!")
    else:
        # folder list
        directory_list = []
        # regular file list
        file_list = []
        for f in files:
            if os.path.isdir(f):
                directory_list.append(f)
            elif os.path.isfile(f):
                file_list.append(f)

        if len(directory_list) == 0:
            print("there are no folders in the given directory!")
        else:
            oldest_folder = directory_list[0]
            newest_folder = directory_list[-1]
            print("Oldest folder:", oldest_folder)
            print("Newest folder:", newest_folder)
            print("All folders sorted by modified time -- oldest to newest:", directory_list)

        if len(file_list) == 0:
            print("there are no (regular) files in the given directory!")
        else:
            oldest_file = file_list[0]
            newest_file = file_list[-1]
            print("Oldest file:", oldest_file)
            print("Newest file:", newest_file)
            print("All (regular) files sorted by modified time -- oldest to newest:", file_list)

        if len(file_list) > 0 and len(directory_list) > 0:
            oldest = files[0]
            newest = files[-1]
            print("Oldest (file/folder):", oldest)
            print("Newest (file/folder):", newest)
            print("All (file/folder) sorted by modified time -- oldest to newest:", files)
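Assuming you save the code as, say, sort_by_mtime.py (a hypothetical file name) and set path to a real directory, you can run it with:

$ python3 sort_by_mtime.py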