Install Apache Solr 6 on Ubuntu 16.04

This post provides a tutorial for setting up Apache Solr 6 on Ubuntu 16.04, installing Solr as a service that auto-starts when Ubuntu (re)boots.

What is Apache Solr? Apache Solr is an open-source, enterprise-class search platform written in Java that enables you to create custom search engines that index databases, files, and websites. It is built on top of Apache Lucene. It can, for example, be used to search across multiple websites and to show recommendations for the searched content. Solr uses an XML (Extensible Markup Language) based query and result language, and also supports JSON (JavaScript Object Notation). There are APIs (application programming interfaces) available for languages such as Python and Ruby.
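To make the query interface concrete, here is a small sketch (in Python, with a hypothetical core name and helper function of my own) of how a client might build the kind of HTTP select query Solr answers:

```python
from urllib.parse import urlencode

def solr_select_url(base_url, query, rows=10, wt="json"):
    """Build the URL for a Solr /select query.

    `base_url` is the core/collection URL, e.g.
    "http://localhost:8983/solr/mycore" (hypothetical name).
    `wt` picks the response format: "json" or "xml".
    """
    params = {"q": query, "rows": rows, "wt": wt}
    return "{}/select?{}".format(base_url.rstrip("/"), urlencode(params))

url = solr_select_url("http://localhost:8983/solr/mycore", "title:elmo")
print(url)
```

Any HTTP client can then fetch that URL and receive the results in the chosen format.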

Some other features that Solr provides are:

  • Full-Text Search.
  • Snippet generation and highlighting.
  • Custom Document ordering/ranking.
  • Spell Suggestions.

This tutorial will show you how to install the latest Solr version on Ubuntu 16.04 LTS. The steps will most likely work with later Ubuntu versions as well.

Before version 5, Solr could not run on its own; it needed a Java servlet container such as Tomcat or Jetty. Since Solr 5, it no longer needs to run inside Tomcat.

Running Solr on Tomcat (No Longer Supported)

Beginning with Solr 5.0, deploying Solr as a WAR in servlet containers like Tomcat is no longer supported.

For information on how to install Solr as a standalone server, please see Installing Solr.

To give an example:

Things you needed to do when installing a Solr version before 5:

Download and install Tomcat (or some other servlet container)
Set up Tomcat as a service
Download and unpack Solr
Create a SOLR_HOME folder with the correct content
Copy solr.war into tomcat/webapps
Set CATALINA_OPTS="-Dsolr.solr.home=/path/to/home -Dsolr.x.y=z… GC flags etc."
service tomcat start

With Solr 6.x, we just need to do:

Download Solr and unpack the install-script
solr/bin/install_solr_service solr-6.2.0.tgz  # Install
Tune /etc/default/solr.in.sh to your liking (memory, port, solr-home, ZooKeeper, etc.)
service solr start (or bin/solr start [options])

Your client talks to Solr as a standalone server, typically on port 8983, rather than to one of many webapps on port 8080.

Apache Solr 6 requires Java 8 or greater to run.

There have been many scaling improvements in Solr 6.

Now let’s get started with the installation.


Step 1: Update your System

Log in to your Ubuntu server as a non-root user with sudo privileges. You will perform all the following steps, and later use Solr, as this user.

Execute the following commands to update your system with the latest patches and updates.

$ sudo apt-get update 
$ sudo apt-get upgrade -y   # Note: this upgrades your Ubuntu OS; skip it if you do not want to upgrade your system.

Step 2: Install Java 

(Apache Solr 6 requires Java 8 or greater to run. If you already have Java 8 or greater installed on your machine, skip this step.)

Solr is a Java application, so Java needs to be installed first in order to set up Solr. See my post for detailed Java 8 installation on Ubuntu 16.04.

Check the version of Java installed by running the following command:

$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
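As a quick sanity check, the version string printed above can also be parsed programmatically. Here is a small sketch in Python (the parsing logic is mine, not part of Solr), which handles both the legacy "1.8.x" scheme and the newer "9.x" scheme:

```python
import re

def java_major_version(version_output):
    """Extract the Java major version from `java -version` output.

    For legacy strings like "1.8.0_131" the major version is the second
    number (8); for newer strings like "9.0.4" it is the first number.
    """
    m = re.search(r'version "(\d+)\.(\d+)', version_output)
    if m is None:
        raise ValueError("could not parse Java version")
    first, second = int(m.group(1)), int(m.group(2))
    return second if first == 1 else first

# The output shown above parses as Java 8, satisfying Solr 6's requirement.
print(java_major_version('java version "1.8.0_131"') >= 8)
```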

Step 3: (Manually) install Solr 

Solr can be installed on Ubuntu in different ways; in this tutorial, we will install the latest binary package. (If you would like to build the latest version from source instead, check out How to install and configure Solr 6 on Ubuntu 16.04.)

Now let's download the required Solr version from the official site or one of its mirrors.

First, go to the Solr download page and click the link to the latest version.

You will see a list of download mirrors similar to the picture below. Click the download link you prefer, and the mirror page will show the archive files available for download.

# If you do not have sudo privileges, cd to a folder under your
# home directory instead, and omit "sudo" from the following commands.
cd /opt
sudo wget <paste the download link you copied above>

Now extract the Solr service installer shell script from the downloaded archive and run the installer using the following commands.

sudo tar xzf solr-6.5.1.tgz solr-6.5.1/bin/install_solr_service.sh --strip-components=2
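The --strip-components=2 flag tells tar to drop the first two components of each archived path, so the extracted script lands directly in the current directory. A tiny Python sketch of that path arithmetic (the helper is mine, for illustration only):

```python
def strip_components(path, n):
    """Mimic tar's --strip-components: drop the first n path components."""
    return "/".join(path.split("/")[n:])

# The installer script sits two directories deep inside the archive;
# stripping two components leaves just the file name.
print(strip_components("solr-6.5.1/bin/install_solr_service.sh", 2))
```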

Then install Solr as a service using the script:

sudo ./install_solr_service.sh solr-6.5.1.tgz

The output will be similar to this. (Note that this installation sets up Solr as a service that auto-starts when you (re)boot Ubuntu.)

myusername@myserver:/opt$ sudo ./install_solr_service.sh solr-6.5.1.tgz
id: 'solr': no such user
Creating new user: solr
Adding system user `solr' (UID 117) ...
Adding new group `solr' (GID 126) ...
Adding new user `solr' (UID 117) with group `solr' ...
Creating home directory `/var/solr' ...

Extracting solr-6.5.1.tgz to /opt

Installing symlink /opt/solr -> /opt/solr-6.5.1 …

Installing /etc/init.d/solr script …

Installing /etc/default/solr.in.sh ...

Service solr installed.
Customize Solr startup configuration in /etc/default/solr.in.sh
● solr.service – LSB: Controls Apache Solr as a Service
Loaded: loaded (/etc/init.d/solr; bad; vendor preset: enabled)
Active: active (exited) since Sun 2017-04-30 11:08:43 EDT; 5s ago
Docs: man:systemd-sysv-generator(8)
Process: 2652 ExecStart=/etc/init.d/solr start (code=exited, status=0/SUCCESS)

Apr 30 11:08:34 myserver systemd[1]: Starting LSB: Controls Apache Solr as a Service…
Apr 30 11:08:34 myserver su[2655]: Successful su for solr by root
Apr 30 11:08:34 myserver su[2655]: + ??? root:solr
Apr 30 11:08:34 myserver su[2655]: pam_unix(su:session): session opened for user solr by (uid=0)
Apr 30 11:08:42 myserver solr[2652]: [194B blob data]
Apr 30 11:08:42 myserver solr[2652]: Started Solr server on port 8983 (pid=2861). Happy searching!
Apr 30 11:08:43 myserver solr[2652]: [14B blob data]
Apr 30 11:08:43 myserver systemd[1]: Started LSB: Controls Apache Solr as a Service.

Step 4:  Start / Stop Solr Service

Use the following command to check the status of the service

$ sudo service solr status

See below for a sample output:

myusername@myserver:/opt$ sudo service solr status
● solr.service - LSB: Controls Apache Solr as a Service
   Loaded: loaded (/etc/init.d/solr; bad; vendor preset: enabled)
   Active: active (exited) since Sun 2017-04-30 11:08:43 EDT; 13min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 2652 ExecStart=/etc/init.d/solr start (code=exited, status=0/SUCCESS)

Apr 30 11:08:34 myserver systemd[1]: Starting LSB: Controls Apache Solr as a Service...
Apr 30 11:08:34 myserver su[2655]: Successful su for solr by root
Apr 30 11:08:34 myserver su[2655]: + ??? root:solr
Apr 30 11:08:34 myserver su[2655]: pam_unix(su:session): session opened for user solr by (uid=0)
Apr 30 11:08:42 myserver solr[2652]: [194B blob data]
Apr 30 11:08:42 myserver solr[2652]: Started Solr server on port 8983 (pid=2861). Happy searching!
Apr 30 11:08:43 myserver solr[2652]: [14B blob data]
Apr 30 11:08:43 myserver systemd[1]: Started LSB: Controls Apache Solr as a Service.


Use the following commands to start, stop, and check the status of the Solr service.

$ sudo service solr stop
$ sudo service solr start
$ sudo service solr status


Step 5: Create a Solr search collection

(Before we create a Solr search collection, check out this post first if you want to change the default port 8983 to another port.)

Solr lets us create multiple collections. Run the following command, giving your collection a name (here mysolrcollection) and specifying its configuration set:

$ sudo su - solr -c "/opt/solr/bin/solr create -c mysolrcollection -n data_driven_schema_configs"

Sample output:

myusername@myserver:/opt$ sudo su - solr -c "/opt/solr/bin/solr create -c mysolrcollection -n data_driven_schema_configs"
 [sudo] password for myusername:

Copying configuration to new core instance directory:

Creating new core 'mysolrcollection' using command:


The new core directory for our first collection has been created. To view the default schema file, go to:

cd /opt/solr/server/solr/configsets/data_driven_schema_configs/conf

You will see some files shown in the picture below.

To view other configuration options, go to:

cd /opt/solr/server/solr/configsets/


Step 6: Use the Solr Web Interface (i.e., Access Solr Admin Panel)

By default, Solr runs on port 8983. Apache Solr is now accessible on this port, and the admin UI should be reachable at http://your_server_ip:8983/solr in your web browser. Make sure the port is allowed through your firewall.
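If the dashboard does not load, it can help to check whether the port is reachable at all. A minimal sketch (with a hypothetical host name) using Python's socket module:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace with your server's IP or host name):
# print(port_open("your_server_ip", 8983))
```

If this returns False, check that Solr is running and that your firewall allows the port.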

(If you do not know your IP, check my post to find it out.)

For example, use your server's IP address in the URL, or your machine's host name if you have one.


To see the statistics and details of the collection we created earlier, click "Core Selector" in the left sidebar and select "mysolrcollection".

After selecting the "mysolrcollection" collection, click Documents in the left menu. There you can enter real data in JSON format that will be searchable by Solr. To add data, copy and paste the following example JSON into the Document field:

{
  "id": 1,
  "cars": [ "Ford", "BMW", "Fiat" ]
}
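Before pasting, it can be worth checking that the document is well-formed JSON. A quick sketch in Python:

```python
import json

# The example document as a Python dict.
doc = {"id": 1, "cars": ["Ford", "BMW", "Fiat"]}

# Serialize it to the JSON text to paste into the Document field.
payload = json.dumps(doc, indent=2)

# Round-trip to confirm the text parses back to the same document.
assert json.loads(payload) == doc
print(payload)
```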

Note: You can also add data in other formats, such as CSV, to Solr. (See the pic below.)

Click the Submit Document button after adding the data.

Status: success

{
  "responseHeader": {
    "status": 0,
    "QTime": 758
  }
}
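Every Solr response carries a responseHeader like the one above. A small sketch of checking it in Python, using the values shown:

```python
import json

response_text = '{"responseHeader": {"status": 0, "QTime": 758}}'
header = json.loads(response_text)["responseHeader"]

# A status of 0 indicates the update succeeded; QTime is in milliseconds.
print("success" if header["status"] == 0 else "failed")
print("query time (ms):", header["QTime"])
```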

Now click Query on the left side, then click Execute Query.

We will see something like this:


After successfully installing Solr and its web interface on Ubuntu, you can now insert data or query it with the Solr API and the web interface.

You can write code to add a large set of documents into Solr. See my post for using Solr with Python. See this post for some useful Solr resources I collected.



Install Tomcat & Solr (You can't avoid this one) – This applies to Solr before version 5; since Solr 5, Tomcat is not required to install Solr.

Apache Solr Reference Guide / Installing Solr & Running Solr & Solr Quick Start (PDF; a very good, concise intro, including some basic usage and indexing of XML, JSON, and CSV files)

Configuring a schema.xml for Solr

First, rename /opt/solr/solr/collection1 to an understandable name like apples (use whatever name you'd like). (This step can be skipped if you installed Solr using apt-get; in that case, execute cd /usr/share/solr instead.)

cd /opt/solr/solr
mv collection1 apples
cd apples

Also, if you installed Solr manually, open the core's configuration file with nano and change the collection name there to the same name.

Then, remove the data directory and change the schema.xml:

rm -R data
nano conf/schema.xml

Paste your own schema.xml in here.




Choose proper GeForce GPU(s) according to your machine

This post introduces how to choose proper NVIDIA GeForce GPU(s) according to your desktop or workstation.

We gratefully acknowledge the support of NVIDIA Corporation with the donation of (1) Titan X Pascal GPU used for our machine learning and deep learning based research.

It is very important to choose GPUs according to your desktop or workstation (in particular, the power specs of the machine that will house the GPU(s)), the overall computational efficiency, including the GPU engine specs (especially the number of NVIDIA CUDA cores) and the memory specs (e.g., memory speed, standard memory config, and memory bandwidth in GB/sec), as well as the financial cost.

  • Full Specifications (The compute capability of the four GPU graphics cards listed below is 6.1 for all of them.)



GeForce GTX 1080 Ti


  • Price



GeForce GTX 1080 Ti

GeForce GTX 1080

When you choose GeForce GPU(s) for your machine, be sure to consider both the power specs of your machine and also the GPU Engine Specs (esp. how many NVIDIA CUDA Cores) and Memory Specs (e.g., Memory Speed, Standard Memory Config, Memory Bandwidth (GB/sec)).

For example, if your machine has one 8-pin and two 6-pin PCIe power cables and your budget is around $1,200, I would recommend going for two GeForce GTX 1080 cards. In this case, purchasing two GeForce GTX 1080 cards will cost you a little less and, more importantly, will give you much more computation power compared with a single NVIDIA TITAN Xp.

(Note that two 6-pin PCIe power cables can be used as one 8-pin PCIe power cable.)
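To put rough numbers on that recommendation (the prices and CUDA core counts below are approximate launch figures I am assuming, not from the original post):

```python
# Rough comparison, assuming approximate launch prices and core counts:
# GeForce GTX 1080: ~$599, 2560 CUDA cores; TITAN Xp: ~$1,200, 3840 CUDA cores.
gtx_1080 = {"price": 599, "cuda_cores": 2560}
titan_xp = {"price": 1200, "cuda_cores": 3840}

two_1080_cores = 2 * gtx_1080["cuda_cores"]
two_1080_price = 2 * gtx_1080["price"]

print("two GTX 1080:", two_1080_cores, "cores for $", two_1080_price)
print("one TITAN Xp:", titan_xp["cuda_cores"], "cores for $", titan_xp["price"])
print("more cores for less money:",
      two_1080_cores > titan_xp["cuda_cores"] and two_1080_price < titan_xp["price"])
```

Under these assumptions, the two-card option gives noticeably more raw CUDA cores at a slightly lower total price.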

If your machine has one 8-pin and one 6-pin PCIe power cable and you have a $700 budget, go for a GeForce GTX 1080 Ti.

If you have two 6-pin cables, or one 8-pin, or one 8-pin and one 6-pin, and your budget is around $600, the best choice would be a single GeForce GTX 1080.

In this post I only compared GPU cards at or above the GeForce GTX 1080. For more (combination) options, check the table below to find the configuration best suited to your machine and budget.

(Thanks to Scott and Bob for their help with this.)


GPU Compute Capability
GeForce GTX 1080 Ti 6.1
GeForce GTX 1080 6.1
GeForce GTX 1070 6.1
GeForce GTX 1060 6.1
GeForce GTX 1050 6.1
GeForce GTX TITAN X 5.2
GeForce GTX TITAN Z 3.5
GeForce GTX TITAN Black 3.5
GeForce GTX TITAN 3.5
GeForce GTX 980 Ti 5.2
GeForce GTX 980 5.2
GeForce GTX 970 5.2
GeForce GTX 960 5.2
GeForce GTX 950 5.2
GeForce GTX 780 Ti 3.5
GeForce GTX 780 3.5
GeForce GTX 770 3.0
GeForce GTX 760 3.0
GeForce GTX 750 Ti 5.0
GeForce GTX 750 5.0
GeForce GTX 690 3.0
GeForce GTX 680 3.0
GeForce GTX 670 3.0
GeForce GTX 660 Ti 3.0
GeForce GTX 660 3.0
GeForce GTX 650 Ti BOOST 3.0
GeForce GTX 650 Ti 3.0
GeForce GTX 650 3.0
GeForce GTX 560 Ti 2.1
GeForce GTX 550 Ti 2.1
GeForce GTX 460 2.1
GeForce GTS 450 2.1
GeForce GTS 450* 2.1
GeForce GTX 590 2.0
GeForce GTX 580 2.0
GeForce GTX 570 2.0
GeForce GTX 480 2.0
GeForce GTX 470 2.0
GeForce GTX 465 2.0
GeForce GT 740 3.0
GeForce GT 730 3.5
GeForce GT 730 DDR3,128bit 2.1
GeForce GT 720 3.5
GeForce GT 705* 3.5
GeForce GT 640 (GDDR5) 3.5
GeForce GT 640 (GDDR3) 2.1
GeForce GT 630 2.1
GeForce GT 620 2.1
GeForce GT 610 2.1
GeForce GT 520 2.1
GeForce GT 440 2.1
GeForce GT 440* 2.1
GeForce GT 430 2.1
GeForce GT 430* 2.1

OpenCV installation in virtualenv

This post introduces how to install OpenCV in a virtualenv on Ubuntu 16.04.

Make sure Python and virtualenv are installed on your Ubuntu machine first.

Check my post for more details on how to set up a Python virtual environment and why it is better to install Python libraries inside one.

  • Install OpenCV inside a virtualenv

Activate your virtualenv first. Then, if you only need the main modules, run the following command:

$ pip3 install opencv-python

If you need both the main and contrib modules (check the extra modules listing in the OpenCV documentation), run the following command:

$ pip3 install opencv-contrib-python
  • If you want to install OpenCV on your machine system-wide, use the following commands instead:

$ sudo apt-get install libopencv-dev python-opencv

# If you only need the main modules, run
$ sudo pip3 install opencv-python

# If you need both the main and contrib modules (check the extra modules listing in the OpenCV documentation), run
$ sudo pip3 install opencv-contrib-python


Opencv-python project description (PDF)

NVIDIA TITAN X Pascal vs GTX 1080

This post introduces NVIDIA TITAN X Pascal, GTX 1080, and the comparisons between them.

In order to use TensorFlow with GPU support you must have a NVIDIA graphic card with a minimum compute capability of 3.0.

A single NVIDIA TITAN X Pascal is clearly much more powerful than a single GTX 1080 graphics card if cost is not considered. But two GTX 1080 cards will outperform a single NVIDIA TITAN X Pascal, and from a cost perspective two GTX 1080s will also save you some money compared to purchasing a single TITAN X. See my post, choose proper GeForce GPU(s) according to your machine, for detailed explanations.


  • NVIDIA Titan X – The fastest accelerator for deep neural network training on a desktop PC based on the revolutionary NVIDIA Pascal architecture

Write and run a bash file

This post introduces how to write and run a bash file from the terminal on Ubuntu. (If you prefer video-style tutorials, check here for a video-based post.)

Bash (the Bourne Again Shell) is the most common shell installed with Linux distributions and macOS.

  • Write a bash file

The most common approach is to write a file whose first line is the shebang:

#!/bin/bash

Then save the file. Next, mark it executable using chmod +x file.

Then, when you click the file (or run it from the terminal), the commands in it will be executed. By convention these files usually have no extension, but you can make them end in .sh or name them any other way.

For example,

#!/bin/bash
echo Hello World

A Simple Bash Example

echo "This is a shell script"  
ls -lah  
echo "I am done running ls"  
SOMEVAR='text stuff'  
echo "$SOMEVAR"  
  • Run a bash file

Go to the folder where your bash file is located and type:

./your_script.sh

The ./ just means that you should call the script located in the current directory. (Alternatively, just type the full path of the script.) If it doesn't work, check whether the file has execute permissions.

You can add execute permission by the following command:

$ chmod +x your_script.sh


Why Bother?

Why do you need to learn the command line anyway? Well, let me tell you a story. A few years ago we had a problem where I used to work. There was a shared drive on one of our file servers that kept getting full. I won’t mention that this legacy operating system did not support user quotas; that’s another story. But the server kept getting full and it stopped people from working. One of our software engineers spent the better part of a day writing a C++ program that would look through all the user’s directories and add up the space they were using and make a listing of the results. Since I was forced to use the legacy OS while I was on the job, I installed a Linux-like command line environment for it. When I heard about the problem, I realized I could do all the work this engineer had done with this single line:

du -s * | sort -nr > $HOME/user_space_report.txt
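For comparison, here is a rough Python sketch of what that one-liner computes (it sums file sizes rather than du's disk-block counts, so the numbers will differ slightly):

```python
import os

def dir_usage(root):
    """Sum file sizes under each top-level entry of `root` and return
    (name, bytes) pairs sorted largest-first, roughly what
    `du -s * | sort -nr` reports."""
    usage = {}
    for entry in os.scandir(root):
        if entry.is_file(follow_symlinks=False):
            usage[entry.name] = entry.stat().st_size
        elif entry.is_dir(follow_symlinks=False):
            total = 0
            for dirpath, _, filenames in os.walk(entry.path):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    if not os.path.islink(path):
                        total += os.path.getsize(path)
            usage[entry.name] = total
    return sorted(usage.items(), key=lambda item: item[1], reverse=True)
```

The point of the story stands either way: the shell pipeline does in one line what the Python version needs twenty for, and the C++ version needed a day.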

Graphical user interfaces (GUIs) are helpful for many tasks, but they are not good for all tasks. I have long felt that most computers today are not powered by electricity. They instead seem to be powered by the “pumping” motion of the mouse! Computers were supposed to free us from manual labor, but how many times have you performed some task you felt sure the computer should be able to do but you ended up doing the work yourself by tediously working the mouse? Pointing and clicking, pointing and clicking.

I once heard an author say that when you are a child you use a computer by looking at the pictures. When you grow up, you learn to read and write. Welcome to Computer Literacy 101. Now let’s get to work.


  1. What Is “The Shell”?
  2. Navigation
  3. Looking Around
  4. A Guided Tour
  5. Manipulating Files
  6. Working With Commands
  7. I/O Redirection
  8. Expansion
  9. Permissions
  10. Job Control

Here Is Where The Fun Begins

With the thousands of commands available for the command line user, how can you remember them all? The answer is, you don’t. The real power of the computer is its ability to do the work for you. To get it to do that, we use the power of the shell to automate things. We write shell scripts.

What Are Shell Scripts?

In the simplest terms, a shell script is a file containing a series of commands. The shell reads this file and carries out the commands as though they have been entered directly on the command line.

The shell is somewhat unique, in that it is both a powerful command line interface to the system and a scripting language interpreter. As we will see, most of the things that can be done on the command line can be done in scripts, and most of the things that can be done in scripts can be done on the command line.

We have covered many shell features, but we have focused on those features most often used directly on the command line. The shell also provides a set of features usually (but not always) used when writing programs.

Scripts unlock the power of your Linux machine. So let’s have some fun!


  1. Writing Your First Script And Getting It To Work
  2. Editing The Scripts You Already Have
  3. Here Scripts
  4. Variables
  5. Command Substitution And Constants
  6. Shell Functions
  7. Some Real Work
  8. Flow Control – Part 1
  9. Stay Out Of Trouble
  10. Keyboard Input And Arithmetic
  11. Flow Control – Part 2
  12. Positional Parameters
  13. Flow Control – Part 3
  14. Errors And Signals And Traps (Oh My!) – Part 1
  15. Errors And Signals And Traps (Oh My!) – Part 2

Ubuntu – Shell script to execute/run (pdf)

wikiHow to Write a Shell Script Using Bash Shell in Ubuntu

How to create & execute a script file [closed]

Advanced Bash-Scripting Guide (An in-depth exploration of the art of shell scripting) by Mendel Cooper


Watch Now: Deep Learning Demystified – by NVIDIA

Watch Now: Deep Learning Demystified (YouTube, uploaded Mar 30, 2017)

Artificial Intelligence (AI) is solving problems that seemed well beyond our reach just a few years back. Using deep learning, the fastest growing segment of AI, computers are now able to learn and recognize patterns from data that were considered too complex for expert written software. Today, deep learning is transforming every industry, including automotive, healthcare, retail and financial services.
This introduction to deep learning will explore key fundamentals and opportunities, as well as current challenges and how to address them.
Highlights include:
  1. Demystifying Artificial Intelligence, Machine Learning and Deep Learning
  2. Key challenges organizations face in adopting this new approach
  3. How GPU deep learning and software, along with training resources, can deliver breakthrough results


Director, Developer Marketing, NVIDIA
Will Ramey is NVIDIA’s director of developer marketing. Prior to joining NVIDIA in 2003, he managed an independent game studio and developed advanced technology for the entertainment industry as a product manager and software engineer. He holds a BA in computer science from Willamette University and completed the Japan Studies Program at Tokyo International University. Outside of work, Will learns something new every day, usually from his two kids. He enjoys hiking, camping, open water swimming, and playing The Game.
======Below are some main screenshots from the video:






Install Oracle Java 8 with PPA on Ubuntu 16.04

This post provides the instructions to install Oracle JDK 8 on Ubuntu 16.04. (Note: do not install JDK 9 yet; JDK 8 is the latest stable version.)

(If you are not sure which JDK — OpenJDK or Oracle JDK — to install, check this post for the main difference between them.)

The PPA of Oracle Java for Ubuntu is maintained by the WebUpd8 team. Java 8 was released with many new features and security updates; read more about what's new in Oracle Java 8.

  • Add Oracle’s PPA, then update your package repository.

We need to add the webupd8team Java PPA repository to our system, and then install Oracle Java 8, by issuing the following commands:

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer

Note that when issuing the command sudo add-apt-repository ppa:webupd8team/java, if you get the error sudo: add-apt-repository: command not found, run:

sudo apt-get install software-properties-common

and then rerun the add-apt-repository command.

Note that it is possible to install multiple Java installations on one machine, and set one of installed versions as the default. Check out How To Install Java with Apt-Get on Ubuntu 16.04 (April 23, 2016)  (pdf), in particular the “Managing Java” section.

  • Verify Installed Java Version

After successfully installing Oracle Java, use the following command to verify the installed version:

$ java -version 

java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
  • Configure the Java Environment and Set the JAVA_HOME Environment Variable

We also need to install the Java configuration package. It usually comes along when the Java packages are installed, but it does no harm to run the following command to make sure it is present on our machine:

$ sudo apt-get install oracle-java8-set-default

Many programs use the JAVA_HOME environment variable to determine the Java installation location.

Copy the path of your preferred installation, and then open the /etc/environment configuration file using nano or your favorite text editor to set the JAVA_HOME environment variable.

sudo nano /etc/environment

At the end of this file, add the following line, making sure to replace the path with your own copied path (shown here with the default path used by this PPA installation):

JAVA_HOME="/usr/lib/jvm/java-8-oracle"
Save and close the file and exit the nano editor. (Note: Ctrl+O to save the file, then Enter, then Ctrl+X to close and exit.)

Use the following command to reload the file:

$ source /etc/environment

You can now test whether the environment variable has been set by issuing the following command:

$ echo $JAVA_HOME

This will return the path you just set.
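Programs read the variable the same way. For instance, a short Python sketch:

```python
import os

# Look up JAVA_HOME the way a Java-dependent program would.
java_home = os.environ.get("JAVA_HOME")
if java_home is None:
    print("JAVA_HOME is not set")
else:
    print("Java installation located at:", java_home)
```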

  • Conclusion

We have now installed Java 8 on our system and set it as default. We can now install software which runs on Java, such as Tomcat and Solr.



How To Install Java with Apt-Get on Ubuntu 16.04 (April 23, 2016)  (pdf)

This is a very good post; it covers the installation of both OpenJDK and Oracle JDK 6/7/8/9.

How to Install Oracle JAVA 8 (JDK/JRE 8u121) on Ubuntu & LinuxMint with PPA (Mar 29, 2017 by Rahul K.)  – pdf


Using Apache Solr with Python

This post provides the instructions to use Apache Solr with Python in different ways.

======using Pysolr

Below are two small Python snippets that the author of that post used for testing writing to and reading from a new Solr server.

The script below will attempt to add a document to the Solr server.

# Using Python 2.X
from __future__ import print_function
import pysolr

# Setup a basic Solr instance. The timeout is optional.
# (Replace the URL with your own Solr core/collection URL.)
solr = pysolr.Solr('http://localhost:8983/solr/mysolrcollection', timeout=10)

# How you would index data.
solr.add([
    {
        "id": "doc_1",
        "title": "A very small test document about elmo",
    },
])

The snippet below will attempt to search for the document that was just added from the snippet above.

# Using Python 2.X
from __future__ import print_function
import pysolr

# Setup a basic Solr instance. The timeout is optional.
# (Replace the URL with your own Solr core/collection URL.)
solr = pysolr.Solr('http://localhost:8983/solr/mysolrcollection', timeout=10)

results = solr.search('elmo')

print("Saw {0} result(s).".format(len(results)))


======GitHub repos

pysolr is a lightweight Python wrapper for Apache Solr. It provides an interface that queries the server and returns results based on the query.

Install pysolr using pip:

pip install pysolr

Multicore Index

Simply point the URL to the index core:

# Setup a Solr instance. The timeout is optional.
solr = pysolr.Solr('http://localhost:8983/solr/core_0/', timeout=10)

SolrClient is a simple Python library for Solr, built for Python 3 with support for the latest features of Solr.

Components of SolrClient




Apache Solr resources

Elasticsearch and Apache Solr are open-source search engines and the most widely used search servers. This post provides resources about Apache Solr.

Apache Solr is a fast open-source Java search server.

Solr enables you to easily create search engines that search websites, databases, and files.

Solr (pronounced “solar”) is an open source enterprise search platform, written in Java, from the Apache Lucene project. Its major features include full-text search, hit highlighting, faceted search, real-time indexing, dynamic clustering, database integration, NoSQL features and rich document (e.g., Word, PDF) handling. Providing distributed search and index replication, Solr is designed for scalability and fault tolerance. Solr is the second-most popular enterprise search engine after Elasticsearch.

Solr runs as a standalone full-text search server. It uses the Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it usable from most popular programming languages. Solr’s external configuration allows it to be tailored to many types of application without Java coding, and it has a plugin architecture to support more advanced customization.

An Elasticsearch / Apache Solr index is the equivalent of a SQL table.

An Elasticsearch or Solr server (aka Solr instance, aka Solr engine) can maintain several indexes.

(Elasticsearch index configuration is done with HTTP / JSON commands. No files required. You define types, mappings, analysis with simple commands.)

In Apache Solr, each index is defined by a schema.xml file (it’s not mandatory in Solr 5/6, but recommended in production), and a solrconfig.xml file. The index schema is equivalent to a SQL table schema definition.  (See this post for Solr Schema related resources.)

An index contains several documents, equivalent to SQL table rows. Each document contains fields, equivalent to SQL table columns.

When an index document is inserted/updated/deleted, we say it is “indexed”.

To retrieve documents from an index, Elasticsearch (json) / Apache Solr (xml, json) provide an http API, with a proprietary syntax.

Elasticsearch and Apache Solr are web applications. A client will use their http API to query or store data.

A full-text search engine is built from the ground up to tackle problems that SQL search finds difficult or impossible. The list of such features is huge: multi-language support, dedicated plugins to extend the engine, synonyms, stop words, facets, boosts, and more.

The core search engine of both Elasticsearch and Apache Solr is Apache Lucene. The relationship between Elasticsearch / Apache Solr and Lucene is like that between a car and its engine.

You can access the Solr admin UI from your browser at http://localhost:8983/solr/ (use the port number chosen during installation).

See below for some useful Solr related resources:

Check out the Unofficial Solr Guide (e.g., Solr 6.5 Features).


Integrating Solr

Quadro vs GeForce GPUs for training neural networks

If you’re choosing between Quadro and GeForce, definitely pick GeForce. If you’re choosing between Tesla and GeForce, pick GeForce, unless you have a lot of money and could really use the extra RAM.

Quadro GPUs aren't aimed at scientific computation; Tesla GPUs are. Quadro cards are designed for accelerating CAD, so they aren't optimized for training neural nets. They can probably be used for that purpose just fine, but it's a waste of money.

Tesla cards are for scientific computation, but they tend to be pretty expensive. The good news is that many of the features offered by Tesla cards over GeForce cards are not necessary to train neural networks.

For example, Tesla cards usually have ECC memory, which is nice to have but not a requirement. They also have much better support for double precision computations, but single precision is plenty for neural network training, and they perform about the same as GeForce cards for that.

One useful feature of Tesla cards is that they tend to have a lot more RAM than comparable GeForce cards. More RAM is always welcome if you're planning to train bigger models (or use RAM-intensive computations like FFT-based convolutions).

See here for the list of CUDA GPUs on the NVIDIA website.