Author: dustinvannoy

I am a Data Engineer, Architect, and Manager. I believe the primary goal of Analytics and Data Engineering is to make people's lives easier by giving them insight into the world around them.

ETL Tool vs Custom Code

I used to help sell an ETL tool that had a graphical drag-and-drop interface. I really did like the tool, because with a little training you could quickly build a basic ETL job. I still like these types of tools when pulling data from a database that has a static or slow-changing data model. However, at my current company we do not use an ETL tool because I suggested we are better off without one. While it is possible we will use an ETL tool one day for certain tasks, we currently prefer Python and SQL to move and process our data. The primary reasons we went down this path are increased flexibility, portability, and maintainability.

One of my top regrets from leading a Data Warehousing team that used an ETL tool is that we felt limited by what the tool was capable of doing. Elements of ETL that were not as important when the team started were not easily supported by the tool. The best example of this is reading from a RESTful API. Another was working with JSON data as a source. For these examples we could easily find a tool that handles them, but what else will we encounter in the future? At my current company we are consuming RabbitMQ messages and using Kafka for data streaming, and we would not have known to plan for a tool that works well for these use cases. Since we are using Python (and Spark and Scala), we have no limits on what is possible for us to build. There are a lot of existing libraries we can leverage, and we can modify our own libraries as new ideas come up rather than being stuck with what a tool provides out of the box. In many cases we choose to build a data flow engine rather than one script per table or source. This amount of control over the code that moves data allows us to build up an engine that supports many configurations while keeping the base code backward compatible for data sets already flowing through the system. We often trade a longer ramp-up period to get the first build working for more flexibility and control down the road, but it cuts down on frustrating rework when systems change.
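
To make the idea of a configuration-driven engine a bit more concrete, here is a minimal sketch in Python (purely hypothetical and much simpler than anything we actually run; every function and config name here is made up): each data set is described by a small config dict, and the engine dispatches to registered extract and load functions, so new options can be added without breaking configs that omit them.

  # A minimal, hypothetical sketch of a config-driven flow (not our actual engine).
  # Each data set is described by a small config dict instead of its own script,
  # and defaults keep older configs working as new options are introduced.

  def extract_static(config):
      # Placeholder extractor; a real engine would register database, API, and file readers.
      return config.get("rows", [])

  def load_print(rows, config):
      # Placeholder loader that just prints rows; real loaders would write to a database.
      for row in rows:
          print(row)

  EXTRACTORS = {"static": extract_static}
  LOADERS = {"console": load_print}

  def run_flow(config):
      extract = EXTRACTORS[config.get("source_type", "static")]
      load = LOADERS[config.get("destination_type", "console")]
      load(extract(config), config)

  if __name__ == "__main__":
      run_flow({"source_type": "static",
                "rows": [{"id": 1, "value": "a"}, {"id": 2, "value": "b"}],
                "destination_type": "console"})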

Another benefit of coding your own ETL is that you can change databases, servers, and data formats without applying changes all over your code base. We have already taken our library for reading SQL Server data and written a similar version that works for Postgres. With how our ETL jobs are set up, we just switched out the library import on the relevant scripts and didn't have to dig into the logic that was running. I think this leads to better maintainability as well, since if you find something is taking up a lot of your time you can build a fix into the overall system. I remember getting alerts at 2 am because the metadata of a table had changed and our ETL tool couldn't load the data without us refreshing the metadata in the job. With most of our Python code we can handle new columns added to the source data and either add that column to the destination table or just ignore it until we decide to modify the destination. This has really decreased the time spent getting mad at the system administrators who disrupted our morning by adding a new custom field, though there is still plenty of work to do to ease the pain of changed data types and renamed columns.
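
As an illustration of that import-swap approach (a hypothetical sketch; the module names sql_server_reader and postgres_reader are made up, and our internal libraries do more than this), each reader module exposes the same function signature, so an ETL script only has to change one import line when the source database changes.

  # postgres_reader.py (hypothetical module; a matching sql_server_reader.py
  # would expose the identical read_table function built on pyodbc instead)
  import psycopg2

  def read_table(connection_string, table):
      # Return every row from the table as a list of dicts keyed by column name.
      conn = psycopg2.connect(connection_string)
      cursor = conn.cursor()
      cursor.execute("SELECT * FROM {0}".format(table))
      columns = [col[0] for col in cursor.description]
      rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
      conn.close()
      return rows

  # In an ETL script, switching databases is then just changing
  #   from sql_server_reader import read_table
  # to
  #   from postgres_reader import read_table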

I am sure there are plenty of different tools out there that do everything you could want to do (at least according to their sales team), but I love the flexibility, control, and maintainability of writing our own applications to move data. It has worked out well for us as we have transitioned to building out a data platform rather than focusing on just tools to load an analytic data warehouse (but that is a topic for another time).

How To: Kafka 0.9 on Mac

Kafka is a distributed messaging system used for streaming data. It works as a distributed commit log, and if you want to really understand why you should use Kafka then it's worth the time to read this article by Jay Kreps. Now if you just want to get hands-on with Kafka on your laptop, follow these steps from the quick start guide (which should also work on Linux for a sandbox environment). I didn't hit errors along the way, so this is pretty similar to what is in the documentation, but I thought it was worth sharing as a reference to the actual commands I used and a place for me to point back to when I post more articles about working with Kafka.

  1. Go to http://kafka.apache.org/downloads.html and download the version you want.  I chose kafka_2.11-0.9.0.0.tgz.
  2. Follow instructions here for initial setup: http://kafka.apache.org/documentation.html#quickstart
    1. unzip: tar -xzf kafka_2.11-0.9.0.0.tgz
    2. go to folder: cd kafka_2.11-0.9.0.0
    3. start zookeeper: bin/zookeeper-server-start.sh config/zookeeper.properties
    4. open new terminal window and go to folder
    5. start kafka: bin/kafka-server-start.sh config/server.properties
    6. test creating topic: bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
    7. test listing a topic: bin/kafka-topics.sh --list --zookeeper localhost:2181
  3. Follow the additional steps to run multiple brokers, since you would never use a single-broker setup for a real environment (though a production cluster would involve some different steps and a server per broker, of course)
    1. copy config: cp config/server.properties config/server-1.properties
    2. edit config/server-1.properties:
      broker.id=1
      listeners=PLAINTEXT://:9093
      log.dir=/tmp/kafka-logs-1
    3. copy config again: cp config/server.properties config/server-2.properties
    4. edit config/server-2.properties:
      broker.id=2
      listeners=PLAINTEXT://:9094
      log.dir=/tmp/kafka-logs-2
    5. keep ZooKeeper running but stop Kafka (Ctrl+C in the terminal it is running under)
    6. run all 3 brokers as background processes:
      bin/kafka-server-start.sh config/server.properties &
      bin/kafka-server-start.sh config/server-1.properties &
      bin/kafka-server-start.sh config/server-2.properties &
    7. test creating topic with replication factor of 3: bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
    8. might as well publish a message to the test topic: bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
      {"value": "Test message 1"}
    9. then test out the consumer: bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

The quick start guide and additional documentation have a lot more info that is worth exploring, but if things went well you now have a local instance to test with.  Congrats!
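
If you would rather produce and consume from Python than the console scripts, here is a minimal sketch using the kafka-python package (my addition, not part of the quick start; install it with pip install kafka-python). It uses the test topic created above:

  from kafka import KafkaProducer, KafkaConsumer

  # Publish one message to the test topic created earlier.
  producer = KafkaProducer(bootstrap_servers="localhost:9092")
  producer.send("test", b'{"value": "Test message from Python"}')
  producer.flush()

  # Read the topic from the beginning; stop after 5 seconds with no new messages.
  consumer = KafkaConsumer("test",
                           bootstrap_servers="localhost:9092",
                           auto_offset_reset="earliest",
                           consumer_timeout_ms=5000)
  for message in consumer:
      print(message.value)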

How To: VirtualBox Shared Folders

When using virtual machines, you will likely want to set up a mapping of a local folder on your computer to a virtual machine folder. This is a good way to move files from your machine onto the VM and vice versa.  Here are the steps to set that up with VirtualBox using a CentOS image (in this case the Cloudera Sandbox VM).

  1. From the VM, go to the VirtualBox menu and choose Devices -> Insert Guest Additions CD Image…
  2. If the CD image does not start automatically, select the drive from the file browser and run “autorun.sh”.  This will install the Guest Additions needed.
  3. Then go to Devices -> Shared Folders -> Shared Folders Settings and set up your folder.  For this example we’ll use a local folder called “installs”.
  4. Restart the virtual machine
  5. You can now find your folder under /media/sf_<foldername> and you’ll probably need elevated permissions.  So for my example the command “sudo ls -l /media/sf_installs” can be used to view files and “sudo cp /media/sf_installs/<filename> ~/” can be used to copy files to a folder local to the VM.

Bonus info: Once Guest Additions are installed you can also set up clipboard sharing to copy and paste between your machine and the VM. This is done by going to Devices -> Shared Clipboard and choosing an option (such as Bidirectional).

Data Development Environment – Mac

A first step to developing with modern technologies such as Big Data and NoSQL systems is getting your development environment set up. I like to have many of the tools available locally on my laptop so I can feel free to experiment without breaking a shared server or running up a large bill on the cloud platform hosting the machine.  Check out my previous post on setting up on Windows to read a little about what I like about Python and Sublime Text.  For this post, let’s just walk through the tools I found myself installing on my Mac to do Python data development (this excludes Scala setup).

  • Python installed by default – using Python 2.7.10
  • install Homebrew – http://brew.sh/
  • install Developer Tools – on the command line, type git and follow the prompts to install the developer tools (http://www.cnet.com/how-to/install-command-line-developer-tools-in-os-x/)
  • install pip – sudo easy_install pip
  • install Sublime Text 2
  • install PyCharm
  • install several things using Homebrew (type at command line):
    • brew update
    • brew install wget
    • brew install gcc
    • brew install apache-spark
  • pip install virtualenv
    • (then use virtualenv and virtualenvwrapper for most things python)
  • create a virtual environment for data-eng and install
    • pip install pandas
    • pip install requests
    • … (a lot more; I might add some of the heavily used ones to this list later; see the quick sanity check below)
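
As a quick sanity check of the new virtual environment (an illustrative snippet only, assuming pandas and requests were installed as listed above), something like this should run without errors:

  import pandas as pd
  import requests

  # Confirm both libraries import and do something trivial with each.
  print("pandas %s | requests %s" % (pd.__version__, requests.__version__))
  frame = pd.DataFrame({"name": ["alpha", "beta"], "value": [1, 2]})
  print(frame.describe())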

Hopefully that helps you get started. Feel free to leave comments if you hit errors along the way, and if I’ve dealt with similar errors I will give you some tips.

Data Development Environment – Windows

A first step to developing with modern technologies such as Big Data and NoSQL systems is getting your development environment set up. I like to have many of the tools available locally on my laptop so I can feel free to experiment without breaking a shared server or running up a large bill on the cloud platform hosting the machine.  Originally I started on a Windows machine and, being somewhat new to these technologies, tried a few paths that didn’t go very well.  Here is some guidance if you are trying to get going with Python and Hadoop or other open source data platforms using a Windows laptop.

Python 2.7

A very popular programming language for processing data is Python, and much of the ETL we write uses Python for flexibility (compared to SSIS, which relies heavily on knowing the data model). It is simpler than Java and much easier to read, so if maintainability is important (which it should be) then it is a great option. You can use Python 2 or Python 3, but some third-party modules are not compatible with Python 3. Python can be installed directly on Windows, Mac, or Linux. I prefer using it on Linux or Mac because of the other command-line features and the popularity in the developer community.
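
As a small example of that flexibility (an illustration only; the file name here is made up), reading rows as dictionaries means the load code does not have to be rewritten every time a column is added to the source:

  import csv

  # Rows come back as dicts keyed by whatever columns the file has today,
  # so a new column in the source does not break this code.
  with open("source_extract.csv") as f:
      for row in csv.DictReader(f):
          print(row)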

Sublime Text 2

This is the text editor I use to write Python modules and edit JSON files, as well as any other type of text file. It is what others on our data team use, and when I saw it I was impressed with how readable it makes the code. It is not an IDE; it is an awesome text editor. There are other options for text editors (gedit, emacs, vim), but Sublime Text 2 works well. If you want an IDE instead, PyCharm is one that was recommended in the Python Fundamentals course on Pluralsight.

Oracle VM VirtualBox

After trying to get things working properly with Cygwin Terminal and setting up a Linux VM with VMware Player, I found VirtualBox was a highly ranked virtual machine option and had the easiest time setting it up. One big requirement I had was being able to copy and paste from my local machine to the virtual machine, and I had trouble getting that capability set up on my VMware machine.

Linux CentOS 7

Linux works well for developing and running Python code, plus you can install many open source projects on it such as Apache Hadoop. I chose CentOS because of its similarity to Red Hat, which is supported by most databases and open source projects.  I found many examples of installing Python modules and client libraries on Linux, as well as plenty of information on installing Hadoop as a single-node instance. I did not face as many barriers as Cygwin presented, so once I made the jump to Linux I was finally able to focus on the programming instead of the system setup.

So try out this setup, check out my Resources page for ideas of what to learn once the environment is ready, and hit me up with questions if you get stuck.

3 Roles in Analytics

From time to time I’ve been asked about the different roles within a data team.  So for anyone wondering, here are the three roles I hear about most often and my take on what is expected of each, including the technology I most commonly hear associated with each role.  All of these roles play a big part in a good analytics team and do much more than I have taken the time to indicate in this post, but hopefully this is a good overview.
Data Engineer (or Data/ETL Developer)
Role: Build out data systems, get data from various sources (often web APIs, flat files, or databases), transform data, integrate data, and make it available for analysts to use.  This role is more about building the foundation that the other roles pull from all the time, so there is always work to do.  Building out the technology platform and base data structures is a very important step in the analytic process and may involve the most technical programming challenges.  Usually this team picks which type of data system to work with, such as Hadoop, SQL Server, PostgreSQL, or Oracle.
Technology: SQL, Python, Hadoop -> Hive, Spark (using Python or Scala)
Data Scientist
Role: Expected to do a lot with analyzing data, and the role varies by company.  We think of this as someone who has a high level of statistics and math training, is able to build analytic and predictive models, and is able to test an idea against the data and come back knowing whether the hypothesis holds true and with what likelihood of error.  This role is usually focused on analytic projects with significant impact on the company.  Some projects can take quite a while, and a lot of data processing, quality evaluation, cleansing, and normalization takes place along the way.  One of the hardest things to learn in an academic setting is how to know if your model or other results are accurate before the company invests money acting on them, but within a company that is an important characteristic of a data scientist.
Technology: SQL, R, Python (with Pandas or other libraries), Spark MLlib, Mahout, or another machine learning library
Data Analyst (sometimes called Data Scientist now, especially in the Bay Area)
Role: Expected to analyze data with less focus on statistics, often building reports and dashboards for others to use on a recurring basis.  Analysts often partner with business users to help them come up with meaningful metrics or reports, and should be able to quality-check data and find anomalies that would be misleading to management if not explained or cleaned up.  This role may play a part in deciding which reporting and data visualization tools the company uses and often works on getting answers to short-term questions.
Technology: SQL, Tableau, D3.js, Excel