Git Your SQL Together (with a Query Library)

If I could teach SQL to analysts who plan to work in industry data science, I’d start by sharing a few SQL Truths I’ve learned, and why I recommend tracking SQL queries in git. Here goes:

  1. You will *always* need that query again
  2. Queries are living artifacts that change over time
  3. If it’s useful to you, it’s useful to others (and vice versa)

Focusing on these points has led me to adopt a query library on every team I've worked with — a git repository for saving and sharing commonly (and uncommonly) used queries, all while tracking any changes made to those queries over time.

Anyone who is following my personal journey might know that I’m in the midst of building data science infrastructure at a start-up. In addition to setting up a data dictionary, setting up a query library is one of the first things I did to start documenting the institutional knowledge we have about our data.

Let’s talk more about these SQL Truths and why they’ve led me to git my SQL together with query libraries (and how to build one yourself!). 

SQL Truth #1: You will always need that query again.

Have you ever written a query for a “one-off” analysis, deleted it (maybe even on purpose! *shudders*), and lost all memory of how to create said query — just before being asked to re-run or tweak that analysis? Happens to the best of us.


Usually, this is a job for reproducibility. But, even if we take reproducibility seriously (and we do!), it’s easy for queries from “one-off” analyses to slip through the cracks because they can live outside of our normal project workflow. After all, if you’re only doing something once, there’s no need for it to be reproducible, right?

The sooner we can accept that it's never just once, the sooner we can hit CTRL+S, save those queries to a query library, and move on to more interesting problems.

SQL Truth #2: Queries are living artifacts that change over time

Here’s a short list of reasons why your queries might change over time:

  • You’ve become more familiar with your database(s)
  • You’ve gained a deeper understanding of your data
  • You’ve had to add filters and caveats over time
  • You’ve changed the way you calculate a metric
  • You’re answering slightly different questions
  • You’ve started collecting new data
  • You’ve found discrepancies or issues with tables or fields
  • Your business model has changed
  • You’ve gotten better at writing SQL

This list isn’t all-inclusive, but hopefully it gives you an idea of how and why your queries might change. This isn’t inherently good or bad; it just means that you’ll want to capture the way that you’re doing things both as documentation for the future and to ensure that changes or updates are applied as needed to keep things up-to-date.

SQL Truth #3: If it’s useful to you, it’s useful to others

Have you ever asked a coworker where some data lives and had them respond with a beautiful, hand-curated SQL query? It’s the best. You get to focus on the fun analysis or on starting new projects, and as a bonus, your company isn’t paying two people to repeat the same process.

SQL queries contain built-up domain knowledge — the way you filter, join, and aggregate tables reflects what you’ve learned about how the data is collected, stored, and put to practical use. Sharing queries with others is a nice thing to do, and a good way to spread institutional knowledge about your data (as is building a good data dictionary!).

When you start to compile and compare SQL queries, you might find discrepancies in the way that different people pull the same data. (This happened on a team I was on — if you asked three of us to pull all live clients, we’d all do it with slightly different caveats that reflected our individual understanding of the data. Not a bad thing, but no wonder our numbers didn’t always match!) Creating and reviewing a query library is also a good way to get everyone on the same page.

Building your own query library

Let’s focus on how you can build your own query library in a few simple steps:

  1. After writing a query that you’ll want to use again*, save it(!!) as a .sql file with a descriptive name. 

    I tend to name my files based on what the query produces as output, and since naming things is cheap (and I never have to hand-type these file names), I use very descriptive (long) names, like activity_on_mobile_since_june2018_by_client.sql. (*This was a test — every query is a query you might want to use again!)

  2. Create a git repository in a shared location and upload your queries to it. Encourage your team to do the same, and discuss how best to organize the queries with them (one way to keep the library browsable is sketched after these steps).

    This is also a good time for a mini blameless post-mortem of existing queries (especially if they come from different analysts) by looking for discrepancies and using them as an opportunity to level up everyone’s understanding of the data.

  3. Whenever you create a new query, take a few minutes to clean it up so that others can understand how and when they might want to use it, and upload it to the query repository. 

    Whenever you update an existing query locally, make sure to commit those changes to your query library (ideally along with an explanation of why you made them).
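To make step 2 a bit more concrete, here’s a minimal Python sketch (the repo layout, file names, and commenting convention are all assumptions, not a prescribed structure) that builds a README index for a query library by treating the first "--" comment line of each .sql file as its description:

    # index_queries.py: a minimal sketch for indexing a query library.
    # Assumes you run it from the repo root and that each query file starts
    # with a "--" comment describing what the query does.
    from pathlib import Path

    def first_comment(sql_path: Path) -> str:
        """Return the first '--' comment line of a .sql file, if any."""
        for line in sql_path.read_text().splitlines():
            stripped = line.strip()
            if stripped.startswith("--"):
                return stripped.lstrip("-").strip()
            if stripped:  # hit real SQL before any comment
                break
        return "(no description yet)"

    lines = ["Query Library Index", ""]
    for path in sorted(Path(".").rglob("*.sql")):
        lines.append(f"- {path}: {first_comment(path)}")

    Path("README.md").write_text("\n".join(lines) + "\n")

Committing the regenerated README along with new queries keeps the library browsable without any extra tooling.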

Final Notes

The idea of the query library was introduced to me by Tim, who implemented one for our product analytics team while we were both at Web.com. (You might remember Tim from this post on the career “tour of duty” concept.) It was helpful for our team there, and I’ve happily implemented one on every team since.

I hope this post is helpful, and I’d love to hear from you if you have a data infrastructure tool or idea to share (or a question!); please write a comment below or ping me on Twitter. Thanks for reading!

 

Field Notes: Building Data Dictionaries

The scariest ghost stories I know take place when the history of data — how it’s collected, how it’s used, and what it’s meant to represent — becomes an oral history, passed down as campfire stories from one generation of analysts to another like a spooky game of telephone.

These stories include eerie phrases like “I’m not sure where that comes from”, “I think that broke a few years ago and I’m not sure if it was fixed”, and the ever-ominous “the guy who did that left”. When hearing these stories, one can imagine that a written history of the data has never existed — or if it has, it’s overgrown with ivy and tech-debt in an isolated statuary, never to be used again.


The best defense I’ve found against relying on an oral history is creating a written one.

Enter the data dictionary. A data dictionary is a “centralized repository of information about data such as meaning, relationships to other data, origin, usage, and format”, and provides us with a framework to store and share all of the institutional knowledge we have about our data.

As part of my role as a lead data scientist at a start-up, building a data dictionary was one of the first tasks I took on (started during my first week on the job). Learning about our data is a crucial part of onboarding for data-focused roles, and documenting that journey in the form of a data dictionary provides a useful data asset for the company (which helps to preserve institutional knowledge) and simultaneously provides a good resource for analyzing the data. 

My data dictionary is a Google Sheets workbook.


I use one sheet for each database, and the same fields throughout:

  • Table: the table name, exactly the way it appears in the database
  • Table Notes: general notes on the table, like the theme of the data in the table, how often it gets updated, and where it comes from
  • Field: the field name, exactly as it appears in the database
  • Definition: a user-friendly (often long-form) definition of the field
  • Example value: used to show what data in that field actually looks like
  • Field notes: general notes on the field, sometimes including values, caveats or notes of interest, and places (like tables) to find more information about that field

Yours doesn’t have to look like this, and I’ve seen data dictionaries with other fields and structures, but feel free to borrow mine if you’re looking for a format to start with and riff on — it’s worked well for me so far.
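To make the format concrete, here’s what a single entry might look like, written out as a record (every value below is invented for illustration):

    # One hypothetical data-dictionary entry, using the same six fields as
    # the spreadsheet structure above. All values are made up.
    entry = {
        "Table": "orders",
        "Table Notes": "One row per customer order; refreshed nightly from production.",
        "Field": "order_total",
        "Definition": "Total amount charged for the order, in USD, after discounts.",
        "Example value": 27.50,
        "Field notes": "Excludes tax before 2017; see the payments table for refund details.",
    }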

Lessons Learned and Best Practices

I’ve built at least half a dozen data dictionaries for various companies, and through that experience, some personal best practices have shaken out:

1. Start small and iterate

A couple of times while building a data dictionary, I tried to document every piece of data I found while spelunking around my company’s database, which was very painful and not a great use of time. Don’t do this.

Start with the data that’s important and useful to you, and document those fields or tables first. Then, as you incorporate more data from other tables or databases, document those incrementally. Avoid shaving the yak.
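One way to keep that incremental approach honest is to periodically check which fields are still undocumented. Here’s a minimal Python sketch, assuming (and these are assumptions, not part of my actual setup) that the dictionary has been exported to data_dictionary.csv with "Table" and "Field" columns:

    # check_coverage.py: a minimal sketch for flagging undocumented fields.
    # The file name, column names, and the database_fields set below are
    # stand-ins; in practice you'd pull field names from your database's
    # information schema.
    import csv

    database_fields = {
        ("orders", "order_total"),
        ("orders", "created_at"),
        ("clients", "status"),
    }

    with open("data_dictionary.csv", newline="") as f:
        documented = {(row["Table"], row["Field"]) for row in csv.DictReader(f)}

    for table, field in sorted(database_fields - documented):
        print(f"Not yet documented: {table}.{field}")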

2. Answer your own frequently asked questions

Over time, I’ve found myself asking the same questions over and over again about data lineage and usage, so those are the questions I try to answer when building and filling in a data dictionary. Here’s a basic list of questions to consider:

[Screenshot: a basic list of questions to consider about data lineage and usage]

3. Consider who will be using your data dictionary and how

The structure, content, and location of your data dictionary should be very different depending on whether it’s meant for analysts working in the trenches or for business stakeholders trying to understand the data. It could also look different depending on which people or groups will be using it as a reference.

For example, my data dictionaries tend to contain notes pertaining to analyzing the data that others might not need — things I might edit or remove if I were creating a shared cross-functional resource. I’ve also created versions of data dictionaries that are structured as documents rather than spreadsheets, a format that lends itself well to going very in-depth about fields (if you can’t fit that info in a spreadsheet cell) or sharing with less-technical folks alongside a deliverable (like an analysis). These also make a great first draft to be turned into a more shareable version later.

4. Plan for a living document

For a data dictionary to be useful, it has to be kept up-to-date. This is a challenge that straddles both technical and cultural realms.

Technically, to keep a data dictionary up-to-date, it should be straightforward for collaborators to access and update. It’s also helpful to be able to see when updates are made, to track important changes. Culturally, the importance of a data dictionary should be acknowledged, and upkeep should be incentivized. It’s easy to skip documentation if it’s not required or seen as important, and this is how documentation grows stale (and eventually dies when deemed worthless).

I’ll freely admit that I have yet to implement a data dictionary that perfectly addresses both of these challenges, but I’m working on it, and others have made good suggestions on the “how” below that I plan to incorporate in my own work.

Improvements + More Discussion

There are a few things about my own data dictionaries that could be improved. Namely, I plan to make my data dictionary more “internally public” (as something like a GitHub wiki or a Confluence page), and to add it to source/version control (git) to track changes.

Some of my biggest concerns are making sure that others know about, have access to, and can use and update any data dictionaries I’m building. Some of this is cultural and some of this is technical, so I’m doing my best to tackle these concerns as I’m building, and asking others for advice along the way.

For more ideas and best practices around data dictionaries, check out these two Twitter threads which are full of great suggestions. If you’ve built a data dictionary and have some best practices to share, or if you have questions about how to get started, please feel free to chime in on Twitter or as a comment here.

 

A Month in the Life of a Data Scientist

“What does a data scientist actually *do*?”
“What kinds of projects do you usually work on?”
“What does a typical day look like?”

These are questions I get asked a lot both by aspiring data scientists and the folks who want to hire them. My answer, in true data scientist fashion, is usually something along the lines of “it depends” — and it’s true! Most of my work involves juggling multiple projects that might have different stakeholders or touch different parts of the company, and the lifecycles of these projects can vary greatly depending on the complexity involved. In the eight years I’ve been doing applied analytics, no two weeks have looked the same. Furthermore, data science is such a growing and varied field that it’s rare that two data scientists would give the same answer (even at the same company!).

To help others get a feel for the types of projects a data scientist might do, and a bit of the day-to-day work, I used the 1 Second Everyday app to take a series of one-second videos of what my work as a data scientist at an IoT startup looked like during the month of August. Check it out:

For context, my startup installed hardware into retail stores in order to track cardboard displays (the ones you see in CVS, for example, that are stocked with sunscreen or allergy medication) that go in and out of each store. We used sales data from those stores to calculate the incremental sales gained as a result of having these displays up, in addition to tracking other things like the supply chain process, and reported all of this back to the stores and brands whose products were on the displays. Lots of fun data to play with!

August consisted of three main projects for me (with lots of smaller projects thrown in):

  1. Testing of IoT device updates: as an IoT company, we periodically rolled out firmware updates to our hardware, and August was a big update month. We used data to decide which units to update, how to space out the updates, and to monitor them as they were being rolled out. Afterward, we performed more analysis to see whether the updates had made our RSSI signals stronger than before.
  2. Improving our data pipeline: as a startup, we were constantly working to improve our data pipeline — this meant incorporating new data, QA-ing our data inputs and pipeline outputs, chasing down bugs, updating to account for new logical cases and products, and building better documentation to describe what various pieces of the data pipeline were doing. As you can see, I used lots of data visualizations along the way to help us diagnose and improve the pipeline.
  3. Professional development: I was fortunate enough to attend the JupyterCon tutorials and conference in August in NYC (check out my recap here). For me, conferences serve as a place to learn, meet great people, and get inspired by all of the cool things that folks are doing. Also, it’s a lot of fun to do a local R-Ladies dinner, and I had a great time hanging out with NYC R-Ladies.

It’s worth noting again that had I taken these videos during a different month, the set of projects I was working on would have been very different. (For example, one month was focused heavily on a classifier algorithm, and another on creating and evaluating new metrics and ways of matching test and control stores.)

It’s hard to distill the variance of a data scientist’s job into a single video (or set of videos), but I hope this helps to give some insight into the types of projects a data scientist might be tasked with. If you’re interested in reading more about what data scientists really do, I highly recommend Hugo Bowne-Anderson’s HBR article, which is the result of his conversations with 35+ data scientists on the excellent DataFramed podcast (which I also recommend!). One of the coolest things about being a data scientist right now is how much the work can vary day-to-day and week-to-week (even at the same job!) — there’s always more to learn and something new to try.

**

PS: Here’s a full description of each snippet, in case you’re curious:

  1. Reviewing a design for a test of our hardware to be run in stores.
  2. Working through my daily to-do list. This one includes incorporating and QA-ing a new set of data into our workflow.
  3. Getting the word out about a panel that a few fellow data scientists and I are pitching for SXSW.
  4. Visualizing test results with violin plots(!). A great way to combine and display the distributions of signal strength from a test.
  5. Updating SQL case statements in our data ETL pipeline to account for a new case.
  6. Writing pseudo-code documentation for a classifier so that others can understand the data that goes into it and the logic behind it, and can explain it in simpler terms to customers.
  7. A quick shot of a “lab” we use to test equipment before it goes in the field. This was a test-heavy month.
  8. This is the face I make approximately a dozen times per day when I’m questioning something I see in the data. I’m getting wrinkles from making this face.
  9. This was a SQL-heavy month, since we were spending lots of time QA-ing our data.
  10. Using Jupyter to spin up some quick exploratory data visualizations to answer a question my team had.
  11. Playing with a cool way to visualize the impact of missing data at JupyterCon.
  12. Dinner with R-Ladies NYC! I’ve had a lot of fun meeting R-Ladies when visiting various cities, and this was no exception — it’s nice to have a built-in group of people to hang out with in cities all around the world.
  13. Swag from the Netflix booth at JupyterCon — read about all of the cool things they’re doing in my JupyterCon recap.
  14. Building and visualizing an ad-hoc analysis requested by a client.
  15. After making some changes to our data pipeline, monitoring the DAGs in Apache Airflow to make sure everything updates smoothly.
  16. More data visualization while bug-hunting to spot the source of an error in our data pipeline.

The Coolest Things I Learned at JupyterCon

I’m freshly back from JupyterCon in NY and still feeling the bubbly optimism that comes with bringing all you’ve learned at a conference back to your office. In that spirit, I wanted to share some of the coolest and most interesting things I learned with you all.

One quick note before we dive in: I was able to attend JupyterCon thanks to a very generous scholarship awarded jointly by JupyterCon and Capital One. I would not have been able to attend otherwise, and I’m very grateful for their commitment to diversity and inclusion in the tech community. A big thank you to both groups!

In no particular order, here are some of the most interesting things I learned at JupyterCon:

Jupyter Notebooks, Generally

  • You can add a table of contents to a notebook(!) using nbextensions. (h/t Catherine Ordun)
  • You can parameterize notebooks, create notebook templates for analysis, and schedule them to run automatically with Papermill. (h/t Matthew Seal)
  • There are a few cons to teaching and learning with Jupyter notebooks that are worth knowing and acknowledging. Joel Grus’s ‘I Don’t Like Notebooks.’ was a cautionary tale on the use of Jupyter notebooks for teaching, and while I don’t agree with all of his points, I do think it’s worth the time to go through his deck.
[Screenshot: a table of contents added to a notebook via nbextensions]

Notebooks in Production (!)

  • Netflix is going all-in on notebooks in production by migrating over 10k workflows to notebooks and using them as a way to bridge the chasm between technical and non-technical users. (h/t Michelle Ufford)
  • Notebooks are, in essence, “managed JSON documents with a simple interface to execute code within”. Netflix is putting notebooks into production by combining the JSON properties of notebooks with the open-source library Papermill. (h/t Matthew Seal)
  • On debugging: by running a notebook on a notebook server against the same image, you can fix issues without needing to mock the execution environment or code, allowing you to debug locally. (h/t Matthew Seal)
  • On testing: templated notebooks are easy to test with Papermill — just have a scheduler run the templated notebook with parameters like those a user would input to hydrate it, and look for errors (a minimal sketch of this pattern follows after this list). (h/t Matthew Seal)
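For anyone who hasn’t seen Papermill in action, here’s a minimal sketch of the parameterize-and-run pattern described above; the notebook paths and parameter names are hypothetical:

    # A minimal Papermill sketch. The paths and parameters are hypothetical;
    # the template notebook needs a cell tagged "parameters" for Papermill
    # to inject these values into.
    import papermill as pm

    pm.execute_notebook(
        "templates/weekly_report.ipynb",        # input: templated notebook
        "runs/weekly_report_2018-08-27.ipynb",  # output: executed copy
        parameters={"start_date": "2018-08-20", "end_date": "2018-08-27"},
    )

A scheduler can run this on a cadence and surface any execution errors, which is essentially the testing approach described above.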


Data Science in Jupyter Notebooks

  • One of my favorite new-to-me ideas is to build your own internal Kaggle-style leaderboard to make iterating on and judging the performance of internal models more fun (and to provide an incentive to track them better!). (h/t Catherine Ordun)
  • In graph/network analysis, you can connect nodes using multiple edges (characteristics) by using a multigraph (see the sketch after this list). (h/t Noemi Derzsy’s great tutorial on graph/network analysis, which I learned a lot from)
  • There is a ton of research out there around visualization, including on human perception, that can and should be leveraged for creating impactful data visualizations. I highly recommend Bruno Gonçalves’s slide deck as a tour de force of what we know about perception and how to apply it to data.
  • For a very cool use of Jupyter notebook widgets that lets you “see” the impact missing data can have on an analysis (in this case, a linear regression), check out the interactive-plot notebook from Matthew Brems’s missing data repo, which also contains reading materials and a great slide deck.
  • I finally figured out how all of the parts of a matplotlib figure go together thanks to this nifty visualization from the matplotlib documentation. (h/t Bruno Gonçalves)
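Here’s a toy sketch of the multigraph idea above, using networkx (the node names and edge types are made up, just to show parallel edges):

    # A toy multigraph: the same pair of nodes connected by multiple edges,
    # one per relationship type. Node names and edge labels are made up.
    import networkx as nx

    G = nx.MultiGraph()
    G.add_edge("sensor_a", "sensor_b", kind="same_store")
    G.add_edge("sensor_a", "sensor_b", kind="same_firmware")  # parallel edge

    print(G.number_of_edges("sensor_a", "sensor_b"))  # prints 2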
(Netflix definitely won the prize for best conference swag.)

… So yeah, I learned a lot of really cool things from some very talented people at JupyterCon this year. I’m excited to build new data products, apply network/graph analysis to IoT data, play with widgets, and maybe put a notebook or two into production.

If you’re doing cool things with notebooks at your company, I’d ❤ to hear about them. Feel free to leave a comment here or ping me on Twitter.

[Title image credit: Jason Williams]

Better Allies

During my 7+ years in tech, I’ve encountered sexism, but I’ve also had the good fortune to work with some really great guys. These guys cared about me and my work, and lots of the small things they did to help me (sometimes without realizing they were doing anything!) added up to big things for my career over time.

In hopes that others can follow their example, I’d like to share some of the things that “good guys in tech” have done to help my career:

Sharing positive feedback (with my boss)
Over the years, some of my consulting clients made an effort to praise my work *in front of my boss*, which directly helped me to get promoted.

Each promotion has meant more experience, more skills, and higher compensation — in addition to helping to set me up for the next step(s) in my career.

Recommending me
I got my last job because a guy on my future team recommended that his boss take a look at my resume. My resume had been screened out by their recruiting process, so I know that I literally would not have gotten the interview if it weren’t for his recommendation. It’s hard to overstate how important that recommendation was for my career.

Similarly, many former coworkers and clients have also written formal recommendations for me on LinkedIn, which helps to build trust in my work and establish credibility beyond a single job or boss.

Sharing salary information
I had a male coworker who started in the same position as me, was promoted to a manager role, then switched to a new company. Before he left, he shared what he had been paid in my position, after his promotion, and at his new company.

This helped me to navigate salary negotiations (both internally and externally), to ensure that I was being paid fairly, and to evaluate new job offers. Knowledge is power when it comes to salaries, and a little bit of “extra” information can go a long way.

Sponsoring me
My last company had a highly selective quarterly awards program. My boss put time and effort into my nomination, then went to his boss to ask that he throw his weight behind the nomination as well to give me a better shot at getting the award. I got the award, and I’m positive that having the extra backing had an impact.

This is the difference between mentorship and sponsorship — a mentor might help you gain the skills to earn an award, but a sponsor will nominate you, then go to bat to personally advocate for you. A sponsor has skin in the game. Women are over-mentored and under-sponsored, and we need people with social and political capital to promote us and help us to advance.

Asking about parental leave policies publicly (and lobbying for better ones!)
During an HR / benefits session, a male coworker asked about parental leave so that I wouldn’t have to. It can be quite awkward to have people assume that you’re pregnant (or soon to be) if you’re talking about parental leave policies, and this saved me from having those uncomfortable conversations.

The same guy also wrote a letter to the CEO citing his experiences and how a flexible schedule helped him and his family during pregnancy and beyond. Parental leave helps everyone!

Promoting my work (even Twitter helps!)
When someone references things I’ve written or retweets the things I’m working on, it helps to amplify my message and build my network.

This also provides potential for new opportunities and conversations with people I may not have reached otherwise. (Plus, it never hurts to have people supporting you and talking about your work!)

Supporting women-focused groups
Before launching R-Ladies Austin, we had several men reach out to see how they could help us grow. They offered time, training materials, books for raffles, advice, meeting places, sponsorships, speaking opportunities, and more. This helped a lot as we were getting established.

Similarly, our local Austin R User Group goes out of their way to promote R-Ladies events without me even asking. This helps us expand and reach new members, and it makes us feel supported and welcome in the tech community.

Empathizing (and humor doesn’t hurt!)
I’ve been lucky to have male coworkers who at least try to “get it” when it comes to gender in tech, and who have had my back during tough moments. Some of my favorite coworkers have made me laugh out loud after I’d been frustrated by something casually sexist that a client said. That stuff is the worst, and a little bit of empathy and humor can go a long way.

Working to improve gender ratio at tech events
Quarterly, our R-Ladies group teams up with the larger user group for a joint meetup, and recently we asked for volunteers to give lightning talks and ended up with more speakers than slots (a great problem to have for an organizer).

I was prepared to step aside and give my slot to another woman in our group. Instead, the male organizer chose to give up his slot so that the event would have a better women-to-men ratio among speakers. To me, this small action speaks volumes about the type of inclusive tech community we’re working to build here in Austin.

Being a 50/50 partner at home
My husband has picked up the domestic “slack” while I study, organize meetups, attend workshops, and travel to conferences (among other personal pursuits). I’m all about being a hashtag-independent-woman, but the dogs still need to go out even if I’m doing back-to-back events after work.

The balance of responsibilities shifts from week to week, but in the end, we’re partners and teamwork is what makes it all work. The fact that my husband supports my career and is happy to help out at home makes more things possible.

Asking what men can do (better)
A former boss tries his hardest to promote women in tech, and one of the things he’s done best is ask for specific ways that he can be most helpful. This has led to lots of productive conversations around things like hiring and mentoring.

The act of asking also lets me know that he’s open to feedback and questions. This list started in large part because he’s constantly asking for concrete, actionable ways that he can help women in tech, and I appreciate that.

**

But wait, there’s more!
All of the above are things that good guys have done for me personally. I asked my network on Twitter whether they had any personal experiences and got lots of feedback. Here are more great (concrete and actionable) ways that men in tech have helped women in my personal network:

  • Helping brainstorm talk proposals (via Stephanie)
  • Offering training to help women build their skillsets (via Susan)
  • Literally just saying “well done”, “that was great”, “I’m impressed”, etc. (via Alexis)
  • Fixing a salary discrepancy after investigating and realizing a woman is underpaid (via Angela)
  • Letting women know that you have confidence in them (via Mara)
  • Supporting women while they are out on maternity leave (via Elana)
  • Asking point-blank, “What is holding you back?” and helping (via Alison)

One more thing
Ladies, if there’s a concrete action that someone has taken to help your career, I’d love to hear about it in the comments or on Twitter [I’ll update this post with any new suggestions].

Also, it’s worth mentioning that none of the above actions are gender-specific — plenty of women have helped my career as well (in similar and different ways) — or specific to tech. Anyone can make a big difference in someone’s career — these are just a few ways that some good guys have helped mine, and I hope they help illustrate small, concrete ways that we can all be better allies to one another.