
Extra! Extra! How we built Pinterest News


This past summer we launched News, a digest of recent activity from the people you follow. News was built by the Growth team in just two months, and it had a significant impact on engagement by helping people discover new content. Here, I outline our strategy, the metrics-driven approach we took to reduce project risk, and the technical challenges we faced.


Motivations for building News

Since Pinterest can be used in many different ways, such as travel planning, discovering recipes and finding products, we wanted to help Pinners discover new ways of using the product by seeing the activity of those in their network. This also allowed us to provide another source of interesting content to engage Pinners and help them discover things they may not have been exposed to otherwise.

When scoping out the project, it became clear it would be a significant engineering investment. It required building the infrastructure to publish, store and rank these events for tens of millions of Pinners, as well as building the feature’s UI on iOS, Android and Web. ROI is key when evaluating projects, so we started by calculating how many Pinners the feature would reach. We estimated about 80 percent of the user base would receive at least one news item a month, which gave us confidence it would reach enough Pinners to make an impact. To mitigate risk, we scoped the project down to an MVP we could use to test the concept before building a fully fledged feature across all platforms. We made two key decisions:

    • Only build out an infrastructure that could scale to 10 percent of users rather than 100 percent
    • Only build out the feature on iOS initially

Building the infrastructure

This project presented a number of scalability challenges. We had to process, store and rank millions of events a day and use that to construct a feed for each individual Pinner. For the backend infrastructure, we leveraged two internal services built here at Pinterest: Zen and PinLater.

Zen is a graph storage service built on top of HBase that allows us to store and query nodes and edges. We chose Zen for a couple of reasons. First, it made it easy to aggregate news items. For instance, if Joe pinned five different Pins, we would want to show that as one news item rather than five separate ones. To achieve this, our schema creates one node per user and one node per news item; if a news item should show up in a user’s feed, we create an edge from that user to the news item. Every time a user takes an action, we check an index keyed on a tuple of (actor_user_id, date, action_type) to see if there is an existing news item. If there is, we simply update the existing news item; otherwise we create a new node for the new item.

We also chose Zen for its ability to retrieve edges ordered by score, which allowed us to rank stories by a combination of chronology and relevance and fetch them in ranked order. This ensured we always show Pinners fresh and interesting content. The scoring algorithm was in part inspired by Reddit’s, although we had to come up with more subjective measures of how interesting a news item is.
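As a rough illustration of the aggregation scheme described above, here is a minimal in-memory sketch. Zen is an internal Pinterest service, so this `GraphStore` class and its methods are hypothetical stand-ins for its node/edge index, not its real API:

```python
import datetime

# Hypothetical in-memory stand-in for Zen's node/edge index; the real
# service is backed by HBase and queried by (actor_user_id, date, action_type).
class GraphStore:
    def __init__(self):
        self.news_items = {}  # (actor_user_id, date, action_type) -> news item node

    def upsert_news_item(self, actor_user_id, action_type, pin_id):
        """Aggregate repeated actions into one news item per (actor, day, type)."""
        key = (actor_user_id, datetime.date.today(), action_type)
        item = self.news_items.get(key)
        if item is None:
            # No existing news item for this actor/day/action: create a node.
            item = {"actor": actor_user_id, "type": action_type, "pin_ids": []}
            self.news_items[key] = item
        # Existing or new, fold this action into the aggregated item.
        item["pin_ids"].append(pin_id)
        return item

store = GraphStore()
store.upsert_news_item(42, "pin", 1001)
item = store.upsert_news_item(42, "pin", 1002)
print(len(store.news_items), item["pin_ids"])  # prints "1 [1001, 1002]"
```

Two pins by the same user on the same day collapse into a single aggregated news item, which is exactly what keeps "Joe pinned five Pins" from appearing as five separate entries.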

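The post doesn't publish the actual scoring formula, only that it was partly inspired by Reddit's. As a hedged sketch, Reddit's "hot" ranking combines a log-damped quality term with a linear recency term; an analogous news score might look like this, where `interestingness` is a hypothetical stand-in for the subjective relevance measure mentioned above:

```python
import math
from datetime import datetime, timezone

EPOCH = datetime(2015, 1, 1, tzinfo=timezone.utc)  # arbitrary reference point

def news_score(interestingness, created_at):
    """Reddit-inspired hot score: log-damped relevance plus a recency term.

    The log damps the relevance signal so that newer items can overtake
    older, more "interesting" ones; the divisor (borrowed from Reddit's
    45000-second constant) controls how fast recency dominates.
    """
    order = math.log10(max(interestingness, 1))
    seconds = (created_at - EPOCH).total_seconds()
    return order + seconds / 45000.0

now = datetime.now(timezone.utc)
old = datetime(2015, 6, 1, tzinfo=timezone.utc)
# A fresh, mildly interesting item outranks a stale, very interesting one.
assert news_score(10, now) > news_score(1000, old)
```

Storing a score like this on each Zen edge is what makes "retrieve edges ordered by score" yield a feed that is both fresh and relevant.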

To write all of these news items to Zen, we used PinLater, a queuing system we’ve built on top of Redis. PinLater allowed us to manage the load on Zen by rate-limiting the queues. It also let us decouple fanning out a news item to a Pinner’s connections from the request that triggered the news item. One tradeoff we made was to delay publishing stories in order to reduce the QPS on Zen: if we published to all of a user’s friends and followers every time that user pinned, we would have easily exceeded 200,000 operations per second on Zen. We realized the feed didn’t need to be real-time, so instead we pre-aggregated news items in memcached for a short period of time, then flushed a single batch of aggregated news items out to the feeds of all the user’s followers once that period had elapsed. PinLater’s ability to schedule jobs to run at a specific time in the future made this pre-aggregation strategy easy to implement.
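The delayed-fanout strategy above can be sketched as follows. PinLater and memcached are the production components; this toy model replaces them with a heap of scheduled jobs and an in-process buffer, and the window length and all names are assumptions for illustration:

```python
import heapq
from collections import defaultdict

AGGREGATION_WINDOW = 300  # seconds to buffer actions before fanout (assumed value)

class DelayedFanout:
    """Toy model of the pre-aggregation strategy: buffer an actor's actions
    (memcached in production) and schedule one fanout job per window
    (a future-scheduled PinLater job in production)."""

    def __init__(self):
        self.buffer = defaultdict(list)  # actor_user_id -> buffered pin ids
        self.jobs = []                   # min-heap of (run_at, actor_user_id)
        self.feeds = defaultdict(list)   # follower_user_id -> news items

    def record_action(self, actor_user_id, pin_id, now):
        first_action = actor_user_id not in self.buffer
        self.buffer[actor_user_id].append(pin_id)
        if first_action:
            # Schedule a single fanout job for this actor's window.
            heapq.heappush(self.jobs, (now + AGGREGATION_WINDOW, actor_user_id))

    def run_due_jobs(self, followers_of, now):
        while self.jobs and self.jobs[0][0] <= now:
            _, actor = heapq.heappop(self.jobs)
            item = {"actor": actor, "pin_ids": self.buffer.pop(actor)}
            # One batched write per follower instead of one write per pin.
            for follower in followers_of(actor):
                self.feeds[follower].append(item)

f = DelayedFanout()
f.record_action(1, 10, now=0)
f.record_action(1, 11, now=120)  # within the window: buffered, no new job
f.run_due_jobs(lambda actor: [2, 3], now=400)
print(f.feeds[2])  # prints "[{'actor': 1, 'pin_ids': [10, 11]}]"
```

Two pins inside the window produce one write per follower rather than two, which is how delaying publication keeps the write rate on Zen well under the 200,000 operations per second a real-time fanout would have required.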


Scaling and shipping

Once we had built the initial infrastructure and the UI on the iOS client, we launched the feature as an experiment to 10 percent of users. The initial results showed it had a strong impact on our engagement metrics.

Having proven the feature on iOS, we quickly moved to build it on Android and Web. We also had to invest in making sure the backend architecture could scale to process 100 percent of the more than 100 million events generated daily by our Pinners. Since the Growth team is made up of full-stack engineers, we were able to rebalance engineering resources from other projects to quickly get News out.

Our small, scrappy team built the initial feature on iOS with just two engineers in less than one month. From there, we scaled to 100 percent of Pinners across all platforms in six weeks with two full-time engineers and one intern.

Stay tuned for more from the Pinterest Growth team. If you’re interested in joining us, we’re hiring Growth Engineers!

John Egan is an engineer on the Growth team.

For Pinterest engineering news and updates, follow our engineering Pinterest, Facebook and Twitter.

