The worldwide 60 billion transactions per day journey

Scale
06/06/2016 - 17:20 to 18:00
Frannz Club
long talk (40 min)
Advanced

Session abstract: 

A case study on how we grew the AudienceScience Helios programmatic advertising management system to more than 1 million transactions per second through our entire data pipeline, including Kafka, Storm, and Hadoop.

The AudienceScience Helios system acts as:

- A real-time bidding client to all major exchanges (more than 70 integrations worldwide)
- A service for direct integration with publishers' websites / ad servers
- An engine for data collection and management of billions of users

The system runs worldwide across 6 global data centers.

We’ll share what we learned along our journey and how we are moving to even greater scale in the Cloud in 2016.

Growing the infrastructure

We will share lessons learned growing Kafka worldwide and moving 120 TB per day across our global network, and how we grew our Hadoop cluster from 16 nodes to a 400-node cluster and then to a 550-node cluster with zero downtime.
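As an illustration of the kind of tuning involved when pushing that much data through Kafka, here is a minimal producer sketch configured for throughput over latency. The broker address, topic name, and specific values are hypothetical placeholders, not our production settings.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HighThroughputProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker list; each data center would point at its local cluster.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-dc1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Throughput-oriented settings: batch aggressively and compress records
        // before they cross expensive wide-area links.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 256 * 1024); // 256 KB batches
        props.put(ProducerConfig.LINGER_MS_CONFIG, 50);          // wait up to 50 ms to fill a batch
        props.put(ProducerConfig.ACKS_CONFIG, "1");              // leader ack only, favoring throughput

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "ad-events" is a placeholder topic name.
            producer.send(new ProducerRecord<>("ad-events", "user-123", "{\"event\":\"impression\"}"));
        }
    }
}
```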

We will outline our lessons learned about scaling Hadoop, focusing on data design and coding as well as hardware, monitoring, and job management. We will show how we process over 6 billion messages a day in Storm, with a 120K TPS peak, on a 60-node cluster.
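To give a feel for what processing those messages in Storm looks like, below is a minimal topology sketch. The spout, bolt, names, and parallelism hints are illustrative placeholders rather than our production topology; in practice a Kafka spout would take the place of the fake spout shown.

```java
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class BidEventTopology {

    // Stand-in spout emitting fake comma-separated events; in production this
    // role would be played by a Kafka spout reading the raw event topic.
    public static class FakeEventSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            collector.emit(new Values("adv-" + (int) (Math.random() * 100) + ",impression"));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("event"));
        }
    }

    // Hypothetical bolt: parses an event line and emits the advertiser id.
    public static class ParseBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            String raw = tuple.getString(0);
            collector.emit(new Values(raw.split(",")[0]));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("advertiserId"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("events", new FakeEventSpout(), 8);
        builder.setBolt("parse", new ParseBolt(), 32).shuffleGrouping("events");

        Config conf = new Config();
        conf.setNumWorkers(16); // spread executors across the worker nodes
        StormSubmitter.submitTopology("bid-events", conf, builder.createTopology());
    }
}
```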

Moving to the Cloud for even greater scale

Next, we will speak to our move to the AWS Cloud to gain the freedom of operational budgets and instant provisioning versus capital expenditure and physical buildouts. We will explain why we moved from Storm to Spark Streaming as a common platform, taking us from a batch-driven model to a true Lambda architecture, and how and why we are moving Hadoop batch processes on fixed hardware to Spark on-demand workflows managed by Qubole in AWS to gain performance efficiencies and directly drive cost savings. Lastly, we will show how we are moving from simple Graphite/Carbon tools to Cassandra (C*) for collecting time-series monitoring data, and then leveraging the same architecture to process real-time bidding feedback from pricing optimization and signal analysis in Spark as part of the overall Lambda architecture.
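As a rough sketch of that kind of speed layer, the following Spark Streaming example consumes a Kafka topic with the direct stream API and aggregates events per micro-batch. The broker, topic, and group names are hypothetical, and the Cassandra write is indicated only as a comment rather than a concrete connector call.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

import scala.Tuple2;

public class BidFeedbackStream {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("bid-feedback");
        // 10-second micro-batches form the speed layer of the Lambda architecture.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        // Placeholder Kafka connection settings.
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "kafka-dc1:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "bid-feedback");
        kafkaParams.put("auto.offset.reset", "latest");
        Collection<String> topics = Arrays.asList("bid-feedback");

        JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(
                        jssc,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

        // Count feedback events per key in each micro-batch; the record key is
        // assumed to carry an advertiser or campaign id.
        JavaPairDStream<String, Long> counts = stream
                .mapToPair(record -> new Tuple2<>(record.key(), 1L))
                .reduceByKey(Long::sum);

        counts.foreachRDD(rdd -> {
            // In the full pipeline these aggregates would be written to Cassandra
            // (e.g. via the spark-cassandra-connector) alongside the time-series
            // monitoring data; printing a sample stands in for that here.
            System.out.println(rdd.take(10));
        });

        jssc.start();
        jssc.awaitTermination();
    }
}
```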
