Amsterdam gave me a taste of everything - I guess this is what this city is about :).
Organizers clearly

  • knew how to please participants (central city location, very good catering, unlimited beers after the talks),
  • knew how to draw our attention (drones, lots of t-shirts and other gadgets to give away),
  • knew how to organize the schedule so everyone could find something for themselves.

A little bit of this and a little bit of that

Staying away from the JavaScript and Docker tracks, I still got to taste a lot of different things.
A little academic knowledge at “A Taste of Random Decision Forests on Apache Spark” by Sean Owen. I did not learn that much about Spark itself, but it was fun to warm up the brain with a little math, graphs, decision trees and formulas before the other talks :).
A little solution insight at “Events storage and analysis with Riak at” by Damien Krotkine and “The Evolution of Hadoop at Spotify - Through Failures and Pain” by Josh Baer and Rafał Wojdyła. More about these later.
A further confirmation that the microservice tracks are still mostly about preaching and general introductions rather than details, solutions and specifics.
Well - maybe not all of them. In “Developing event-driven microservices with event sourcing and CQRS” Chris Richardson did not limit himself to general descriptions (scaling, monoliths vs microservices, event sourcing and snapshots) but also tackled the more specific issues that force many microservice developers to spend sleepless nights. Technical problems like the atomicity of event publishing and state updating, the pros and cons of different event store implementations, or the consequences of different architecture choices, all backed up by real-life code examples - those are the things I would surely like to see more of.

The need for data

Getting back to and Spotify - two presentations describing real-life solutions that especially stuck in my head.
And I remember both of them mostly because of the data volumes they mentioned.
I know that logging and gathering data is important, but I was totally impressed by the volumes: 15K events per second and 100 GB of data per hour to be stored.
It’s a lot to process and a lot to track. Damien reminded us of the importance of visualisation by showing their graphs and dashboards. He also showed that with that volume of data we need to rediscover things we took for granted: JSON may not be efficient enough for communication, events may need to be aggregated before storing, and data distribution runs into network capacity problems.
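The pre-storage aggregation point can be illustrated with a small sketch (my own hypothetical example, not Damien's actual pipeline): instead of persisting every raw event, roll events up into per-minute counters per event type, trading granularity for a large reduction in stored volume.

```python
from collections import defaultdict

def aggregate(events):
    # events: iterable of (timestamp_in_seconds, event_type) pairs.
    buckets = defaultdict(int)
    for ts, etype in events:
        minute = ts - (ts % 60)          # truncate timestamp to the minute
        buckets[(minute, etype)] += 1    # one counter per (minute, type)
    return dict(buckets)

raw = [(0, "click"), (10, "click"), (65, "view"), (70, "click")]
print(aggregate(raw))
# {(0, 'click'): 2, (60, 'view'): 1, (60, 'click'): 1}
```

Four raw events collapse into three stored counters here; at 15K events per second the same idea shrinks millions of rows per hour into a handful of counters per time bucket.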

Events storage and analysis with Riak at


Still under the impression of that presentation, I was totally astonished by Spotify's volume: 400TB of data generated per day!!!
So what can they do with that amount of data? Josh and Rafał were kind enough to explain. For one, they can learn users' behaviour and adapt their services. It’s obvious that Spotify makes song suggestions based on your past choices, but it also learns your habits - that you may prefer hard rock during a morning jog, jazz at lunch time and classical music when lying in bed. They choose commercials to match the mood of the music you are listening to, and with their newest feature they can propose your favourite songs matched to your running pace. The power of data.

The Evolution of Hadoop at Spotify - Through Failures and Pain


Summary

 and Spotify gave away a simple recipe for a good presentation: astonish with data, describe a problem, show the evolution of a real-life solution. I hope to see more presentations like that in the future. Who knows - maybe we will live to see a conference where the general ‘solution’ track is replaced by ‘real-life production solution’ topics. Hope to see it :).