Over the last couple of months, our team has been working on a massive data structure and process overhaul. The main task was to replace the last remaining real-time data sources. Why is that a big deal? Let's dive in.
For some of the data we retrieve from social networks, a real-time API call was required. That solution worked reasonably well, but due to the volume of data it was slow, we couldn't pass the data through our Data Layer in real time, and building custom metrics was complicated by the raw JSON response format. It was especially problematic when something went wrong with data delivery: because of the real-time nature, tracing an issue back to its origin was tough. We have decided to move the remaining real-time data sources into our Data Layer infrastructure with the help of our import workers, effectively turning them into persisted data.
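To make the old pain points concrete, here is a minimal sketch of what computing a metric on top of a real-time JSON response looks like. The endpoint, field names, and metric are hypothetical stand-ins for illustration, not our actual API:

```python
import json
from urllib.request import urlopen

# Hypothetical real-time endpoint; every dashboard request triggered
# a live call like this, so latency grew with the volume of data.
SOCIAL_API_URL = "https://api.example.com/v1/posts?profile=acme"

def interactions_per_post():
    # Fetch the raw JSON at request time -- slow under load, and the
    # data never passes through the Data Layer on its way to the client.
    with urlopen(SOCIAL_API_URL) as response:
        posts = json.load(response)["data"]

    # Custom metrics meant picking apart the response format by hand,
    # and a malformed payload was hard to trace back to its origin.
    total = sum(p["likes"] + p["comments"] + p["shares"] for p in posts)
    return total / len(posts) if posts else 0.0
```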
Persisted what? To keep it short, this type of information is now imported via our workers, treated by our Data Layer engine, and stored in our database, where it becomes easily accessible and queryable.
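As a rough sketch of the persisted flow, assuming a simplified schema and curation step that stand in for the real ones: the worker fetches and curates the payload once, and every later read is a plain database query instead of a live API call.

```python
import sqlite3

def import_worker(db, raw_posts):
    """Hypothetical import worker: curate the raw payload once and
    persist it, so later reads never touch the real-time API."""
    curated = [
        (p["id"], p["published_at"], p["likes"] + p["comments"] + p["shares"])
        for p in raw_posts
        if "id" in p  # stand-in for the Data Layer's real curation rules
    ]
    db.executemany(
        "INSERT OR REPLACE INTO posts (id, published_at, interactions)"
        " VALUES (?, ?, ?)",
        curated,
    )
    db.commit()

# Once persisted, a metric becomes a simple query instead of JSON wrangling.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE posts (id TEXT PRIMARY KEY, published_at TEXT,"
    " interactions INTEGER)"
)
import_worker(db, [{"id": "1", "published_at": "2021-05-01",
                    "likes": 10, "comments": 2, "shares": 1}])
print(db.execute("SELECT AVG(interactions) FROM posts").fetchone()[0])  # 13.0
```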
Our Data Layer, one of the pillars of quintly, is more than just a database storing information. It's a sophisticated and constantly evolving data curation engine that ensures the information served to our clients is as accurate and high-quality as possible. Thanks to moving all of the remaining data sources into our Data Layer, we have achieved faster response times, simpler custom metric building, and far easier debugging when something goes wrong with data delivery.
This iteration is also a huge deal for us internally: it will significantly reduce the time needed for maintenance, bug tracking, and, most importantly, the implementation of new data sources.
Data quality and maintenance work is hard to write about. Still, this improvement is significant, as it helps us keep providing the most sanitized data for all your analytical needs. Isn't that what we are all here for anyway?