Over the last couple of months, our team has been working on a massive data structure and process overhaul. The main task was to replace the last remaining real-time data sources. Why is that a big deal? Let's dive in.
Real-time vs. persisted data sources
Some of the data we retrieve from social networks required a real-time API call. That solution worked quite well, but unfortunately, due to the volume of data, it was slow, we couldn't pass the data through our Data Layer in real time, and building custom metrics was complicated by the raw JSON response format. It was especially problematic whenever there was an issue with information delivery: because of the real-time nature, it was tough to trace the origin of the problem. We decided to move the remaining real-time data sources into our Data Layer infrastructure, with the help of our import workers, effectively turning them into persisted data. Persisted what?
To keep it short, this type of information is now imported by our workers, processed by our Data Layer engine, and stored in our database, making it easily accessible and queryable.
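To make the contrast with the old real-time calls concrete, here is a minimal sketch of what such an import worker could look like. Everything in it is a hypothetical stand-in for illustration, not our actual implementation: the endpoint, the profiles/fan_count response fields, and the fan_counts table are assumptions.

```python
import sqlite3
from datetime import datetime, timezone

import requests  # assumed HTTP client; any equivalent works

DB = sqlite3.connect("metrics.db")
DB.execute(
    """CREATE TABLE IF NOT EXISTS fan_counts (
           profile_id TEXT NOT NULL,
           time       TEXT NOT NULL,   -- UTC timestamp of the import run
           fans       INTEGER NOT NULL,
           PRIMARY KEY (profile_id, time)
       )"""
)

def import_fan_counts(api_url: str) -> None:
    """One import-worker run: fetch, sanitize, persist."""
    response = requests.get(api_url, timeout=30)
    response.raise_for_status()
    payload = response.json()

    now = datetime.now(timezone.utc).isoformat()
    rows = []
    for record in payload.get("profiles", []):  # hypothetical response shape
        # Basic quality checks before anything reaches the database.
        profile_id = record.get("id")
        fans = record.get("fan_count")
        if profile_id is None or not isinstance(fans, int) or fans < 0:
            continue  # drop malformed records instead of serving them
        rows.append((profile_id, now, fans))

    with DB:  # one transaction per import run
        DB.executemany(
            "INSERT OR REPLACE INTO fan_counts VALUES (?, ?, ?)", rows
        )
```

The point is the shape of the flow: fetch once, sanitize before anything touches the database, and persist, so that every later query hits clean, uniform rows instead of a live API.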
Data Layer
One of the pillars of quintly, the Data Layer, is more than just a database storing information. It's a sophisticated and constantly evolving data curation engine that ensures the information served to our clients is as accurate and high-quality as possible. By moving all of the remaining data sources into our Data Layer, we have achieved:
- Robustness: because every source is structured the same way, you are free to combine data sources however you want, with ease
- Performance: no more ad hoc parsers; we query already sanitized data from a single source (see the sketch right after this list)
- Transparency: since we control the data on our end, we can track the whole journey and report on it better
- Data quality and consistency: as mentioned above, every data source undergoes the same treatment from the Data Layer, which performs various data quality checks and improvements
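Continuing the same hypothetical schema from the sketch above, here is what the robustness and performance points look like on the querying side once everything lives in one persisted, uniformly structured store:

```python
import sqlite3

DB = sqlite3.connect("metrics.db")

# Every source lands in the same structure, so combining and slicing the
# data is a plain query over already-sanitized rows -- no per-network
# JSON parsing at read time.
rows = DB.execute(
    """SELECT profile_id, time, fans
       FROM fan_counts
       ORDER BY time DESC, fans DESC
       LIMIT 10"""
).fetchall()

for profile_id, time, fans in rows:
    print(f"{profile_id}: {fans} fans as of {time}")
```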
This iteration is also a huge deal for us internally. It will significantly reduce the time needed for maintenance, bug tracking, and, most importantly, implementing new data sources.
It is hard to put into words the amount of data quality and maintenance work involved. Still, this improvement is significant: it helps us continue providing the most sanitized data for all your analytical needs. Isn't that what we are all here for anyway?
Join the conversation. Leave us a comment below!