Our client is a breakthrough startup headquartered in Singapore that specializes in secure user authentication using biometric and machine learning technologies. Its solution is designed to help enterprises around the world effectively address two key digital challenges: online fraud and unauthorized access to digital assets.
Solus Connect features an internal Risk Scoring Module that uses sophisticated AI-based prediction algorithms to detect fraudulent behavior among normal authentication attempts. At its core, Solus Connect leverages predictive machine learning that takes into account three main inputs: the user's device, 3D facial attributes, and user authentication behavior.
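To make the idea concrete, here is a minimal sketch of how three such inputs could be combined into a single risk score. The signal names, weights, and formula are illustrative assumptions; the actual Solus Connect model is proprietary and not described in this case study.

```python
# Hypothetical risk-scoring sketch; signals, weights, and the linear
# combination are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AuthSignals:
    device_trust: float    # 0.0 (unknown device) .. 1.0 (trusted device)
    face_match: float      # confidence of the 3D facial-attribute match
    behavior_score: float  # similarity to the user's typical behavior

def risk_score(s: AuthSignals, weights=(0.3, 0.4, 0.3)) -> float:
    """Combine the three inputs into one fraud-risk score in [0, 1].

    Higher means riskier; the weights are illustrative, not the
    client's real model parameters.
    """
    trust = (weights[0] * s.device_trust
             + weights[1] * s.face_match
             + weights[2] * s.behavior_score)
    return round(1.0 - trust, 3)

# A login from a trusted device with a strong face match scores low risk:
print(risk_score(AuthSignals(0.9, 0.95, 0.8)))  # 0.11
```

In a real system a score like this would be compared against a threshold to allow, challenge, or block an authentication attempt.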
Like any machine learning platform, Solus Connect processes an enormous volume of data every second, and it was requesting all of that data from a single database that was growing faster than the database technology of the time could handle. While going through a period of very rapid growth, with the number of customers increasing daily, our client faced another challenge brought on by the increased load on the platform: performance and horizontal scaling.
Our team was entrusted with the mission of designing and implementing a solution that would significantly improve the platform's performance and ensure that the new database architecture could accommodate a virtually unlimited number of datasets.
We understood that in this kind of software the main bottleneck to performance optimization and horizontal scaling lies deep inside the database architecture, so it was clear that no easy tweak would meet our client's expectations. We had to make serious design decisions.
After a quick R&D phase, our team decided to horizontally partition the data in the existing database and redesign it using sharding, a method that splits a single logical dataset into multiple shards (separate databases).
This solution allowed us to spread the load across multiple database nodes and, consequently, support much larger datasets and transaction volumes.
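The core of the approach above is routing each record to a shard by hashing a shard key. The sketch below shows the technique in its simplest form; the shard names and the user-ID key are illustrative assumptions, not the client's actual schema.

```python
# Minimal sketch of hash-based shard routing. The shard list and the
# choice of user ID as the shard key are illustrative assumptions.
import hashlib

SHARDS = ["shard_0", "shard_1", "shard_2", "shard_3"]

def shard_for(user_id: str) -> str:
    """Route a record to a shard by hashing its shard key.

    A stable hash distributes keys roughly evenly, so a single
    logical dataset is split across the shard databases and each
    node carries only a fraction of the load.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same key always maps to the same shard, so reads and writes
# for a given user consistently hit one node:
assert shard_for("user-42") == shard_for("user-42")
```

Note that with simple modulo routing, changing the number of shards remaps most keys; production systems often use consistent hashing or range-based sharding to allow rebalancing with less data movement.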
Within a couple of months of the start of development, our team had completely redesigned the platform's database architecture using the sharding method. This resulted in a dramatic increase in the overall system's performance and reliability.